WorldWideScience

Sample records for high sensitive analysis

  1. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was used to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results show that, in the case of the EK-10 fuel (low burn-up), the first-order sensitivity was sufficient to achieve an accuracy of 1%, while in the case of the MTR-20 fuel (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.
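The core idea, build-up sensitivity through successively higher perturbation orders, can be sketched with a toy single-nuclide depletion model (a hypothetical stand-in for the actual actinide chains; the exponential model, cross-section values, and fluences below are illustrative assumptions, not data from the paper). Low fluence is captured well at first order, while high fluence needs higher orders, mirroring the EK-10 versus MTR-20 finding.

```python
import math

def depleted_fraction(sigma, fluence):
    # single-nuclide depletion: N/N0 = exp(-sigma * fluence)
    return math.exp(-sigma * fluence)

def taylor_estimate(sigma, fluence, dsigma, order):
    # k-th order Taylor expansion around sigma, using
    # d^k/dsigma^k exp(-sigma*fluence) = (-fluence)^k exp(-sigma*fluence)
    base = depleted_fraction(sigma, fluence)
    return base * sum((-fluence * dsigma) ** k / math.factorial(k)
                      for k in range(order + 1))

sigma, dsigma = 2.0, 0.5              # a 25% cross-section perturbation
for fluence, label in [(0.1, "low burn-up"), (2.0, "high burn-up")]:
    exact = depleted_fraction(sigma + dsigma, fluence)
    for order in (1, 5):
        err = abs(taylor_estimate(sigma, fluence, dsigma, order) - exact) / exact
        print(f"{label}: order {order}, relative error {err:.1e}")
```

At low fluence the first-order estimate is already within 1%; at high fluence it is badly off and the fifth-order expansion restores sub-percent accuracy.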

  2. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated with the first method. The first step of the proposed strategy ranks sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems.
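The variance-reduction ingredient of the second method, coupling the nominal and perturbed simulations, can be illustrated with a minimal sketch using common random numbers (a toy Bernoulli-count "network", not the authors' coupling construction; the parameter values are arbitrary). The coupled finite-difference gradient estimator has markedly lower variance than one built from independent runs.

```python
import random, statistics

def count_events(theta, uniforms):
    # toy stochastic model: each of the attempts fires with probability theta
    return sum(u < theta for u in uniforms)

def fd_gradient(theta, h, n, coupled, rng):
    # finite-difference estimate of d E[X]/d theta from two stochastic runs;
    # "coupled" reuses the same random numbers for the perturbed run
    u_base = [rng.random() for _ in range(n)]
    u_pert = u_base if coupled else [rng.random() for _ in range(n)]
    return (count_events(theta + h, u_pert) - count_events(theta, u_base)) / h

rng = random.Random(0)
theta, h, n = 0.3, 0.05, 200          # true gradient: d(n*theta)/d(theta) = 200
coupled_est = [fd_gradient(theta, h, n, True, rng) for _ in range(500)]
indep_est = [fd_gradient(theta, h, n, False, rng) for _ in range(500)]
print("coupled variance:    ", statistics.variance(coupled_est))
print("independent variance:", statistics.variance(indep_est))
```

Both estimators are unbiased for the same gradient, but the coupled one needs far fewer replicates for a usable estimate, which is what makes the second screening step affordable.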

  3. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated with the first method. The first step of the proposed strategy ranks sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems.

  4. High order effects in cross section sensitivity analysis

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high-order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single-resonance model. Results obtained for each of the resolved, and for representative unresolved, resonances of ²³⁸U in a ZPR-6/7-like environment indicate that SFSE can contribute significantly to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to first-order sensitivity theory and to the direct substitution method is investigated in Part II. The investigation addresses the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. The SOST is found to provide satisfactory accuracy for cross-section uncertainty analysis, and for the same degree of accuracy it can be significantly more efficient than the direct substitution method.
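The non-linearity that motivates SOST can be seen in a toy deep-penetration attenuation model (an illustrative exponential stand-in; the slab depth and perturbation size are assumptions, not the paper's data). A second-order Taylor correction recovers most of the error the first-order estimate makes relative to direct substitution.

```python
import math

d = 10.0                     # slab thickness in mean free paths (deep penetration)

def flux(sigma):
    # toy attenuation model for the transmitted flux
    return math.exp(-sigma * d)

sigma0, dsigma = 1.0, 0.03   # a 3% change in the cross section
exact = flux(sigma0 + dsigma)                # "direct substitution": rerun the model
f0 = flux(sigma0)
first = f0 * (1.0 - d * dsigma)                              # first-order theory
second = f0 * (1.0 - d * dsigma + 0.5 * (d * dsigma) ** 2)   # second-order (SOST)
err_first = abs(first - exact) / exact
err_second = abs(second - exact) / exact
print(f"1st order: {err_first:.1%}   2nd order: {err_second:.1%}")
```

Because the exponent d·Δσ is not small in deep-penetration problems, even a 3% cross-section change produces a visibly non-linear response, and the second-order term matters.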

  5. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tools used to inform decision makers about the current or future management of environmental resources under climate and environmental change. These models often contain a large number of parameters that must be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only aids in understanding model behavior but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing the computational burden when applied to systems with a very large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important versus unimportant input factors.
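The variogram idea behind VARS can be sketched as follows (a toy two-parameter response surface; the lags, sample sizes, and scoring below are illustrative, not the published VARS algorithm). A directional variogram accumulated over several lags ranks an influential factor far above an unimportant one.

```python
import math, random

def model(x1, x2):
    # toy response surface: x1 drives the response far more than x2
    return 5.0 * math.sin(3.0 * x1) + x2

def directional_variogram(dim, h, n_pairs, rng):
    # gamma_dim(h) = 0.5 * E[(y(x + h*e_dim) - y(x))^2] over the unit square
    acc = 0.0
    for _ in range(n_pairs):
        x = [rng.random() * (1.0 - h), rng.random() * (1.0 - h)]
        xp = list(x)
        xp[dim] += h
        acc += 0.5 * (model(*xp) - model(*x)) ** 2
    return acc / n_pairs

rng = random.Random(1)
scores = {}
for dim, name in [(0, "x1"), (1, "x2")]:
    # IVARS-style score: accumulate the variogram over a range of lags
    scores[name] = sum(directional_variogram(dim, h, 2000, rng)
                       for h in (0.1, 0.2, 0.3))
print(scores)
```

Each variogram evaluation reuses pairs of nearby model runs, which is where the method's efficiency over brute-force variance decomposition comes from.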

  6. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  7. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system through simulations of a benchmark rotorcraft test model, with solutions compared against established analyses and experimental data. The complex-variable implementation of the sensitivity analysis of DYMORE and of the coupled FUN3D/DYMORE system is verified by comparison with real-valued analyses and sensitivities. The correctness of the adjoint formulations for the FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and the FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
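The complex-variable approach mentioned above is the complex-step derivative, which can be shown in a few lines (the response function here is a hypothetical smooth stand-in, not a FUN3D/DYMORE output):

```python
import cmath

def lift_response(alpha):
    # smooth stand-in for a coupled-solver output (hypothetical, not FUN3D/DYMORE)
    return cmath.exp(alpha) * cmath.sin(alpha)

def complex_step_derivative(f, x, h=1e-30):
    # complex-variable ("complex-step") derivative: f'(x) ≈ Im[f(x + ih)] / h;
    # there is no subtractive cancellation, so h can be made tiny
    # and the result is accurate to machine precision
    return f(complex(x, h)).imag / h

x = 0.7
numeric = complex_step_derivative(lift_response, x)
analytic = (cmath.exp(x) * (cmath.sin(x) + cmath.cos(x))).real
print(numeric, analytic)
```

This is why the method is attractive for verifying adjoint sensitivities: unlike a real-valued finite difference, its accuracy does not degrade as the step shrinks, so it provides near-exact reference derivatives.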

  8. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    Science.gov (United States)

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  9. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  10. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution-based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in the distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, its computational cost prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model that integrates the concepts of analysis-of-variance decomposition, extended bases, and the homotopy algorithm. By integrating PCFE into DSA, the computational burden is considerably alleviated. Three examples are presented to demonstrate the performance of the proposed approach; for all of them, it yields excellent results with significantly reduced computational effort. The results indicate that, to some extent, the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • The approach integrates PCFE within distribution-based sensitivity analysis. • The approach is highly efficient.
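A distribution-based index in the spirit of DSA can be sketched by measuring how far the output distribution shifts when one input is conditioned (a toy linear model and a simple histogram L1 distance; this is not the PCFE construction from the paper, and the model and slicing scheme are assumptions):

```python
import random

def model(x1, x2):
    return 4.0 * x1 + x2      # cheap stand-in; PCFE would replace the true model

def histogram(ys, lo, hi, bins):
    counts = [0] * bins
    for y in ys:
        k = min(int((y - lo) / (hi - lo) * bins), bins - 1)
        counts[k] += 1
    return [c / len(ys) for c in counts]

rng = random.Random(2)
n, bins = 20000, 20
xs = [(rng.random(), rng.random()) for _ in range(n)]
ys = [model(a, b) for a, b in xs]
lo, hi = min(ys), max(ys)
base = histogram(ys, lo, hi, bins)

def delta(index):
    # average L1 distance between the unconditional and conditional output
    # histograms, conditioning X_index into 10 equal-probability slices
    total = 0.0
    for s in range(10):
        sel = [y for (x, y) in zip(xs, ys) if s / 10 <= x[index] < (s + 1) / 10]
        cond = histogram(sel, lo, hi, bins)
        total += 0.5 * sum(abs(p - q) for p, q in zip(base, cond))
    return total / 10

print("delta(x1) =", round(delta(0), 3), " delta(x2) =", round(delta(1), 3))
```

Fixing the influential input reshapes the whole output distribution, not just its variance, which is the sense in which distribution-based indices are more informative than variance-based ones.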

  11. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
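The bounding factor and the associated E-value reported by the authors have closed forms that are easy to compute (the numeric risk ratio below is an arbitrary example value, not one from the paper):

```python
import math

def bounding_factor(rr_ud, rr_uy):
    # joint bounding factor: the maximum factor by which unmeasured confounding
    # with risk ratios rr_ud (confounder-exposure) and rr_uy (confounder-outcome)
    # can alter an observed risk ratio
    return rr_ud * rr_uy / (rr_ud + rr_uy - 1.0)

def e_value(rr):
    # minimum strength that BOTH confounding associations must reach, on the
    # risk-ratio scale, to fully explain away an observed risk ratio rr >= 1
    return rr + math.sqrt(rr * (rr - 1.0))

print("E-value for RR = 3.9:", e_value(3.9))
print("RR(U,D) = RR(U,Y) = 2 shifts the estimate by at most a factor of",
      bounding_factor(2.0, 2.0))
```

Only the two sensitivity parameters enter; no assumption about the number or type of confounders is needed, which is the point of the abstract.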

  12. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.

  13. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  14. Resonance analysis of a high temperature piezoelectric disc for sensitivity characterization.

    Science.gov (United States)

    Bilgunde, Prathamesh N; Bond, Leonard J

    2018-07-01

    Ultrasonic transducers for high temperature (200 °C+) applications are a key enabling technology for advanced nuclear power systems and for a range of chemical and petro-chemical industries. Design, fabrication and optimization of such transducers using piezoelectric materials remains a challenge. In this work, an experimental-data-based analysis is performed to investigate the fundamental causal factors for the resonance characteristics of a piezoelectric disc at elevated temperatures. The effect of all ten temperature-dependent piezoelectric constants (ε33, ε11, d33, d31, d15, s11, s12, s13, s33, s44) is studied numerically for both the radial and thickness mode resonances of a piezoelectric disc. A sensitivity index is defined to quantify the effect of each temperature-dependent coefficient on the resonance modes of the modified lead zirconate titanate disc. The temperature dependence of s33 showed the highest sensitivity towards the thickness resonance mode, followed by ε33, s11, s13, s12, d31, d33, s44, ε11 and d15 in decreasing order of sensitivity index. For the radial resonance modes, the temperature dependence of ε33 showed the highest sensitivity index, followed by the s11, s12 and d31 coefficients. This numerical study demonstrates that the magnitude of d33 is not the sole factor affecting the resonance characteristics of the piezoelectric disc at high temperatures; rather, a complex interplay between the various temperature-dependent piezoelectric coefficients causes the reduction in thickness mode resonance frequencies, which is found to be in agreement with experimental data at elevated temperature.
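The normalized sensitivity index used here can be sketched generically (the resonance formula below is a simplified toy, not the paper's finite-element model, and all parameter values are illustrative):

```python
import math

def f_thickness(params):
    # toy thickness-mode resonance of a disc: f ~ 1 / (2 t sqrt(rho * s33));
    # a hypothetical stand-in for the full electromechanical model
    return 1.0 / (2.0 * params["t"] * math.sqrt(params["rho"] * params["s33"]))

def sensitivity_index(param, base, step=0.05):
    # normalized index: |relative change in f| / |relative change in parameter|
    f0 = f_thickness(base)
    p = dict(base)
    p[param] *= 1.0 + step
    return abs(f_thickness(p) - f0) / f0 / step

base = {"t": 2e-3, "rho": 7800.0, "s33": 2.0e-11}
for name in base:
    print(name, "index:", round(sensitivity_index(name, base), 3))
```

Because the index is dimensionless, coefficients with very different magnitudes and units can be ranked on a common scale, which is how the ordering of the ten constants in the abstract is obtained.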

  15. Project Title: Radiochemical Analysis by High Sensitivity Dual-Optic Micro X-ray Fluorescence

    International Nuclear Information System (INIS)

    Havrilla, George J.; Gao, Ning

    2002-01-01

    A novel dual-optic micro X-ray fluorescence instrument will be developed for radiochemical analysis of high-level radioactive wastes at DOE sites such as the Savannah River Site and Hanford. The concept incorporates new X-ray optical elements, such as monolithic polycapillaries and doubly bent crystals, which focus X-rays. A polycapillary optic can be used to focus the X-rays emitted by the X-ray tube, increasing the X-ray flux on the sample by a factor of more than 1000. Polycapillaries will also be used to collect the X-rays from the excitation site and screen out the radiation background from the radioactive species in the specimen. This dual-optic approach significantly reduces the background and increases the analyte signal, thereby increasing the sensitivity of the analysis. A doubly bent crystal used as the focusing optic produces focused monochromatic X-ray excitation, which eliminates the bremsstrahlung background from the X-ray source. Coupling the doubly bent crystal for monochromatic excitation with a polycapillary for signal collection can effectively eliminate both the noise background and the radiation background from the specimen. The integration of these X-ray optics increases the signal-to-noise ratio and thereby the sensitivity of the analysis for low-level analytes. This work will address a key need for radiochemical analysis of high-level waste using a non-destructive, multi-element, rapid method suited to a radiation environment. There is significant potential that this instrumentation could be capable of on-line analysis for process waste stream characterization at DOE sites.

  16. Uncertainty and sensitivity analysis in performance assessment for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Helton, Jon C.; Hansen, Clifford W.; Sallaberry, Cédric J.

    2012-01-01

    Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, a detailed performance assessment (PA) for the YM repository was completed in 2008 and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository. The following aspects of the 2008 YM PA are described in this presentation: (i) conceptual structure and computational organization, (ii) uncertainty and sensitivity analysis techniques used, (iii) uncertainty and sensitivity analysis for physical processes, and (iv) uncertainty and sensitivity analysis for expected dose to the reasonably maximally exposed individual (RMEI) specified in the NRC's regulations for the YM repository. - Highlights: ► An overview of performance assessment for the proposed Yucca Mountain radioactive waste repository is presented. ► Conceptual structure and computational organization are described. ► Uncertainty and sensitivity analysis techniques are described. ► Uncertainty and sensitivity analysis results for physical processes are presented. ► Uncertainty and sensitivity analysis results for expected dose are presented.

  17. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis, Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems.

  18. Sensitivity analysis of recovery efficiency in high-temperature aquifer thermal energy storage with single well

    International Nuclear Information System (INIS)

    Jeon, Jun-Seo; Lee, Seung-Rae; Pasquinelli, Lisa; Fabricius, Ida Lykke

    2015-01-01

    A high-temperature aquifer thermal energy storage (HT-ATES) system usually shows higher performance than other borehole thermal energy storage systems. Although several technical problems, such as clogging and corrosion, limit the widespread use of HT-ATES systems, the approach is receiving more attention as these issues are gradually alleviated. In this study, a sensitivity analysis of recovery efficiency in two cases of an HT-ATES system with a single well is conducted to select key parameters. For a fractional factorial design used to choose input parameters with uniformity, optimal Latin hypercube sampling with an enhanced stochastic evolutionary algorithm is considered. The recovery efficiency is then obtained using a computer model developed in COMSOL Multiphysics. With the input and output variables, a surrogate modeling technique, namely the Gaussian-Kriging method with the Smoothly Clipped Absolute Deviation (SCAD) penalty, is utilized. Finally, the sensitivity analysis is performed based on variance decomposition. The most important input variables are selected, the interaction effects for each case are considered, and it is confirmed that the key parameters vary with the experimental domain of hydraulic and thermal properties as well as with the number of input variables. - Highlights: • Main and interaction effects on recovery efficiency in HT-ATES were investigated. • Reliability depended on the fractional factorial design and interaction effects. • The hydraulic permeability of the aquifer had an important impact on recovery efficiency. • Site-specific sensitivity analysis of HT-ATES is recommended.
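The Latin hypercube ingredient can be sketched in a few lines (plain random-permutation LHS, without the enhanced stochastic evolutionary optimization used in the study). Each dimension receives exactly one sample per equal-probability stratum.

```python
import random

def latin_hypercube(n, d, rng):
    # exactly one sample in each of n equal-probability strata per dimension,
    # with independently permuted stratum order across dimensions
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]

rng = random.Random(4)
sample = latin_hypercube(10, 3, rng)
for j in range(3):
    # stratification check: each dimension hits every decile exactly once
    print(sorted(int(row[j] * 10) for row in sample))
```

The stratification guarantees uniform marginal coverage with far fewer model runs than simple random sampling, which is why LHS is the usual starting point for building surrogates such as Kriging models.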

  19. WHAT IF (Sensitivity Analysis)

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Sensitivity analysis is such a well-known and deeply analyzed subject that anyone entering the field may feel unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of the possibility of generating the future and analyzing it before it unfolds, so that, when it happens, it brings less uncertainty.

  20. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values be examined. (author)
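The Kruskal-Wallis idea mentioned in the recommendations can be sketched on synthetic data (a hypothetical dose model with one influential and one unimportant parameter; not SYVAC itself, and the coefficients are invented). The H statistic computed over parameter tertiles is large only for the influential parameter.

```python
import random

def kruskal_wallis_h(groups):
    # rank all observations jointly (doses are continuous, so ties are
    # negligible) and compare mean ranks across the groups
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    ranks = [[] for _ in groups]
    for r, (_, gi) in enumerate(pooled, start=1):
        ranks[gi].append(r)
    n = len(pooled)
    return 12.0 / (n * (n + 1)) * sum(
        len(g) * (sum(g) / len(g) - (n + 1) / 2.0) ** 2 for g in ranks)

rng = random.Random(5)
doses, p1s, p2s = [], [], []
for _ in range(300):
    p1, p2 = rng.random(), rng.random()   # hypothetical input parameters
    doses.append(10.0 * p1 + 0.1 * p2 + rng.gauss(0.0, 0.5))
    p1s.append(p1)
    p2s.append(p2)

def h_for(param):
    # split the 300 runs into low/mid/high tertiles of the parameter value
    order = sorted(range(300), key=lambda i: param[i])
    groups = [[doses[i] for i in order[k * 100:(k + 1) * 100]] for k in range(3)]
    return kruskal_wallis_h(groups)

print("H(p1) =", round(h_for(p1s), 1), " H(p2) =", round(h_for(p2s), 1))
```

Being rank-based, the test makes no distributional assumption about dose, which suits the highly skewed outputs typical of repository assessment codes.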

  1. Global Sensitivity Analysis of High Speed Shaft Subsystem of a Wind Turbine Drive Train

    Directory of Open Access Journals (Sweden)

    Saeed Asadi

    2018-01-01

    Wind turbine dynamics are a complex and critical area of study for the wind industry. Quantifying the factors that affect wind turbine performance is valuable for improving both power performance and turbine health. In this paper, a global sensitivity analysis of a validated mathematical model of a high-speed-shaft drive train test rig is developed in order to evaluate the contribution of the system's input parameters to specified objective functions. The drive train in this study consists of a 3-phase induction motor, flexible shafts, shaft couplings, a bearing housing, and a disk with an eccentric mass. The governing equations were derived using the Lagrangian formalism and solved numerically by the Newmark method. Variance-based global sensitivity indices are introduced to evaluate the contribution of the input structural parameters to the objective functions. The results provide useful data for the design and optimization of a drive train setup and can also give a better understanding of wind turbine drive train dynamics with respect to different structural parameters, ultimately supporting the design of more efficient drive trains. Finally, the proposed global sensitivity analysis (GSA) methodology demonstrates the detectability of faults in different components.

  2. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to "as-built" designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory.

  3. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    International Nuclear Information System (INIS)

    Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C

    2015-01-01

    The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would both increase microcantilever bending during the biorecognition process and raise the optical sensitivity to 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than that of other similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could lead to misinterpretation of the readout signal. (paper)

  4. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs

  5. Development of the high-order decoupled direct method in three dimensions for particulate matter: enabling advanced sensitivity analysis in air quality models

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2012-03-01

    Full Text Available The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity analysis of ISORROPIA, the inorganic aerosol module of CMAQ. A case-specific approach has been applied, and the sensitivities of activity coefficients and water content are explicitly computed. Stand-alone tests are performed for ISORROPIA by comparing the sensitivities (first- and second-order) computed by HDDM with brute force (BF) approximations. A similar comparison has also been carried out for CMAQ sensitivities simulated using a week-long winter episode for a continental US domain. Second-order sensitivities of aerosol species (e.g., sulfate, nitrate, and ammonium) with respect to domain-wide SO2, NOx, and NH3 emissions show agreement with BF results, yet exhibit less noise in locations where BF results are demonstrably inaccurate. Second-order sensitivity analysis elucidates poorly understood nonlinear responses of secondary inorganic aerosols to their precursors and competing species. Adding second-order sensitivity terms to the Taylor series projection of the nitrate concentrations under a 50% reduction in domain-wide NOx or SO2 emission rates improves the prediction with statistical significance.
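
The Taylor-series projection mentioned above combines first- and second-order sensitivities to predict concentrations under an emission change without rerunning the model. A minimal sketch, using a hypothetical toy response rather than CMAQ/HDDM output:

```python
def taylor_projection(c0, s1, s2, d_eps):
    """Second-order Taylor projection of a concentration under a
    fractional emission change d_eps, given the baseline c0 and its
    first- and second-order sensitivities s1 and s2."""
    return c0 + s1 * d_eps + 0.5 * s2 * d_eps**2

# Hypothetical nonlinear response c(eps) = 3*eps + 2*eps**2, evaluated at eps = 1:
c0, s1, s2 = 5.0, 7.0, 4.0                  # c(1), c'(1), c''(1)
pred = taylor_projection(c0, s1, s2, -0.5)  # project a 50% emission reduction
# exact c(0.5) = 2.0; a quadratic response is reproduced exactly,
# whereas truncating after s1 would predict 1.5
```

This is why adding the second-order term improves nitrate predictions for large (50%) emission perturbations: the first-order projection alone misses the curvature of the response.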

  6. Sensitivity analysis of recovery efficiency in high-temperature aquifer thermal energy storage with single well

    DEFF Research Database (Denmark)

    Jeon, Jun-Seo; Lee, Seung-Rae; Pasquinelli, Lisa

    2015-01-01

    ., it is getting more attention as these issues are gradually alleviated. In this study, a sensitivity analysis of recovery efficiency in two cases of HT-ATES system with a single well is conducted to select key parameters. For a fractional factorial design used to choose input parameters with uniformity...... with the Smoothly Clipped Absolute Deviation Penalty, is utilized. Finally, the sensitivity analysis is performed based on the variance decomposition. According to the result of the sensitivity analysis, the most important input variables are selected and confirmed to consider the interaction effects for each case...

  7. An UPLC-MS/MS method for highly sensitive high-throughput analysis of phytohormones in plant tissues

    Directory of Open Access Journals (Sweden)

    Balcke Gerd Ulrich

    2012-11-01

    Full Text Available Abstract Background Phytohormones are the key metabolites participating in the regulation of multiple functions of the plant organism. Among them, jasmonates, as well as abscisic and salicylic acids, are responsible for triggering and modulating plant reactions targeted against pathogens and herbivores, as well as resistance to abiotic stress (drought, UV-irradiation and mechanical wounding). These factors induce dramatic changes in phytohormone biosynthesis and transport leading to rapid local and systemic stress responses. Understanding of the underlying mechanisms is of principal interest for scientists working in various areas of plant biology. However, highly sensitive, precise and high-throughput methods for quantification of these phytohormones in small samples of plant tissues are still missing. Results Here we present an LC-MS/MS method for fast and highly sensitive determination of jasmonates, abscisic and salicylic acids. A single-step sample preparation procedure based on mixed-mode solid phase extraction was efficiently combined with essential improvements in mobile phase composition, yielding higher efficiency of chromatographic separation and MS-sensitivity. This strategy resulted in a dramatic increase in overall sensitivity, allowing successful determination of phytohormones in small (less than 50 mg of fresh weight) tissue samples. The method was completely validated in terms of analyte recovery, sensitivity, linearity and precision. Additionally, it was cross-validated with a well-established GC-MS-based procedure and its applicability to a variety of plant species and organs was verified. Conclusion The method can be applied for the analysis of target phytohormones in small tissue samples obtained from any plant species and/or plant part, relying on any commercially available (even less sensitive) tandem mass spectrometry instrumentation.

  8. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
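
The key point above is that correcting for two highly correlated tests costs far less power than a Bonferroni correction. A small Monte Carlo sketch (illustrative only, not the sensitivity2x2xk implementation) makes this concrete by finding the critical value for the maximum of two correlated standardized statistics:

```python
import numpy as np

# Critical value c with P(max(Z1, Z2) > c) = 0.05 for bivariate normal
# statistics with correlation rho = 0.9 (hypothetical value). Compare to
# the single-test z_0.95 = 1.645 and the Bonferroni z_0.975 = 1.960:
# the joint critical value sits close to the single-test one.
rng = np.random.default_rng(1)
n = 200_000
rho = 0.9
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
c_joint = np.quantile(np.maximum(z1, z2), 0.95)
```

Because the focused and combined Mantel-Haenszel statistics share most of their data, their correlation is high, so the multiplicity price paid by the adaptive test is small.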

  9. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    Science.gov (United States)

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
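
The reported performance figures follow directly from the stated counts (56 of 75 SM cases flagged, 17 of 124 non-SM cases flagged). A quick check of the standard screening metrics:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard screening metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged SM / all SM
        "specificity": tn / (tn + fp),   # unflagged non-SM / all non-SM
        "ppv": tp / (tp + fp),           # flagged SM / all flagged
        "npv": tn / (tn + fn),           # unflagged non-SM / all unflagged
    }

# Counts reported above: 56/75 SM cases, 17/124 non-SM cases
m = diagnostic_metrics(tp=56, fn=19, fp=17, tn=107)
```

Rounded, these reproduce the abstract's 75% sensitivity, 86% specificity, and 85% negative predictive value (the positive predictive value computes to 76.7%).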

  10. Application of graphene for preconcentration and highly sensitive stripping voltammetric analysis of organophosphate pesticide

    Energy Technology Data Exchange (ETDEWEB)

    Wu Shuo, E-mail: wushuo@dlut.edu.cn [School of Chemistry, Dalian University of Technology, Dalian 116023 (China); Lan Xiaoqin; Cui Lijun; Zhang Lihui; Tao Shengyang; Wang Hainan; Han Mei; Liu Zhiguang; Meng Changgong [School of Chemistry, Dalian University of Technology, Dalian 116023 (China)

    2011-08-12

    Highlights: → An electrochemical sensor is fabricated based on β-CD dispersed graphene. → The sensor can selectively detect organophosphate pesticides with high sensitivity. → The β-CD dispersed graphene has a large adsorption capacity for MP and superior conductivity. → The β-CD dispersed graphene is superior to most known porous sorbents. - Abstract: Electrochemically reduced β-cyclodextrin dispersed graphene (β-CD-graphene) was developed as a sorbent for the preconcentration and electrochemical sensing of methyl parathion (MP), a representative nitroaromatic organophosphate pesticide with good redox activity. Benefiting from the ultra-large surface area, the large delocalized π-electron system and the superior conductivity of β-CD-graphene, a large amount of MP could be extracted on the β-CD-graphene modified electrode via strong π-π interaction, exhibiting fast accumulation and a fast electron transfer rate. Combined with differential pulse voltammetric analysis, the sensor shows ultra-high sensitivity, good selectivity and fast response. The limit of detection of 0.05 ppb is more than 10 times lower than those obtained from other sorbent-based sensors. The method may open up a new possibility for the widespread use of electrochemical sensors for monitoring ultra-trace OPs.

  11. Sensitivity Analysis of the Influence of Structural Parameters on Dynamic Behaviour of Highly Redundant Cable-Stayed Bridges

    Directory of Open Access Journals (Sweden)

    B. Asgari

    2013-01-01

    Full Text Available The model tuning through sensitivity analysis is a prominent procedure to assess the structural behavior and dynamic characteristics of cable-stayed bridges. Most of the previous sensitivity-based model tuning methods are automatic iterative processes; however, the results of recent studies show that the most reasonable results are achievable by applying the manual methods to update the analytical model of cable-stayed bridges. This paper presents a model updating algorithm for highly redundant cable-stayed bridges that can be used as an iterative manual procedure. The updating parameters are selected through the sensitivity analysis which helps to better understand the structural behavior of the bridge. The finite element model of Tatara Bridge is considered for the numerical studies. The results of the simulations indicate the efficiency and applicability of the presented manual tuning method for updating the finite element model of cable-stayed bridges. The new aspects regarding effective material and structural parameters and model tuning procedure presented in this paper will be useful for analyzing and model updating of cable-stayed bridges.

  12. A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils

    Directory of Open Access Journals (Sweden)

    Xiaodong Xie

    2014-05-01

    Full Text Available We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. Our detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity and/or a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications.

  13. Novel charge sensitive preamplifier without high-value feedback resistor

    International Nuclear Information System (INIS)

    Xi Deming

    1992-01-01

    A novel charge sensitive preamplifier is introduced. The method of removing the high value feedback resistor, the circuit design and analysis are described. A practical circuit and its measured performances are provided

  14. A highly stable and sensitive chemically modified screen-printed electrode for sulfide analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, D.-M. [Department of Chemistry, National Chung Hsing University, 250 Kuo-Kuang Road, Taichung 40217, Taiwan (China); Kumar, Annamalai Senthil [Department of Chemistry, National Chung Hsing University, 250 Kuo-Kuang Road, Taichung 40217, Taiwan (China); Zen, J.-M. [Department of Chemistry, National Chung Hsing University, 250 Kuo-Kuang Road, Taichung 40217, Taiwan (China)]. E-mail: jmzen@dragon.nchu.edu.tw

    2006-01-18

    We report here a highly stable and sensitive chemically modified screen-printed carbon electrode (CMSPE) for sulfide analysis. The CMSPE was prepared by first ion-exchanging ferricyanide into a Tosflex anion-exchange polymer and then sealing with a tetraethyl orthosilicate sol-gel layer. The sol-gel overlayer coating was crucial for stabilizing the electron mediator (i.e., Fe(CN)₆³⁻) against leaching. The strong interaction between the oxy-hydroxy functional groups of the sol-gel and the hydrophilic sites of Tosflex makes the composite highly rigid, trapping the ferricyanide mediator. An obvious electrocatalytic sulfide oxidation current signal at ≈0.20 V versus Ag/AgCl in pH 7 phosphate buffer solution was observed at the CMSPE. A linear calibration plot over a wide range of 0.1 μM to 1 mM with a slope of 5.6 nA/μM was obtained by flow injection analysis. The detection limit (S/N = 3) was 8.9 nM (i.e., 25.6 ppt). The practical utility of the system was demonstrated by determining sulfide trapped from cigarette smoke and the sulfide content of hot spring water.

  15. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  16. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account in the LHS simulation method.
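
The LHS (Latin Hypercube Sampling) simulation method mentioned above stratifies each input dimension so that every equal-probability interval is sampled exactly once. A minimal generic sketch (not the bridge model itself):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin Hypercube Sample on [0,1]^d: each dimension is split into
    n_samples equal-probability strata, every stratum is hit exactly once,
    and the stratum order is shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])   # in-place shuffle of this column
    return u

X = latin_hypercube(100, 3)
```

The samples on [0,1] would then be mapped through the inverse CDFs of the model- and resistance-uncertainty distributions; LHS typically needs far fewer runs than plain Monte Carlo for the same variance.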

  17. Highly sensitive detection using microring resonator and nanopores

    Science.gov (United States)

    Bougot-Robin, K.; Hoste, J. W.; Le Thomas, N.; Bienstman, P.; Edel, J. B.

    2016-04-01

    One of the most significant challenges facing physical and biological scientists is the accurate detection and identification of single molecules in free-solution environments. The ability to perform such sensitive and selective measurements opens new avenues for a large number of applications in biological, medical and chemical analysis, where small sample volumes and low analyte concentrations are the norm. Access to information at the single- or few-molecule scale is rendered possible by a fine combination of recent advances in technologies. We propose a novel detection method that combines highly sensitive label-free resonant sensing, obtained with high-Q microcavities, and position control in nanoscale pores (nanopores). In addition to being label-free and highly sensitive, our technique is immobilization-free and does not rely on surface biochemistry to bind probes on a chip. This is a significant advantage, both in terms of biological uncertainties and fewer biological preparation steps. Through the combination of high-Q photonic structures with translocation through a nanopore, either at the end of a pipette or through a solid-state membrane, we believe significant advances can be achieved in the field of biosensing. Silicon microrings are highly advantageous in terms of sensitivity, multiplexing, and microfabrication, and are chosen for this study. In terms of nanopores, we consider both a nanopore at the end of a nanopipette, with the pore approached from the pipette under nanoprecise mechanical control, and, alternatively, solid-state nanopores fabricated through a membrane supporting the ring. Both configurations are discussed in this paper, in terms of implementation and sensitivity.

  18. High sensitivity neutron activation analysis of environmental and biological standard reference materials

    International Nuclear Information System (INIS)

    Greenberg, R.R.; Fleming, R.F.; Zeisler, R.

    1984-01-01

    Neutron activation analysis is a sensitive method with unique capabilities for the analysis of environmental and biological samples. Since it is based upon the nuclear properties of the elements, it does not suffer from many of the chemical effects that plague other methods of analysis. Analyses can be performed either with no chemical treatment of the sample (instrumentally), or with separations of the elements of interest after neutron irradiation (radiochemically). Typical examples of both types of analysis are discussed, and data obtained for a number of environmental and biological SRMs are presented. (author)

  19. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
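
Local sensitivity analysis, as described above, studies the impact of small perturbations around a nominal parameter set. A generic normalized (logarithmic) sensitivity coefficient via central finite differences might look like this (an illustrative sketch with a hypothetical toy model, not any specific systems-biology tool):

```python
def local_sensitivity(f, p, i, rel_step=1e-6):
    """Normalized local sensitivity S_i = (p_i / f(p)) * df/dp_i,
    i.e. d(ln f)/d(ln p_i), estimated by a central finite difference
    around the nominal parameter vector p."""
    h = p[i] * rel_step
    up, dn = list(p), list(p)
    up[i] += h
    dn[i] -= h
    return p[i] * (f(up) - f(dn)) / (2 * h) / f(p)

# Toy model f = k1 * x**2 / k2: exact normalized sensitivities are 1, 2, -1
f = lambda p: p[0] * p[1] ** 2 / p[2]
S = [local_sensitivity(f, [2.0, 3.0, 4.0], i) for i in range(3)]
```

Normalizing by p_i/f makes coefficients comparable across parameters with different units, which is the usual first step before ranking model inputs; global approaches instead vary all parameters over wide ranges simultaneously.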

  20. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results from a critical experiment are only known as well as its geometric and material properties are known. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)

  1. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
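
One common way to rank parameter influence from multi-parameter Monte Carlo output is via standardized regression coefficients, a generic technique in this family (shown here as an illustration; the MCROC package's own methods are not described in detail above):

```python
import numpy as np

def standardized_regression_coeffs(X, y):
    """Standardized regression coefficients (SRCs): regress the
    standardized output on the standardized inputs. For near-linear
    models, SRC_i**2 approximates input i's share of output variance."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return coef

# Toy Monte Carlo sample: y depends strongly on x1, weakly on x3
rng = np.random.default_rng(3)
X = rng.random((5000, 3))
y = 5.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]
src = standardized_regression_coeffs(X, y)
```

The resulting coefficients immediately expose which inputs dominate the output consequence, which is exactly the information needed to allocate effort when optimizing the engineering properties of the system.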

  2. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  3. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δuᵢ/δαⱼ, where uᵢ is the ith species concentration and αⱼ is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research.

  4. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  5. High-resolution, high-sensitivity NMR of nano-litre anisotropic samples by coil spinning

    Energy Technology Data Exchange (ETDEWEB)

    Sakellariou, D [CEA Saclay, DSM, DRECAM, SCM, Lab Struct and Dynam Resonance Magnet, CNRS URA 331, F-91191 Gif Sur Yvette, (France); Le Goff, G; Jacquinot, J F [CEA Saclay, DSM, DRECAM, SPEC: Serv Phys Etat Condense, CNRS URA 2464, F-91191 Gif Sur Yvette, (France)

    2007-07-01

    Nuclear magnetic resonance (NMR) can probe the local structure and dynamic properties of liquids and solids, making it one of the most powerful and versatile analytical methods available today. However, its intrinsically low sensitivity precludes NMR analysis of very small samples - as frequently used when studying isotopically labelled biological molecules or advanced materials, or as preferred when conducting high-throughput screening of biological samples or 'lab-on-a-chip' studies. The sensitivity of NMR has been improved by using static micro-coils, alternative detection schemes and pre-polarization approaches. But these strategies cannot be easily used in NMR experiments involving the fast sample spinning essential for obtaining well-resolved spectra from non-liquid samples. Here we demonstrate that inductive coupling allows wireless transmission of radio-frequency pulses and the reception of NMR signals under fast spinning of both detector coil and sample. This enables NMR measurements characterized by an optimal filling factor, very high radio-frequency field amplitudes and enhanced sensitivity that increases with decreasing sample volume. Signals obtained for nano-litre-sized samples of organic powders and biological tissue increase by almost one order of magnitude (or, equivalently, are acquired two orders of magnitude faster), compared to standard NMR measurements. Our approach also offers optimal sensitivity when studying samples that need to be confined inside multiple safety barriers, such as radioactive materials. In principle, the co-rotation of a micrometer-sized detector coil with the sample and the use of inductive coupling (techniques that are at the heart of our method) should enable highly sensitive NMR measurements on any mass-limited sample that requires fast mechanical rotation to obtain well-resolved spectra. The method is easy to implement on a commercial NMR set-up and exhibits improved performance with miniaturization, and we

  6. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models, which could be useful for making rapid, first-order calculations of system behavior, is presented.

  7. Estimate of the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model: a sensitivity analysis

    International Nuclear Information System (INIS)

    Guerrieri, A.

    2009-01-01

    In this report, the largest Lyapunov characteristic exponent of a high-dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the spatial resolution and the values of some parameters employed by the model. Both chaotic and non-chaotic regimes of circulation have been found.
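
The largest Lyapunov exponent is typically estimated by tracking the divergence of two nearby trajectories with periodic renormalization (Benettin's method). An illustrative low-dimensional stand-in for the circulation model:

```python
import math

def largest_lyapunov(f, x0, n_steps=100_000, d0=1e-9):
    """Benettin-style estimate of the largest Lyapunov exponent for a
    1D map: evolve a reference orbit and a companion offset by d0,
    accumulate the log-growth of their separation each step, and
    renormalize the companion back to distance d0."""
    x, y = x0, x0 + d0
    log_growth = 0.0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:                  # rare exact collapse: re-seed offset
            d, y = d0, x + d0
        log_growth += math.log(d / d0)
        y = x + d0 * (y - x) / d      # renormalize separation to d0
    return log_growth / n_steps

# Chaotic logistic map x -> 4x(1-x): the exact largest exponent is ln 2
lam = largest_lyapunov(lambda x: 4.0 * x * (1.0 - x), x0=0.2)
```

A positive estimate indicates a chaotic regime, zero or negative a non-chaotic one; for the high-dimensional model the same idea is applied to the full state vector, with the separation measured in a suitable norm.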

  8. Analysis of Cyberbullying Sensitivity Levels of High School Students and Their Perceived Social Support Levels

    Science.gov (United States)

    Akturk, Ahmet Oguz

    2015-01-01

    Purpose: The purpose of this paper is to determine the cyberbullying sensitivity levels of high school students and their perceived social supports levels, and analyze the variables that predict cyberbullying sensitivity. In addition, whether cyberbullying sensitivity levels and social support levels differed according to gender was also…

  9. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  10. A high-sensitivity neutron counter and waste-drum counting with the high-sensitivity neutron instrument

    International Nuclear Information System (INIS)

    Hankins, D.E.; Thorngate, J.H.

    1993-04-01

    At Lawrence Livermore National Laboratory (LLNL), a highly sensitive neutron counter was developed that can detect and accurately measure the neutrons from small quantities of plutonium or from other low-level neutron sources. This neutron counter was originally designed to survey waste containers leaving the Plutonium Facility. However, it has proven to be useful in other research applications requiring a high-sensitivity neutron instrument.

  11. High-Sensitivity GaN Microchemical Sensors

    Science.gov (United States)

    Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas

    2009-01-01

    Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operation parameters should be used for high sensitivity detection.

  12. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

    A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk analysis considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
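
    The reuse of existing Monte Carlo failure samples for distribution-parameter sensitivities, as described above, can be sketched with the score-function (likelihood-ratio) trick. Everything below (the normal input, the threshold limit state, the parameter names) is an invented toy, not the paper's disk model.

```python
import math
import numpy as np

# Toy sketch: sensitivity of a probability of failure P_f to the mean mu of
# a normally distributed random variable, estimated from the SAME samples
# used for P_f itself (score-function method), so no extra model runs occur.

rng = np.random.default_rng(0)
mu, sigma, threshold = 1.0, 0.2, 1.5
n = 200_000

x = rng.normal(mu, sigma, n)
fail = x > threshold                    # failure indicator for the toy limit state
p_f = fail.mean()

score_mu = (x - mu) / sigma**2          # d/dmu of log f(x; mu, sigma)
dpf_dmu = (fail * score_mu).mean()      # dP_f/dmu from the existing samples

# Analytic check for this toy case: dP_f/dmu = phi((t - mu)/sigma) / sigma
z = (threshold - mu) / sigma
exact = math.exp(-0.5 * z * z) / (math.sqrt(2.0 * math.pi) * sigma)
```

    Because the same failure samples serve both the probability estimate and its sensitivities, the extra computational cost is negligible, which is the point the record makes.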

  13. How the definition of acceptable antigens and epitope analysis can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.

    Science.gov (United States)

    Heidt, Sebastiaan; Haasnoot, Geert W; Claas, Frans H J

    2018-05-24

    Highly sensitized patients awaiting a renal transplant have a low chance of receiving an organ offer. Defining acceptable antigens and using this information for allocation purposes can vastly enhance transplantation of this subgroup of patients, which is the essence of the Eurotransplant Acceptable Mismatch program. Acceptable antigens can be determined by extensive laboratory testing, as well as on the basis of human leukocyte antigen (HLA) epitope analyses. Within the Acceptable Mismatch program, there is no effect of HLA mismatches on long-term graft survival. Furthermore, patients transplanted through the Acceptable Mismatch program have similar long-term graft survival to nonsensitized patients transplanted through regular allocation. Although HLA epitope analysis is already being used for defining acceptable HLA antigens for highly sensitized patients in the Acceptable Mismatch program, increasing knowledge on HLA antibody-epitope interactions will pave the way toward the definition of acceptable epitopes for highly sensitized patients in the future. Allocation based on acceptable antigens can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.

  14. High-Sensitivity Spectrophotometry.

    Science.gov (United States)

    Harris, T. D.

    1982-01-01

    Selected high-sensitivity spectrophotometric methods are examined, and comparisons are made of their relative strengths and weaknesses and the circumstances for which each can best be applied. Methods include long path cells, noise reduction, laser intracavity absorption, thermocouple calorimetry, photoacoustic methods, and thermo-optical methods.…

  15. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

    The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for reductions in system or generic failure probabilities by factors as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates.

  16. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations in its inputs. Variance-based methods quantify the part of the variance of the model response that is due to each input variable and to each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  17. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

    Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult… …in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring…

  18. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    Science.gov (United States)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite-element-based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high-performance computation. Small-scale examples to verify the accuracy of the proposed program and a medium-scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
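
    Forward-mode automatic differentiation of the kind a tool like ADIFOR applies to analysis source code can be illustrated with a toy dual-number class; the cantilever deflection formula below is a stand-in example, not the paper's finite element code.

```python
# Minimal forward-mode AD via dual numbers: each value carries its derivative,
# and arithmetic propagates both exactly (no finite-difference truncation).

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def tip_deflection(P, L, EI):
    # cantilever tip deflection: delta = P * L^3 / (3 * EI)
    return P * L * L * L * (1.0 / (3.0 * EI))

L_span = Dual(2.0, 1.0)                 # seed dL/dL = 1 to obtain d(delta)/dL
d = tip_deflection(1000.0, L_span, 2.0e5)
# d.val is the deflection; d.der is the exact shape sensitivity d(delta)/dL
```

    Seeding a different input with derivative 1 yields the sensitivity with respect to that input instead, which is how one derivative pass per design variable is organized.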

  19. High sensitivity optical molecular imaging system

    Science.gov (United States)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics and other biological studies. In preclinical and clinical application, image depth, resolution and sensitivity are the key factors for researchers using OMI. In this paper, we report a high-sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm, with high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, cooperating with a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light path system and highly efficient image techniques, our OMI system can simultaneously collect the light-emitted signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  20. Fiscal 2000 pioneering research on the research on high-sensitivity passive measurement/analysis technologies; 2000 nendo kokando passive keisoku bunseki gijutsu no chosa sendo kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The above-named research was brought over from the preceding fiscal year. Needs for passive measurement were investigated, and interest in passive measurement was found in the following areas: the analysis of organic matter on semiconductor wafers, analysis of dangerous substances in wastes, measurement of substances in the living space causing allergy to chemical substances, measurement of constituents of gases emitted by organisms, for example through expiration, measurement for automatic sorting of plastic wastes, 2-dimensional spectrometry for medical treatment of organisms, and so forth. In the survey of seeds, various novel technologies were investigated in the fields of optical systems, sensors, and signal processing. The outcomes of the survey indicated that high-sensitivity measurement and analysis of spectral images, and measurement and analysis of trace quantities in the fields of medical treatment, environmental matters, and semiconductors, would be feasible by the use of newly developed technologies involving the interference-array-type 2-dimensional modulation/demodulation device, a 2-dimensional high-sensitivity infrared sensor, high-sensitivity systematization technology, mixed-signal separation technology capable of suppressing noise and background light, and technology for increasing processing speeds. (NEDO)

  1. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
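
    The breakdown of conventional methods that motivates LSS is easy to reproduce on a toy chaotic system. The sketch below (a logistic map, not the airfoil flow and not LSS itself) shows naive central differences of a long-time average failing to converge as the step size shrinks, because the two nearby trajectories decorrelate and the difference quotient is swamped by sampling noise.

```python
# Illustration of why conventional sensitivity analysis breaks down for
# long-time averages of chaotic systems (toy logistic map, chaotic at r = 3.9).

def long_time_average(r, n=200_000, x0=0.3, burn=1_000):
    x, total = x0, 0.0
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            total += x
    return total / n

def fd_derivative(r, h):
    # naive central difference of the long-time average
    return (long_time_average(r + h) - long_time_average(r - h)) / (2.0 * h)

d_coarse = fd_derivative(3.9, 5e-2)   # plausible, modest magnitude
d_fine = fd_derivative(3.9, 1e-8)     # dominated by chaotic sampling noise
# Shrinking h does not converge to a derivative: the estimate blows up
# roughly like O(1 / (h * sqrt(n))) instead.
```

    LSS sidesteps this by solving for a non-diverging shadowing trajectory rather than differencing diverging ones.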

  2. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    Toward the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments of JUPITER and so on are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve prediction accuracy of burnup characteristics using the data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from the actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of a large burden on users due to the complexity of the theory of burnup sensitivity and limitations of the system. It is also desired to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, for some reasons: the computational sequence may be changed for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, it is needed to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or reconstructed as the occasion demands. For this…

  3. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  4. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
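
    The variance-based global sensitivity analysis described above can be sketched with a pick-and-freeze (Saltelli-style) estimator of first-order Sobol indices. The Ishigami test function below is a standard GSA benchmark standing in for the cellular Potts model, which is not reproduced here.

```python
import numpy as np

# First-order Sobol indices via the Saltelli pick-and-freeze estimator,
# applied to the Ishigami benchmark (analytic indices: ~0.314, 0.442, 0.0).

def model(p):
    x1, x2, x3 = p[:, 0], p[:, 1], p[:, 2]
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(1)
n, k = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, k))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

S1 = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # resample only input i
    S1.append(np.mean(yB * (model(ABi) - yA)) / var)
# S1[i] estimates the fraction of output variance explained by input i alone.
```

    Interactions show up as the gap between the sum of the first-order indices and 1, which is one way the relative impact of single parameters versus interactions mentioned in the record can be quantified.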

  5. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  6. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.

  7. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  8. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments of JUPITER and so on are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve prediction accuracy of burnup characteristics using the data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from the actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of a large burden on users due to the complexity of the theory of burnup sensitivity and limitations of the system. It is also desired to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reasons: the computational sequence may be changed for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, it is needed to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or reconstructed as the occasion demands. For…

  9. High-speed high-sensitivity infrared spectroscopy using mid-infrared swept lasers (Conference Presentation)

    Science.gov (United States)

    Childs, David T. D.; Groom, Kristian M.; Hogg, Richard A.; Revin, Dmitry G.; Cockburn, John W.; Rehman, Ihtesham U.; Matcher, Stephen J.

    2016-03-01

    Infrared spectroscopy is a highly attractive read-out technology for compositional analysis of biomedical specimens because of its unique combination of high molecular sensitivity without the need for exogenous labels. Traditional techniques such as FTIR and Raman have suffered from comparatively low speed and sensitivity; however, recent innovations are challenging this situation. Direct mid-IR spectroscopy is being sped up by innovations such as MEMS-based FTIR instruments with very high mirror speeds and supercontinuum sources producing very high sample irradiation levels. Here we explore another possible method: external cavity quantum cascade lasers (EC-QCLs) with high cavity tuning speeds (mid-IR swept lasers). Swept lasers have been heavily developed in the near-infrared, where they are used for non-destructive low-coherence imaging (OCT). We adapt these concepts in two ways. Firstly, by combining mid-IR quantum cascade gain chips with external cavity designs adapted from OCT, we achieve spectral acquisition rates approaching 1 kHz and demonstrate the potential to reach 100 kHz. Secondly, we show that mid-IR swept lasers share a fundamental sensitivity advantage with near-IR OCT swept lasers. This makes them potentially able to achieve the same spectral SNR as an FTIR instrument in a time N times shorter (N being the number of spectral points) under otherwise matched conditions. This effect is demonstrated using measurements of a PDMS sample. The combination of potentially very high spectral acquisition rates, a fundamental SNR advantage and the use of low-cost detector systems could make mid-IR swept lasers a powerful technology for high-throughput biomedical spectroscopy.

  10. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    Science.gov (United States)

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop…
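
    The CCGG target sites that MSAP and MSAP-Seq interrogate (the recognition sequence of the HpaII/MspI isoschizomer pair, whose differing methylation sensitivity makes the comparison informative) are easy to locate programmatically; the sequence below is invented for illustration.

```python
# Toy sketch: locate the CCGG sites interrogated by MSAP/MSAP-Seq.
# The example sequence is made up; real input would be a genome or read set.

def ccgg_sites(seq):
    """Return 0-based start positions of every CCGG occurrence."""
    seq = seq.upper()
    return [i for i in range(len(seq) - 3) if seq[i:i + 4] == "CCGG"]

example = "ATGCCGGTACCGGCCGGTT"   # invented sequence
positions = ccgg_sites(example)   # -> [3, 9, 13]
```

    Comparing which of these sites are cut in the HpaII and MspI digests is what lets the method classify each site's methylation state.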

  11. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    Directory of Open Access Journals (Sweden)

    Karolina Chwialkowska

    2017-11-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation…

  12. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis relative to the parameters and modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modelling assumptions, which generated a comprehensive matrix of the influence of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  13. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  14. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
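    The sample entropy statistic at the centre of this tuning problem is simple to compute directly. Below is a minimal sketch of SampEn(m, r) for a 1-D series (an illustration, not the DASim implementation); following common practice, the tolerance is taken as the fraction r of the series' standard deviation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance within r times
    the series' standard deviation) also match for m + 1 points.
    Lower values indicate a more regular series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def match_count(mm):
        # all overlapping templates of length mm
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (i < j avoids self-matches)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b = match_count(m)        # matching template pairs of length m
    a = match_count(m + 1)    # pairs that still match at length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 400))   # periodic, highly regular
irregular = rng.standard_normal(400)               # white noise, irregular
```

    A regular schedule (the sine wave) yields a much lower SampEn than an irregular one (white noise), which is exactly the axis along which the authors tune activity regularity.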

  15. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.

  16. Highly Sensitive Optical Receivers

    CERN Document Server

    Schneider, Kerstin

    2006-01-01

    Highly Sensitive Optical Receivers primarily treats the circuit design of optical receivers with external photodiodes. Continuous-mode and burst-mode receivers are compared. The monograph first summarizes the basics of III/V photodetectors, transistor and noise models, bit-error rate, sensitivity and analog circuit design, thus enabling readers to understand the circuits described in the main part of the book. In order to cover the topic comprehensively, detailed descriptions of receivers for optical data communication in general and, in particular, optical burst-mode receivers in deep-sub-µm CMOS are presented. Numerous detailed and elaborate illustrations facilitate better understanding.

  17. High resolution, high sensitivity imaging and analysis of minerals and inclusions (fluid and melt) using the new CSIRO-GEMOC nuclear microprobe

    International Nuclear Information System (INIS)

    Ryan, C.G.; McInnes, B.M.; Van Achterbergh, E.; Williams, P.J.; Dong, G.; Zaw, K.

    1999-01-01

    Full text: The new CSIRO-GEMOC Nuclear Microprobe (NMP) was designed specifically for minerals analysis and imaging, and to achieve ppm to sub-ppm sensitivity at a spatial resolution of 1-2 μm using X-rays and γ-rays induced by MeV-energy ion beams. The key feature of the design is a unique magnetic quadrupole quintuplet ion focussing system that combines high current with high spatial resolution (Ryan et al., 1999). These design goals have been achieved or exceeded. On the first day of operation, a spot-size of 1.3 μm was obtained at a beam current of 0.5 nA, suitable for fluid inclusion analysis and imaging. The spot-size grows to just 1.8 μm at 10 nA (3 MeV protons), ideal for mineralogical samples, with detection limits down to 0.2 ppm achieved in quantitative, high resolution, trace element images. Applications of the NMP include: research into ore deposit processes through trace element geochemistry, mineralogy and fluid inclusion analysis of ancient deposits and active sea-floor environments, ore characterization, and fundamental studies of mantle processes and extraterrestrial material. Quantitative true elemental imaging: Dynamic Analysis is a method for projecting quantitative major and trace element images from proton-induced X-ray emission (PIXE) data obtained using the NMP (Ryan et al., 1995). The method un-mixes full elemental spectral signatures to produce quantitative images that can be directly interrogated for the concentrations of all elements in selected areas or line projections, etc. Fluid inclusion analysis and imaging: The analysis of fluids trapped as fluid inclusions in minerals holds the key to understanding ore metal pathways and ore formation processes. PIXE analysis using the NMP provides a direct non-destructive method to determine the composition of these trapped fluids with detection limits down to 20 ppm. However, some PIXE results have been controversial, such as the strong partitioning of Cu into the vapour phase (e

  18. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance
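    The variance-based decomposition used here attributes to each parameter the share Var(E[Y|X_i])/Var(Y) of the output variance. A minimal sketch of that estimator, using simple quantile binning of each parameter axis instead of the paper's generalized-linear-model fit, and a made-up additive toy response in place of the chemical transport model:

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y): slice the samples into
    quantile bins of the parameter, take the bin means of the output,
    and compare their variance with the total output variance."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    which = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    sizes = np.array([(which == b).sum() for b in range(bins)])
    means = np.array([y[which == b].mean() for b in range(bins)])
    return np.average((means - y.mean()) ** 2, weights=sizes) / y.var()

rng = np.random.default_rng(7)
n = 50_000
x1, x2, x3 = rng.random(n), rng.random(n), rng.random(n)
# toy response: x1 dominates the variance, x3 is nearly inert
y = 5.0 * x1 + 1.0 * x2 + 0.1 * x3
indices = [first_order_index(x, y) for x in (x1, x2, x3)]
```

    For this toy model the analytic shares are 25/26.01 ≈ 0.96, 1/26.01 ≈ 0.04 and ≈ 0.0004, mirroring the kind of ranking (one dominant switch-like parameter, several minor ones) reported in the abstract.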

  19. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
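    The forward sensitivity idea can be illustrated on a scalar flow equation: differentiating du/dt = f(u, k) with respect to a parameter k yields an auxiliary ODE ds/dt = (∂f/∂u)s + ∂f/∂k for the sensitivity s = ∂u/∂k, which is integrated alongside the solution itself. A hedged sketch on the toy decay problem du/dt = -k·u (not the paper's benchmark problem):

```python
import math

def forward_sensitivity(k=0.5, u0=2.0, t_end=4.0, dt=1e-3):
    """Integrate du/dt = -k*u together with its forward sensitivity
    s = du/dk.  Differentiating the ODE with respect to k gives
    ds/dt = -u - k*s with s(0) = 0, solved alongside u."""
    u, s, t = u0, 0.0, 0.0
    while t < t_end - 1e-12:
        du = -k * u          # original equation
        ds = -u - k * s      # sensitivity equation
        u += dt * du         # explicit Euler on the augmented system
        s += dt * ds
        t += dt
    return u, s

u, s = forward_sensitivity()
# analytic check: u = u0*exp(-k*t), s = du/dk = -u0*t*exp(-k*t)
u_exact = 2.0 * math.exp(-2.0)
s_exact = -2.0 * 4.0 * math.exp(-2.0)
```

    The same augmentation applied to the time step (treating dt as a parameter) is what lets the authors quantify accumulated temporal discretization error alongside physical parameter sensitivities.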

  20. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    Science.gov (United States)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

    Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions were included. Lesion measurements were divided into training cohort (n = 518) and testing cohort (n = 127) according to the measurement time. Result: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.
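    The windowing step described in the Methods can be sketched as follows: adjacent wavenumber channels are merged into one window as long as their correlation across the measured spectra stays above a cutoff. This is a simplified stand-in for the authors' variably-sized window construction, run here on synthetic two-factor "spectra":

```python
import numpy as np

def correlation_windows(spectra, threshold=0.9):
    """Group adjacent wavenumber channels into variably-sized windows:
    extend the current window while neighbouring channels are strongly
    correlated across spectra, start a new window otherwise.

    spectra: (n_samples, n_wavenumbers) intensity matrix.
    Returns a list of (start, stop) index pairs, stop exclusive."""
    n = spectra.shape[1]
    # correlation between each channel and its right-hand neighbour
    r = np.array([np.corrcoef(spectra[:, j], spectra[:, j + 1])[0, 1]
                  for j in range(n - 1)])
    windows, start = [], 0
    for j in range(n - 1):
        if r[j] < threshold:
            windows.append((start, j + 1))
            start = j + 1
    windows.append((start, n))
    return windows

# synthetic spectra: channels 0-9 driven by one latent band, 10-19 by another
rng = np.random.default_rng(3)
f1, f2 = rng.standard_normal(200), rng.standard_normal(200)
block1 = np.outer(f1, rng.uniform(0.5, 1.5, 10))
block2 = np.outer(f2, rng.uniform(0.5, 1.5, 10))
spectra = np.hstack([block1, block2]) + 1e-3 * rng.standard_normal((200, 20))
windows = correlation_windows(spectra)
```

    On this synthetic input the algorithm recovers the two underlying bands as two windows; feature selection (stepwise regression or LASSO) would then operate on window-level summaries rather than individual wavenumbers.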

  1. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE
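    The variance propagation SENSIT performs can be written compactly: with a sensitivity-coefficient vector s and a cross-section relative covariance matrix C, first-order perturbation theory gives the relative response variance as the "sandwich rule" var = sᵀ C s. A small numeric sketch with made-up 3-group data (not an actual SENSIT input deck):

```python
import numpy as np

# Hypothetical 3-group example: relative sensitivity coefficients of an
# integral response (e.g. a dose rate) to one reaction cross section,
# and the cross-section relative covariance matrix.
s = np.array([0.8, 0.3, 0.1])            # sensitivity per energy group
C = np.array([[0.04, 0.01, 0.00],        # relative covariance matrix
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

var = s @ C @ s                          # first-order "sandwich rule"
std = np.sqrt(var)                       # estimated relative std deviation
```

    Here the response picks up a relative standard deviation of about 20%, dominated by the well-known first group because its sensitivity coefficient is largest.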

  2. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    Science.gov (United States)

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop

  3. Online high sensitivity measurement system for transuranic aerosols

    International Nuclear Information System (INIS)

    Kordas, J.F.; Phelps, P.L.

    1976-01-01

    A measurement system for transuranic aerosols has been designed that will be able to withstand the corrosive nature of stack effluents and yet have extremely high sensitivity. It will be capable of measuring 1 maximum permissible concentration (MPC) of plutonium or americium in 30 minutes with a fractional standard deviation of less than 0.33. Background resulting from ²¹⁸Po is eliminated by alpha energy discrimination and a decay scheme analysis. A microprocessor controls all data acquisition, data reduction, and instrument calibration

  4. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    Science.gov (United States)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.

  5. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
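    The advantage of nonparametric smoothers over linear regression for sensitivity detection can be seen with a hand-rolled LOESS: on a sinusoidal input-output relation, a straight-line fit explains little variance while a locally weighted linear fit recovers most of it. A self-contained sketch (tricube weights, nearest-neighbour bandwidth; an illustration of the idea, not the authors' stepwise procedure):

```python
import numpy as np

def loess(x, y, frac=0.2):
    """Minimal locally weighted linear regression (LOESS-style smoother):
    for each point, fit a weighted straight line to its nearest
    neighbours using tricube weights."""
    n = len(x)
    k = max(3, int(frac * n))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                       # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        A = np.vstack([np.ones(k), x[idx]]).T
        Aw = A * w[:, None]
        beta = np.linalg.solve(Aw.T @ A, Aw.T @ y[idx])
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

def r_squared(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 300)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(300)    # nonlinear relationship

lin = np.polyval(np.polyfit(x, y, 1), x)
r2_linear = r_squared(y, lin)
r2_loess = r_squared(y, loess(x, y))
```

    The linear fit's R² stays low because the relationship averages out over the input range, whereas the smoother's R² is high — precisely the gap in "informativeness" the abstract describes for nonlinear input-output relationships.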

  6. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  7. Compton imaging with a highly-segmented, position-sensitive HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Steinbach, T.; Hirsch, R.; Reiter, P.; Birkenbach, B.; Bruyneel, B.; Eberth, J.; Hess, H.; Lewandowski, L. [Universitaet zu Koeln, Institut fuer Kernphysik, Koeln (Germany); Gernhaeuser, R.; Maier, L.; Schlarb, M.; Weiler, B.; Winkel, M. [Technische Universitaet Muenchen, Physik Department, Garching (Germany)

    2017-02-15

    A Compton camera based on a highly-segmented high-purity germanium (HPGe) detector and a double-sided silicon-strip detector (DSSD) was developed, tested, and put into operation; the origin of γ radiation was determined successfully. The Compton camera is operated in two different modes. Coincidences from Compton-scattered γ-ray events between DSSD and HPGe detector allow for best angular resolution; while the high-efficiency mode takes advantage of the position sensitivity of the highly-segmented HPGe detector. In this mode the setup is sensitive to the whole 4π solid angle. The interaction-point positions in the 36-fold segmented large-volume HPGe detector are determined by pulse-shape analysis (PSA) of all HPGe detector signals. Imaging algorithms were developed for each mode and successfully implemented. The angular resolution sensitively depends on parameters such as geometry, selected multiplicity and interaction-point distances. Best results were obtained taking into account the crosstalk properties, the time alignment of the signals and the distance metric for the PSA for both operation modes. An angular resolution between 13.8° and 19.1°, depending on the minimal interaction-point distance for the high-efficiency mode at an energy of 1275 keV, was achieved. In the coincidence mode, an increased angular resolution of 4.6° was determined for the same γ-ray energy. (orig.)

  8. A survey of cross-section sensitivity analysis as applied to radiation shielding

    International Nuclear Information System (INIS)

    Goldstein, H.

    1977-01-01

    Cross section sensitivity studies revolve around finding the change in the value of an integral quantity, e.g. transmitted dose, for a given change in one of the cross sections. A review is given of the principal methodologies for obtaining the sensitivity profiles: principally, direct calculations with altered cross sections, and linear perturbation theory. Some of the varied applications of cross section sensitivity analysis are described, including the practice, of questionable value, of adjusting input cross section data sets so as to provide agreement with integral experiments. Finally, a plea is made for using cross section sensitivity analysis as a powerful tool for analysing the transport mechanisms of particles in radiation shields and for constructing models of how cross section phenomena affect the transport. Cross section sensitivities in the shielding area have proved to be highly problem-dependent. Without the understanding afforded by such models, it is impossible to extrapolate the conclusions of cross section sensitivity analysis beyond the narrow limits of the specific situations examined in detail. Some of the elements that might be of use in developing the qualitative models are presented. (orig.)

  9. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
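    Of the three methods compared, the sensitivity index is the simplest to reproduce: sweep one input across its plausible range with the others held at baseline and report SI = (Y_max − Y_min)/Y_max. A sketch below uses a hypothetical stand-in rate model and made-up parameter ranges (not one of the nine published models analysed in the paper):

```python
import numpy as np

def corrosion_rate(duration, resistivity, chloride):
    """Hypothetical stand-in model: rate rises with chloride content,
    falls with concrete resistivity, and decays as corrosion duration
    grows.  Units and constants are illustrative only."""
    return 40.0 * chloride / (resistivity * (1.0 + 0.05 * duration))

def sensitivity_index(model, baseline, ranges, name, steps=50):
    """Sensitivity index SI = (Y_max - Y_min) / Y_max: sweep one input
    across its range while holding the others at baseline."""
    lo, hi = ranges[name]
    ys = [model(**dict(baseline, **{name: v}))
          for v in np.linspace(lo, hi, steps)]
    return (max(ys) - min(ys)) / max(ys)

baseline = {"duration": 10.0, "resistivity": 100.0, "chloride": 1.0}
ranges = {"duration": (0.0, 50.0),
          "resistivity": (50.0, 500.0),
          "chloride": (0.2, 3.0)}
si = {p: sensitivity_index(corrosion_rate, baseline, ranges, p)
      for p in ranges}
```

    With these illustrative ranges all three SI values land near 1, echoing the paper's finding that corrosion rate is highly sensitive to duration, resistivity and chloride content; the univariate and multivariate regression methods differ mainly in how they share credit among correlated inputs.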

  10. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors
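    The Sobol' indices referred to here are typically estimated by Monte Carlo with paired sample matrices. A hedged sketch of the first-order "pick-and-freeze" estimator, applied to a made-up additive toy model rather than the periodic error model:

```python
import numpy as np

def sobol_first_order(model, d, n=20000, seed=42):
    """First-order Sobol' indices by a Saltelli-style pick-and-freeze
    scheme: S_i = mean(y_B * (y_ABi - y_A)) / Var(Y), where the matrix
    AB_i equals sample matrix A with column i taken from B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    y_a, y_b = model(A), model(B)
    var = np.var(np.concatenate([y_a, y_b]))
    s = np.empty(d)
    for i in range(d):
        ab = A.copy()
        ab[:, i] = B[:, i]          # freeze input i at B's values
        s[i] = np.mean(y_b * (model(ab) - y_a)) / var
    return s

# toy model standing in for the periodic error model:
# Var = (1 + 4)/12, so the analytic indices are 0.2 and 0.8
toy = lambda X: X[:, 0] + 2.0 * X[:, 1]
s = sobol_first_order(toy, d=2)
```

    For nearly additive models such as this one, the first-order indices sum close to 1; large gaps between first-order and total-order indices would instead signal the kind of interaction effects a purely local analysis misses.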

  11. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  12. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to perform the sensitivity analysis of the results of simulation of thermal hydraulic codes within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices, making use of a meta model. It also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)

  13. Sensitive rapid analysis of iodine-labelled protein mixture on flat substrates with high spatial resolution

    International Nuclear Information System (INIS)

    Zanevskij, Yu.V.; Ivanov, A.B.; Movchan, S.A.; Peshekhonov, V.D.; Chan Dyk Tkhan'; Chernenko, S.P.; Kaminir, L.B.; Krejndlin, Eh.Ya.; Chernyj, A.A.

    1983-01-01

    The usability of rapid electrophoretic analysis of mixtures of ¹²⁵I-labelled proteins on flat samples is studied by means of a URAN-type installation built around a multiwire proportional chamber. The sensitivity of the method is better than 200 cpm/cm² and the spatial resolution is approximately 1 mm. The rapid analysis takes no longer than several tens of minutes.

  14. Parametric sensitivity analysis for biochemical reaction networks based on pathwise information theory.

    Science.gov (United States)

    Pantazis, Yannis; Katsoulakis, Markos A; Vlachos, Dionisios G

    2013-10-22

    Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as "pathwise". The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. 
In addition, knowledge of the structure of the FIM can be used to efficiently address
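    A rough, self-contained illustration of the pathwise FIM idea (not the authors' implementation): for a hypothetical birth-death network with propensities a1 = k1 and a2 = k2*x, the sum over reactions of E[a_j (d log a_j / d theta)^2] can be estimated by time-averaging along a single SSA trajectory, and it comes out diagonal, mirroring the block-diagonal structure noted in the abstract:

```python
import random

def ssa_pathwise_fim(k1=10.0, k2=1.0, t_end=2000.0, seed=0):
    """Estimate the pathwise FIM of a birth-death network
       (a1 = k1 birth, a2 = k2*x death) by time-averaging
       a_j * (d log a_j / d theta)^2 along one SSA trajectory."""
    rng = random.Random(seed)
    x, t = 0, 0.0
    f11 = f22 = 0.0
    while t < t_end:
        a1, a2 = k1, k2 * x
        a0 = a1 + a2
        dt = rng.expovariate(a0)            # holding time in the current state
        f11 += dt * a1 * (1.0 / k1) ** 2    # d log a1 / d k1 = 1/k1
        f22 += dt * a2 * (1.0 / k2) ** 2    # d log a2 / d k2 = 1/k2
        t += dt
        if rng.random() < a1 / a0:          # pick the firing reaction
            x += 1
        else:
            x -= 1
    # off-diagonal blocks vanish: each propensity depends on one parameter only
    return [[f11 / t, 0.0], [0.0, f22 / t]]

F = ssa_pathwise_fim()
```

    With these rates the stationary mean is k1/k2 = 10 copies, so the FIM is roughly diag(1/k1, (k1/k2)/k2) = diag(0.1, 10): the death rate is the far more identifiable parameter here.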

  15. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  16. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
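    The worst-case calibration that the paper improves on can be sketched for McNemar-type matched binary data: under a bias of at most Γ, each discordant pair contributes a treated-arm event with probability at most Γ/(1+Γ), which bounds the one-sided p-value (a Rosenbaum-style sketch; the paper's average-case variant replaces this worst-case probability):

```python
from math import comb

def worst_case_pvalue(n_discordant, n_treated_events, gamma):
    """Upper bound on the one-sided McNemar p-value when the odds of
       treatment within a pair are biased by at most gamma."""
    p = gamma / (1.0 + gamma)               # worst-case per-pair probability
    n, t = n_discordant, n_treated_events
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(t, n + 1))

p_randomized = worst_case_pvalue(20, 15, 1.0)   # gamma = 1: plain McNemar test
p_biased = worst_case_pvalue(20, 15, 2.0)       # allow a twofold hidden bias
```

    The sensitivity value of a study is the Γ at which this bound first crosses the significance level; a less conservative average-case calibration keeps results significant up to larger Γ, i.e. yields more power.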

  17. What can we learn from global sensitivity analysis of biochemical systems?

    Science.gov (United States)

    Kent, Edward; Neumann, Stefan; Kummer, Ursula; Mendes, Pedro

    2013-01-01

    Most biological models of intermediate size, and probably all large models, need to cope with the fact that many of their parameter values are unknown. In addition, it may not be possible to identify these values unambiguously on the basis of experimental data. This raises the question of how reliable predictions made using such models are. Sensitivity analysis is commonly used to measure the impact of each model parameter on its variables. However, the results of such analyses can depend on the exact set of parameter values used, due to nonlinearity. To mitigate this problem, global sensitivity analysis techniques are used to calculate parameter sensitivities over a wider parameter space. We applied global sensitivity analysis to a selection of five signalling and metabolic models, several of which incorporate experimentally well-determined parameters. Assuming these models represent physiological reality, we explored how the results could change under increasing amounts of parameter uncertainty. Our results show that parameter sensitivities calculated with the physiological parameter values are not necessarily the most frequently observed under random sampling, even in a small interval around the physiological values. Often multimodal distributions were observed. Unsurprisingly, the range of possible sensitivity coefficient values increased with the level of parameter uncertainty, though the amount of parameter uncertainty at which the pattern of control was able to change differed among the models analysed. We suggest that this level of uncertainty can be used as a global measure of model robustness. Finally, a comparison of different global sensitivity analysis techniques shows that, if high-throughput computing resources are available, then random sampling may actually be the most suitable technique.
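    The core observation, that sensitivity coefficients computed at the nominal parameter values need not be representative once parameters are uncertain, can be sketched with a Michaelis-Menten rate law (a hypothetical one-parameter example, not one of the five models analysed):

```python
import random

def scaled_sensitivity(Km, S):
    # d ln v / d ln Km for v = Vmax * S / (Km + S); Vmax cancels out
    return -Km / (Km + S)

def sensitivity_range(rel_uncertainty, n=5000, Km0=2.0, S=1.0, seed=0):
    """Sample Km uniformly within a relative +/- band around its nominal
       value and return the observed range of the sensitivity coefficient."""
    rng = random.Random(seed)
    vals = [scaled_sensitivity(Km0 * (1.0 + rel_uncertainty * rng.uniform(-1, 1)), S)
            for _ in range(n)]
    return min(vals), max(vals)

narrow = sensitivity_range(0.1)   # +/- 10% parameter uncertainty
wide = sensitivity_range(0.5)     # +/- 50% parameter uncertainty
```

    The coefficient spread widens with the uncertainty band; in multi-parameter models the sampled coefficients can also become multimodal, as the study reports.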

  18. Methodology of safety assessment and sensitivity analysis for geologic disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Kimura, Hideo; Takahashi, Tomoyuki; Shima, Shigeki; Matsuzuru, Hideo

    1995-01-01

    A deterministic safety assessment methodology has been developed to evaluate long-term radiological consequences associated with geologic disposal of high-level radioactive waste, and to demonstrate a generic feasibility of geologic disposal. An exposure scenario considered here is based on a normal evolution scenario which excludes events attributable to probabilistic alterations in the environment. A computer code system GSRW thus developed is based on a non site-specific model, and consists of a set of sub-modules for calculating the release of radionuclides from engineered barriers, the transport of radionuclides in and through the geosphere, the behavior of radionuclides in the biosphere, and radiation exposures of the public. In order to identify the important parameters of the assessment models, an automated procedure for sensitivity analysis based on the Differential Algebra method has been developed to apply to the GSRW. (author)

  19. Global sensitivity analysis of bogie dynamics with respect to suspension components

    Energy Technology Data Exchange (ETDEWEB)

    Mousavi Bideleh, Seyed Milad, E-mail: milad.mousavi@chalmers.se; Berbyuk, Viktor, E-mail: viktor.berbyuk@chalmers.se [Chalmers University of Technology, Department of Applied Mechanics (Sweden)

    2016-06-15

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamic performance on the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, a genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  20. Global sensitivity analysis of bogie dynamics with respect to suspension components

    International Nuclear Information System (INIS)

    Mousavi Bideleh, Seyed Milad; Berbyuk, Viktor

    2016-01-01

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamic performance on the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, a genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  1. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
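    GRESS works by source transformation of FORTRAN; the underlying computer-calculus idea, propagating a derivative alongside every value, can be shown in miniature with forward-mode dual numbers (an illustrative analogue, not the GRESS mechanism itself):

```python
import math

class Dual:
    """A value carrying f and df/dx together, so ordinary arithmetic
       propagates exact derivatives (forward-mode automatic differentiation)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule applied alongside the value
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

x = Dual(2.0, 1.0)         # seed the derivative dx/dx = 1
y = x * sin(x) + 3 * x     # y.dot holds d/dx [x sin x + 3x] at x = 2
```

    The appeal, as with GRESS, is that existing model code computes derivatives without any hand-written sensitivity equations.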

  2. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while the traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides chances to exclude extra-influential DMUs (Decision Making Units) and finds extra-ordinal DMUs, and (2) it includes the functionality of the traditional DEA and Super-DEA so that it is able to deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and the difference from the traditional DEA.

  3. Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2015-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate the congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized control manner. The sensitivity analysis can obtain the changes of the optimal energy planning, and thereby the line loading profiles, over infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming over which the DT method is formed. Three case...

  4. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods
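    For the linear-programming case with closed formulas, the dual variable of a binding constraint is exactly the rate of change of the optimal value with respect to its right-hand side. A textbook-style sketch (not one of the paper's civil engineering reliability examples), using brute-force vertex enumeration for a two-variable LP:

```python
from itertools import combinations

def solve_lp2(c, A, b, tol=1e-9):
    """Maximize c.x subject to A x <= b and x >= 0 (two variables),
       by enumerating vertices at intersections of constraint pairs."""
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]   # include x >= 0
    rhs = list(b) + [0.0, 0.0]
    best_val, best_x = None, None
    for i, j in combinations(range(len(rows)), 2):
        (a11, a12), (a21, a22) = rows[i], rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < tol:
            continue                          # parallel constraints, no vertex
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(r[0] * x1 + r[1] * x2 <= v + tol for r, v in zip(rows, rhs)):
            val = c[0] * x1 + c[1] * x2
            if best_val is None or val > best_val:
                best_val, best_x = val, (x1, x2)
    return best_val, best_x

# max 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
c = [3.0, 5.0]
z0, x0 = solve_lp2(c, A, [4.0, 12.0, 18.0])   # optimum 36 at (2, 6)
z1, _ = solve_lp2(c, A, [4.0, 12.0, 19.0])    # relax the third resource by 1
shadow_price = z1 - z0                        # equals the dual variable, 1.0
```

    The same reasoning extended to nonlinear programs is what the paper's general sensitivity formulas deliver analytically.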

  5. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  6. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  7. Methylation-Sensitive High Resolution Melting (MS-HRM).

    Science.gov (United States)

    Hussmann, Dianna; Hansen, Lise Lotte

    2018-01-01

    Methylation-Sensitive High Resolution Melting (MS-HRM) is an in-tube, PCR-based method to detect methylation levels at specific loci of interest. A unique primer design facilitates a high sensitivity of the assays, enabling detection of down to 0.1-1% methylated alleles in an unmethylated background. Primers for MS-HRM assays are designed to be complementary to the methylated allele, and a specific annealing temperature enables these primers to anneal both to the methylated and the unmethylated alleles, thereby increasing the sensitivity of the assays. Bisulfite treatment of the DNA prior to performing MS-HRM ensures a different base composition between methylated and unmethylated DNA, which is used to separate the resulting amplicons by high resolution melting. The high sensitivity of MS-HRM has proven useful for detecting cancer biomarkers in a noninvasive manner in urine from bladder cancer patients, in stool from colorectal cancer patients, and in buccal mucosa from breast cancer patients. MS-HRM is a fast method to diagnose imprinted diseases and to clinically validate results from whole-epigenome studies. The ability to detect few copies of methylated DNA makes MS-HRM a key player in the quest for establishing links between environmental exposure, epigenetic changes, and disease.

  8. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice

  9. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  10. Highly sensitive analysis of polycyclic aromatic hydrocarbons in environmental water with porous cellulose/zeolitic imidazolate framework-8 composite microspheres as a novel adsorbent coupled with high-performance liquid chromatography.

    Science.gov (United States)

    Liang, Xiaotong; Liu, Shengquan; Zhu, Rong; Xiao, Lixia; Yao, Shouzhuo

    2016-07-01

    In this work, novel cellulose/zeolitic imidazolate framework-8 composite microspheres were successfully fabricated and utilized as a sorbent for the efficient extraction and sensitive analysis of polycyclic aromatic hydrocarbons in environmental water. The composite microspheres were synthesized through in situ hydrothermal growth of zeolitic imidazolate framework-8 on a cellulose matrix, and exhibited the intended hierarchical structure and chemical composition, as confirmed by scanning electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction, and Brunauer-Emmett-Teller surface area characterization. A robust and highly efficient method was then developed with the as-prepared composite microspheres as a novel solid-phase extraction sorbent, with optimized extraction conditions such as sorbent amount, sample volume, extraction time, desorption conditions, volume of organic modifier, and ionic strength. The method exhibited high sensitivity, with limits of detection down to 0.1-1.0 ng/L, satisfactory linearity with correlation coefficients ranging from 0.9988 to 0.9999, and good recoveries of 66.7-121.2% with relative standard deviations less than 10% for the analysis of polycyclic aromatic hydrocarbons. The method is thus convenient and efficient for polycyclic aromatic hydrocarbon extraction and detection, with potential for future analysis of environmental water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.

  12. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.

  13. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  14. Sensitivity analysis on retardation effect of natural barriers against radionuclide transport

    International Nuclear Information System (INIS)

    Hatanaka, K.

    1994-01-01

    The generic performance assessment of the geological disposal system for high level waste (HLW) in Japan has been carried out by the Power Reactor and Nuclear Fuel Development Corporation (PNC) in accordance with the overall HLW management program defined by the Atomic Energy Commission of Japan. The Japanese concept of the geological disposal system is based on a multi-barrier system composed of vitrified waste, carbon steel overpack, a thick bentonite buffer and a variety of realistic geological conditions. The main objectives of the study are the detailed analysis of the performance of the engineered barrier system (EBS), the analysis of the performance of the natural barrier system (NBS), and the evaluation of its compliance with the required overall system performance. Sensitivity analysis was carried out to investigate how, and to what extent, the NBS retards the release of radionuclides to the biosphere, and to clarify the conditions sufficient to ensure that the overall system meets the safety requirement. The radionuclide transport model in geological media, the sensitivity analysis, and the calculated results of the retardation effect of the NBS in terms of the sensitivity parameters are reported. (K.I.)

  15. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-01-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-“coupled”- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary
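    The variance-reduction principle behind the coupling, in a far simpler setting than spatial KMC, can be sketched by comparing independent streams with common random numbers for a centered-difference sensitivity of a constant-rate birth process (an illustrative coupling, not the goal-oriented construction of the paper):

```python
import random

def count_events(rate, t_end, rng):
    """Number of events of a constant-rate Poisson (birth) process up to t_end."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return n
        n += 1

def fd_sensitivity(rate, h, t_end, n_rep, coupled, seed=0):
    """Centered finite-difference estimate of d E[N(T)] / d rate, using
       either independent streams or common random numbers (coupled)."""
    master = random.Random(seed)
    diffs = []
    for _ in range(n_rep):
        s_plus = master.randrange(2**30)
        s_minus = s_plus if coupled else master.randrange(2**30)
        n_plus = count_events(rate + h, t_end, random.Random(s_plus))
        n_minus = count_events(rate - h, t_end, random.Random(s_minus))
        diffs.append((n_plus - n_minus) / (2.0 * h))
    mean = sum(diffs) / n_rep
    var = sum((d - mean) ** 2 for d in diffs) / (n_rep - 1)
    return mean, var

m_ind, v_ind = fd_sensitivity(5.0, 0.5, 10.0, 2000, coupled=False)
m_cpl, v_cpl = fd_sensitivity(5.0, 0.5, 10.0, 2000, coupled=True)
```

    Both estimators target d E[N(T)]/d rate = T = 10, but sharing the random stream couples the perturbed and unperturbed trajectories and cuts the estimator variance substantially; the paper's contribution is to choose the coupling optimally for the observable of interest.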

  16. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. 
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  17. Automated sensitivity analysis of the radionuclide migration code UCBNE10.2

    International Nuclear Information System (INIS)

    Pin, F.G.; Worley, B.A.; Oblow, E.M.; Wright, R.Q.; Harper, W.V.

    1985-01-01

    The Salt Repository Project (SRP) of the US Department of Energy is performing ongoing performance assessment analyses for the eventual licensing of an underground high-level nuclear waste repository in salt. As part of these studies, sensitivity and uncertainty analysis play a major role in the identification of important parameters, and in the identification of specific data needs for site characterization. Oak Ridge National Laboratory has supported the SRP in this effort resulting in the development of an automated procedure for performing large-scale sensitivity analysis using computer calculus. GRESS, Gradient Enhanced Software System, is a pre-compiler that can process FORTRAN computer codes and add derivative taking capabilities to the normal calculated results. The GRESS code is described and applied to the code UCB-NE-10.2 which simulates the migration through an adsorptive medium of the radionuclide members of a decay chain. Conclusions are drawn relative to the applicability of GRESS for more general large-scale modeling sensitivity studies, and the role of such techniques in the overall SRP sensitivity/uncertainty program is detailed. 6 refs., 2 figs., 3 tabs

  18. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents risk and sensitivity analysis of the impact of external events on safe operation in general and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and assessment of the external hazards. Real statistical data are used, as well as initial external event simulation. Preliminary screening criteria are applied. The analysis of external event impact on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rains and winds, forest fire and flying parts of the turbine are analysed. The models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty introduced by an external event and its model. Even when the external event analysis shows rather limited danger, the sensitivity analysis can identify the causes with the highest influence, and such possible future variations can be significant for safety levels and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, event occurrence and propagation can be substantially uncertain. (author)

  19. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    OpenAIRE

    Harry R. Millwater; R. Wesley Osborn

    2006-01-01

    A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or su...

  20. High blood pressure and visual sensitivity

    Science.gov (United States)

    Eisner, Alvin; Samples, John R.

    2003-09-01

    The study had two main purposes: (1) to determine whether the foveal visual sensitivities of people treated for high blood pressure (vascular hypertension) differ from the sensitivities of people who have not been diagnosed with high blood pressure and (2) to understand how visual adaptation is related to standard measures of systemic cardiovascular function. Two groups of middle-aged subjects, hypertensive and normotensive, were examined with a series of test/background stimulus combinations. All subjects met rigorous inclusion criteria for excellent ocular health. Although the visual sensitivities of the two subject groups overlapped extensively, the age-related rate of sensitivity loss was, for some measures, greater for the hypertensive subjects, possibly because of adaptation differences between the two groups. Overall, the degree of steady-state sensitivity loss resulting from an increase of background illuminance (for 580-nm backgrounds) was slightly less for the hypertensive subjects. Among normotensive subjects, the ability of a bright (3.8-log-td), long-wavelength (640-nm) adapting background to selectively suppress the flicker response of long-wavelength-sensitive (LWS) cones was related inversely to the ratio of mean arterial blood pressure to heart rate. The degree of selective suppression was also related to heart rate alone, and there was evidence that short-term changes of cardiovascular response were important. The results suggest that (1) vascular hypertension, or possibly its treatment, subtly affects visual function even in the absence of eye disease and (2) changes in blood flow affect retinal light-adaptation processes involved in the selective suppression of the flicker response from LWS cones caused by bright, long-wavelength backgrounds.

  1. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure to numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
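The derivative-propagation idea behind GRESS can be illustrated in miniature with forward-mode automatic differentiation: a dual number carries a value and its derivative through ordinary arithmetic, so a "simulation code" written against it returns sensitivities alongside its normal results. This is a hedged sketch of the general technique, not GRESS itself (which instruments FORTRAN source code).

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together,
    the same derivative-propagation idea GRESS adds to FORTRAN codes."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def model(k):
    # hypothetical "simulation code": y = 3*k^2 + 2*k + 1
    return 3 * k * k + 2 * k + 1

x = Dual(2.0, 1.0)   # seed the input with dx/dx = 1
y = model(x)
print(y.val, y.der)  # value 17.0, sensitivity dy/dk = 6k + 2 = 14.0
```

Running the unmodified model function on a `Dual` input yields the derivative with no hand-written sensitivity equations, which is exactly the labor savings the abstract describes.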

  2. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  3. Highly sensitive high resolution Raman spectroscopy using resonant ionization methods

    International Nuclear Information System (INIS)

    Owyoung, A.; Esherick, P.

    1984-05-01

    In recent years, the introduction of stimulated Raman methods has offered orders-of-magnitude improvement in spectral resolving power for gas phase Raman studies. Nevertheless, the inherent weakness of the Raman process suggests the need for significantly more sensitive techniques in Raman spectroscopy. In this paper we describe a new approach to this problem. Our new technique, which we call ionization-detected stimulated Raman spectroscopy (IDSRS), combines high-resolution SRS with highly sensitive resonant laser ionization to achieve an increase in sensitivity of over three orders of magnitude. The excitation/detection process involves three sequential steps: (1) population of a vibrationally excited state via stimulated Raman pumping; (2) selective ionization of the vibrationally excited molecule with a tunable uv source; and (3) collection of the ionized species at biased electrodes, where they are detected as current in an external circuit

  4. Best estimate analysis of LOFT L2-5 with CATHARE: uncertainty and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    JOUCLA, Jerome; PROBST, Pierre [Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); FOUET, Fabrice [APTUS, Versailles (France)

    2008-07-01

    The revision of 10 CFR 50.46 in 1988 made possible the use of best-estimate codes. They may be used in safety demonstration and licensing, provided that uncertainties are added to the relevant output parameters before comparing them with the acceptance criteria. In the safety analysis of the large break loss of coolant accident, it was agreed that the 95th percentile, estimated with a high degree of confidence, should be lower than the acceptance criteria. It appeared necessary to IRSN, technical support of the French Safety Authority, to get more insight into these strategies, which are being developed not only in thermal-hydraulics but also in other fields such as neutronics. To estimate the 95th percentile with a high confidence level, we propose to use rank statistics or bootstrap. Toward the objective of assessing uncertainty, it is useful to determine and classify the main input parameters. We suggest approximating the code by a surrogate model, the Kriging model, which will be used to perform a sensitivity analysis with the SOBOL methodology. This paper presents the application of these two new methodologies for uncertainty and sensitivity analysis of the maximum peak cladding temperature of the LOFT L2-5 test with the CATHARE code. (authors)
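The two estimation routes mentioned above can be sketched: Wilks' rank-statistics formula, which fixes how many code runs are needed so that the sample maximum bounds the 95th percentile with 95% confidence (the classic answer is 59), and a simple percentile bootstrap. The Gaussian "output" below is a stand-in for illustration, not CATHARE results.

```python
import random

def wilks_confidence(n, p=0.95):
    """Wilks' first-order formula: using the maximum of n runs as the
    bound, the confidence of covering the p-quantile is 1 - p**n."""
    return 1.0 - p ** n

# 59 runs is the smallest n giving 95% confidence on the 95th percentile
assert wilks_confidence(58) < 0.95 <= wilks_confidence(59)

def bootstrap_p95(sample, n_boot, rng):
    """Percentile bootstrap: resample the outputs with replacement,
    take each resample's empirical 95th percentile, and return the
    upper 95% bound of those estimates."""
    m = len(sample)
    estimates = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(sample) for _ in range(m))
        estimates.append(resample[int(0.95 * m)])
    estimates.sort()
    return estimates[int(0.95 * n_boot)]

rng = random.Random(0)
# hypothetical output sample, e.g. peak cladding temperatures
outputs = [rng.gauss(1000.0, 50.0) for _ in range(200)]
bound = bootstrap_p95(outputs, 1000, rng)
print(bound)
```

Wilks' bound needs only an ordered sample and no distributional assumption; the bootstrap exploits the whole sample and typically gives a tighter, though approximate, bound.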

  5. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. This paper therefore reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis of the interval between plates, criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and criticality decreases as the interval grows. As a result, no single behaviour common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes.

  6. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. This paper therefore reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis of the interval between plates, criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and criticality decreases as the interval grows. As a result, no single behaviour common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes

  7. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

    The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  8. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
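The bootstrap convergence check for ranking can be illustrated on a toy model: even while sensitivity index values are still noisy, the ordering of the parameters may already be stable across bootstrap resamples. The index used here (absolute output correlation) and the test function are illustrative assumptions, not the study's hydrological models or methods.

```python
import random

def model(x1, x2, x3):
    # hypothetical test function with one strong, one medium,
    # one nearly inactive input
    return 4.0 * x1 + 2.0 * x2 + 0.1 * x3

def indices(data):
    """Crude sensitivity index per input: absolute sample correlation
    with the output (enough to illustrate ranking convergence)."""
    ys = [row[-1] for row in data]
    my = sum(ys) / len(ys)
    out = []
    for j in range(3):
        xs = [row[j] for row in data]
        mx = sum(xs) / len(xs)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        out.append(abs(cov) / (vx * vy) ** 0.5)
    return out

rng = random.Random(1)
data = []
for _ in range(200):
    x = [rng.random() for _ in range(3)]
    data.append(x + [model(*x)])

# Bootstrap the ranking: resample the runs with replacement and count
# how often the ordering of the three indices is reproduced.
stable = 0
for _ in range(200):
    boot = [data[rng.randrange(len(data))] for _ in range(len(data))]
    s = indices(boot)
    if s[0] > s[1] > s[2]:
        stable += 1
print(stable / 200.0)  # fraction of resamples with a stable ranking
```

A ranking that is reproduced in nearly all resamples can be trusted even when the index values themselves still carry visible sampling error, which is the study's central observation.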

  9. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristics of the polynomials kernel function and the local characteristics of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
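For reference, the quantity such a meta-model is built to approximate, the first-order Sobol index, can be estimated directly by plain Monte Carlo with Saltelli's two-matrix scheme. The sketch below uses a hypothetical additive model (exact indices 1/1.26, 0.25/1.26 and 0.01/1.26, i.e. about 0.79, 0.20 and 0.008), not the paper's SVR approach.

```python
import random

def model(a, b, c):
    # hypothetical additive test model with independent uniform inputs
    return a + 0.5 * b + 0.1 * c

def sobol_first_order(n, rng):
    """Plain Monte Carlo estimate of first-order Sobol' indices using
    two independent sample matrices A and B: for each input j, matrix
    A_B^j takes column j from B and the rest from A (Saltelli scheme)."""
    A = [[rng.random() for _ in range(3)] for _ in range(n)]
    B = [[rng.random() for _ in range(3)] for _ in range(n)]
    fA = [model(*x) for x in A]
    fB = [model(*x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for j in range(3):
        fABj = [model(*[B[i][k] if k == j else A[i][k] for k in range(3)])
                for i in range(n)]
        # partial variance estimator V_j = E[f(B) * (f(A_B^j) - f(A))]
        Vj = sum(fB[i] * (fABj[i] - fA[i]) for i in range(n)) / n
        S.append(Vj / var)
    return S

S = sobol_first_order(5000, random.Random(7))
print(S)
```

The cost is n*(k+2) model runs for k inputs, which is exactly what motivates replacing the model by a cheap meta-model before computing the indices.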

  10. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models are classified into two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.
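Regional Sensitivity Analysis, one of the two methods used above, can be sketched as follows: sample the parameters, split the sample into "behavioural" and "non-behavioural" sets by a goodness-of-fit threshold, and compare the marginal distributions of each parameter across the two sets with a Kolmogorov-Smirnov distance. The toy "rainfall-runoff" function and parameter names are illustrative assumptions, not the authors' fuzzy models.

```python
import random

def model(k, recession):
    # hypothetical surrogate: output depends strongly on k,
    # weakly on the recession parameter
    return 10.0 * k + 0.2 * recession

rng = random.Random(3)
samples, scores = [], []
observed = 5.0
for _ in range(500):
    p = (rng.random(), rng.random())
    samples.append(p)
    scores.append(-abs(model(*p) - observed))  # goodness of fit

# Split: best 20% of fits are "behavioural", the rest "non-behavioural".
cut = sorted(scores)[int(0.8 * len(scores))]
behav = [p for p, s in zip(samples, scores) if s >= cut]
non = [p for p, s in zip(samples, scores) if s < cut]

def ks(a, b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    def cdf(xs, t):
        return sum(1 for x in xs if x <= t) / len(xs)
    return max(abs(cdf(a, t) - cdf(b, t)) for t in a + b)

ks_k = ks([p[0] for p in behav], [p[0] for p in non])
ks_r = ks([p[1] for p in behav], [p[1] for p in non])
print(ks_k, ks_r)  # the influential parameter k shows the larger distance
```

A large distance means the behavioural condition constrains that parameter, i.e. the output measure is sensitive to it; an insensitive parameter leaves both marginals nearly identical.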

  11. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  12. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  13. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    (Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...
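The Morris method referred to above can be sketched: randomized one-at-a-time trajectories yield "elementary effects" per parameter, and the mean absolute effect mu* screens influential parameters at low cost. The response function below is a hypothetical stand-in, not the fire dynamics simulation.

```python
import random

def model(x):
    # hypothetical response: one strong, one weak, one inactive input
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

def morris_mu_star(f, k, r, delta, rng):
    """Elementary Effects (Morris) screening: r randomized
    one-at-a-time trajectories; returns mu*, the mean absolute
    elementary effect per input."""
    mu = [0.0] * k
    for _ in range(r):
        # random start, leaving room for the +delta step
        x = [rng.random() * (1 - delta) for _ in range(k)]
        y = f(x)
        order = list(range(k))
        rng.shuffle(order)  # random order of one-at-a-time steps
        for j in order:
            x2 = list(x)
            x2[j] += delta
            y2 = f(x2)
            mu[j] += abs(y2 - y) / delta  # elementary effect of input j
            x, y = x2, y2
    return [m / r for m in mu]

mu_star = morris_mu_star(model, 3, 50, 0.5, random.Random(11))
print(mu_star)  # ranks x0 >> x1 >> x2
```

Each trajectory costs only k+1 model runs, which is why the method suits expensive simulations such as CFD-based fire models.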

  14. Systemization of burnup sensitivity analysis code (2) (Contract research)

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2008-08-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoint of improving plant economic efficiency with rationally high performance cores as well as reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. 
Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or combined as the occasion demands

  15. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  16. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    Science.gov (United States)

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem for analysis. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation enabled the concentration of individual peaks into a volume similar to that of the NMR flow cell, thereby maximizing the sensitivity to the theoretical upper limit. The UHPLC concentration of compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture led to as much as a three-fold increase in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture in actual laboratory-scale development of a synthetic process. Using UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR successfully overcomes the difficulties associated with analyses of minor components in a complex mixture by LC-NMR, which are problematic even when an ultra-high field magnet and cryogenic probe are used. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Retinal sensitivity and choroidal thickness in high myopia.

    Science.gov (United States)

    Zaben, Ahmad; Zapata, Miguel Á; Garcia-Arumi, Jose

    2015-03-01

    To estimate the association between choroidal thickness in the macular area and retinal sensitivity in eyes with high myopia. This investigation was a cross-sectional study of patients with high myopia, all of whom had their retinal sensitivity measured with macular integrity assessment microperimetry. The choroidal thicknesses in the macular area were then measured by optical coherence tomography, and statistical correlations between retinal function and anatomical structure, as assessed by the two types of measurements, were analyzed. Ninety-six eyes from 77 patients with high myopia were studied. The patients had a mean age ± standard deviation of 38.9 ± 13.2 years, with spherical equivalent values ranging from -6.00 diopter to -20.00 diopter (8.74 ± 2.73 diopter). The mean central choroidal thickness was 159.00 ± 50.57. The mean choroidal thickness was directly correlated with sensitivity (r = 0.306; P = 0.004) and visual acuity but indirectly correlated with the spherical equivalent values and patient age. The mean sensitivity was not significantly correlated with the macular foveal thickness (r = -0.174; P = 0.101) or with the overall macular thickness (r = 0.103; P = 0.334); furthermore, the mean sensitivity was significantly correlated with visual acuity (r = 0.431; P < 0.001) and the spherical equivalent values (r = -0.306; P = 0.003). Retinal sensitivity in highly myopic eyes is directly correlated with choroidal thickness and does not seem to be associated with retinal thickness. Thus, in patients with high myopia, accurate measurements of choroidal thickness may provide more accurate information about this pathologic condition because choroidal thickness correlates to a greater degree with the functional parameters, patient age, and spherical equivalent values.

  18. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more

  19. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational
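Method (2), direct perturbation of input parameters, can be sketched: perturb one parameter at a time by a small relative amount and form a normalized sensitivity coefficient, d ln(output) / d ln(parameter). The dose function and parameter names below are hypothetical stand-ins, not RESRAD's pathway models.

```python
def dose(params):
    """Hypothetical stand-in for a pathway dose calculation."""
    return params["ingestion"] * 2.0 + params["soil_conc"] ** 1.5

def perturbation_sensitivity(f, params, rel=0.01):
    """Direct perturbation, one parameter at a time: each parameter is
    increased by the relative amount `rel` and the normalized
    sensitivity (relative output change / relative input change)
    is recorded."""
    base = f(params)
    out = {}
    for name in params:
        perturbed = dict(params)
        perturbed[name] = params[name] * (1.0 + rel)
        out[name] = (f(perturbed) - base) / base / rel
    return out

sens = perturbation_sensitivity(dose, {"ingestion": 3.0, "soil_conc": 4.0})
print(sens)
```

For this toy function the coefficients approach the exact elasticities 6/14 and 1.5 * 8/14 as the perturbation shrinks; the same one-at-a-time scheme applies to any black-box code, at the cost of one extra run per parameter.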

  20. Mesoporous structured MIPs@CDs fluorescence sensor for highly sensitive detection of TNT.

    Science.gov (United States)

    Xu, Shoufang; Lu, Hongzhi

    2016-11-15

    A facile strategy was developed to prepare a mesoporous structured molecularly imprinted polymer capped carbon dots (M-MIPs@CDs) fluorescence sensor for highly sensitive and selective determination of TNT. The strategy, using amino-CDs directly as the "functional monomer" for imprinting, simplifies the imprinting process and provides good accessibility to the recognition sites. The as-prepared M-MIPs@CDs sensor, using periodic mesoporous silica as the imprinting matrix and amino-CDs directly as the "functional monomer", exhibited excellent selectivity and sensitivity toward TNT with a detection limit of 17 nM. The sensor could be recycled 10 times without an obvious decrease in efficiency. The feasibility of the developed method for real samples was successfully evaluated through the analysis of TNT in soil and water samples, with satisfactory recoveries of 88.6-95.7%. The method proposed in this work was proved to be a convenient and practical way to prepare highly sensitive and selective fluorescence MIPs@CDs sensors. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output is of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand and of the estimation of small failure probabilities on the other is first given. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology reflecting the impact of input density modification on the failure probability is then explored. The proposed methods are then applied to the CWNR case, which motivates this thesis. (author)

  2. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Along with the increase in computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical, repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation

  3. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

Sensitivity analysis is a mathematical means of analysing the sensitivity of a computational model to variations of its input parameters; it is thus a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different combinations of parameter values. Alongside the growth of computing capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce parameter samples of various sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis; on this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation

  4. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Directory of Open Access Journals (Sweden)

    Gafurov Andrey

    2018-01-01

The specific nature of high-rise investment projects, entailing long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the “Project analysis scenario” flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.

  5. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Science.gov (United States)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

The specific nature of high-rise investment projects, entailing long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.

  6. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

Probability bounds analysis (PBA) provides analysts with a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values.
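The "pinching" idea in this abstract can be illustrated with a minimal sketch: measure the spread of Monte Carlo results across plausible alternative input distributions, then fix (pinch) one input to a precise value and see how much the spread narrows. The toy model and candidate distributions below are hypothetical stand-ins, not taken from Ferson and Tucker:

```python
import random
import statistics

random.seed(0)

def model(a, b):
    # toy response function; stands in for a probabilistic risk model
    return a * a + b

def output_spread(a_candidates, b_candidates, n=2000):
    """Width of the envelope of Monte Carlo means over every combination
    of plausible input distributions (a crude stand-in for a p-box)."""
    means = []
    for a_lo, a_hi in a_candidates:
        for b_lo, b_hi in b_candidates:
            ys = [model(random.uniform(a_lo, a_hi), random.uniform(b_lo, b_hi))
                  for _ in range(n)]
            means.append(statistics.fmean(ys))
    return max(means) - min(means)

# plausible alternative uniform distributions for each uncertain input
a_cands = [(0.0, 1.0), (0.5, 1.5)]
b_cands = [(0.0, 2.0), (1.0, 3.0)]

base = output_spread(a_cands, b_cands)
# "pinch" one input to a precise value: its candidate set collapses to a point
pinched_a = output_spread([(1.0, 1.0)], b_cands)
pinched_b = output_spread(a_cands, [(1.0, 1.0)])
# the input whose pinching narrows the envelope most is the most influential
```

In a full PBA the envelope would be computed over cumulative distribution functions rather than means, but the sensitivity logic, comparing the envelope before and after pinching, is the same.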

  7. High pressure-sensitive gene expression in Lactobacillus sanfranciscensis

    Directory of Open Access Journals (Sweden)

    R.F. Vogel

    2005-08-01

Lactobacillus sanfranciscensis is a Gram-positive lactic acid bacterium used in food biotechnology. It is necessary to investigate many aspects of a model organism to elucidate mechanisms of stress response, to facilitate preparation, application and performance in food fermentation, to understand mechanisms of inactivation, and to identify novel tools for high pressure biotechnology. To investigate the mechanisms of the complex bacterial response to high pressure we have analyzed changes in the proteome and transcriptome by 2-D electrophoresis, and by microarrays and real time PCR, respectively. More than 16 proteins were found to be differentially expressed upon high pressure stress and were compared to those sensitive to other stresses. Except for one apparently high pressure-specific stress protein, no pressure-specific stress proteins were found, and the proteome response to pressure was found to differ from that induced by other stresses. Selected pressure-sensitive proteins were partially sequenced and their genes were identified by reverse genetics. In a transcriptome analysis of a redundancy cleared shot gun library, about 7% of the genes investigated were found to be affected. Most of them appeared to be up-regulated 2- to 4-fold and these results were confirmed by real time PCR. Gene induction was shown for some genes up-regulated at the proteome level (clpL/groEL/rbsK), while the response of others to high hydrostatic pressure at the transcriptome level seemed to differ from that observed at the proteome level. The up-regulation of selected genes supports the view that the cell tries to compensate for pressure-induced impairment of translation and membrane transport.

  8. Scanning Auger microscopy for high lateral and depth elemental sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, E., E-mail: eugenie.martinez@cea.fr [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Yadav, P. [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Bouttemy, M. [Institut Lavoisier de Versailles, 45 av. des Etats-Unis, 78035 Versailles Cedex (France); Renault, O.; Borowik, Ł.; Bertin, F. [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Etcheberry, A. [Institut Lavoisier de Versailles, 45 av. des Etats-Unis, 78035 Versailles Cedex (France); Chabli, A. [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France)

    2013-12-15

Highlights: • SAM performances and limitations are illustrated on real practical cases such as the analysis of nanowires and nanodots. • High spatial elemental resolution is shown with the analysis of reference semiconducting Al0.7Ga0.3As/GaAs multilayers. • High in-depth elemental resolution is also illustrated: Auger depth profiling with low-energy ion beams allows revealing ultra-thin layers (∼1 nm). • Analysis of cross-sectional samples is another effective approach to obtain in-depth elemental information. Abstract: Scanning Auger microscopy is currently gaining interest for investigating nanostructures or thin multilayer stacks developed for nanotechnologies. New-generation Auger nanoprobes combine high lateral (∼10 nm), energy (0.1%) and depth (∼2 nm) resolutions, thus offering the possibility to analyze the elemental composition as well as the chemical state at the nanometre scale. We report here on the performances and limitations on practical examples from nanotechnology research. The spatial elemental sensitivity is illustrated with the analysis of Al0.7Ga0.3As/GaAs heterostructures, Si nanowires and SiC nanodots. Regarding the elemental in-depth composition, two effective approaches are presented: low-energy depth profiling to reveal ultra-thin layers (∼1 nm) and analysis of cross-sectional samples.

  9. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    Science.gov (United States)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as is necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and to improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.

  10. Laser-engraved carbon nanotube paper for instilling high sensitivity, high stretchability, and high linearity in strain sensors

    KAUST Repository

    Xin, Yangyang

    2017-06-29

There is an increasing demand for strain sensors with high sensitivity and high stretchability for new applications such as robotics or wearable electronics. However, among the available technologies, the sensitivity of the sensors varies widely. These sensors are also highly nonlinear, making reliable measurement challenging. Here we introduce a new family of sensors composed of a laser-engraved carbon nanotube paper embedded in an elastomer. A roll-to-roll pressing of these sensors activates a pre-defined fragmentation process, which results in a well-controlled, fragmented microstructure. Such sensors are reproducible and durable and can attain ultrahigh sensitivity and high stretchability (with a gauge factor of over 4.2 × 10^4 at 150% strain). Moreover, they can attain high linearity from 0% to 15% and from 22% to 150% strain. They are good candidates for stretchable electronic applications that require high sensitivity and linearity at large strains.
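For context, the gauge factor quoted above is the standard sensitivity metric for resistive strain sensors: the relative resistance change divided by the applied strain. A short sketch, with hypothetical resistance values chosen to reproduce a gauge factor of 4.2 × 10^4 at 150% strain, makes the definition concrete:

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (delta_R / R0) / strain, the standard
    sensitivity metric for resistive strain sensors."""
    return (r - r0) / r0 / strain

# hypothetical readings: a sensor of base resistance 1 unit whose
# resistance rises to 63001 units at a strain of 1.5 (i.e. 150%)
gf = gauge_factor(1.0, 63001.0, 1.5)   # (63000 / 1) / 1.5 = 42000.0
```

A gauge factor of 4.2 × 10^4 thus means the relative resistance change is tens of thousands of times larger than the strain itself, which is what enables resolving small strains at large elongations.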

  11. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before the sensitivity analysis can be performed. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' is used to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  12. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  13. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

This book contains a detailed presentation of the general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is placed on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions: SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...

  14. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of a basic variable is transformed into a set of RS of conditional failure probabilities with respect to that distribution parameter. By use of the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes.
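The core idea of Subset simulation that this abstract builds on is to express a rare failure probability as a product of larger, easily estimated conditional probabilities. The sketch below illustrates that decomposition on a one-dimensional toy limit state; for simplicity it draws exact conditional samples via the inverse CDF, where a real Subsim implementation would use the MCMC conditional sampling the abstract refers to:

```python
import random
from statistics import NormalDist

random.seed(5)
nd = NormalDist()

def conditional_sample(t):
    """Exact sample of X | X > t for standard normal X via the inverse CDF.
    (A simplified stand-in for the MCMC conditional sampling in real Subsim.)"""
    u = nd.cdf(t) + (1.0 - nd.cdf(t)) * random.random()
    return nd.inv_cdf(min(u, 1.0 - 1e-12))

def subset_sim(thresholds, n=5000):
    """Estimate P(X > t_K) as a product of conditional probabilities
    P(X > t_k | X > t_{k-1}), each large enough to estimate with n samples."""
    p, prev = 1.0, None
    for t in thresholds:
        if prev is None:
            samples = [random.gauss(0.0, 1.0) for _ in range(n)]
        else:
            samples = [conditional_sample(prev) for _ in range(n)]
        p *= sum(s > t for s in samples) / n
        prev = t
    return p

# rare event P(X > 4.5), about 3.4e-6: far too rare for plain Monte Carlo
# with n = 5000, but reachable through three intermediate levels
est = subset_sim([1.5, 3.0, 4.5])
```

Each level only has to estimate a probability of a few percent, which is why the product reaches event probabilities far below 1/n.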

  15. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
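The PRCC used in this abstract can be sketched in a few lines: rank-transform all variables, regress the other input out of both the input of interest and the output, and correlate the residuals. The toy model below (one dominant monotonic input, one weak input) is hypothetical, not the SWMM case study:

```python
import random
import math

random.seed(1)

def rank(xs):
    """0-based ranks of a sample (ties assumed absent for continuous data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def residuals(y, x):
    """Residuals of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return [c - (my + b * (a - mx)) for a, c in zip(x, y)]

def prcc(xi, xother, y):
    """PRCC of xi versus y, controlling for one other input."""
    ri, ro, ry = rank(xi), rank(xother), rank(y)
    return pearson(residuals(ri, ro), residuals(ry, ro))

# toy model: y depends strongly and monotonically on x1, weakly on x2
n = 500
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [math.exp(3 * a) + 0.1 * b for a, b in zip(x1, x2)]

p1 = prcc(x1, x2, y)   # close to 1: strong monotonic dependence
p2 = prcc(x2, x1, y)   # much smaller in magnitude
```

With LHS samples, exactly the same arrays can also be fed to a mutual-information estimator, which is the complementary, non-monotonic measure the paper pairs with PRCC.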

  16. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters caused by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters.
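The mechanism described here, driving all inputs along a search curve and reading each input's variance contribution off the Fourier amplitudes at harmonics of its characteristic frequency, can be sketched for the classic FAST (not the extended variant of this paper). The toy additive model and the frequencies 11 and 35 (chosen so their low harmonics do not collide) are illustrative assumptions:

```python
import math

def fast_first_order(model, freqs, harmonics=4, n=1025):
    """Classic FAST estimate of first-order sensitivity indices.
    freqs: one integer characteristic frequency per input; the search
    function maps s in (-pi, pi) to each input, uniform on (0, 1)."""
    s_vals = [math.pi * (2 * k + 1 - n) / n for k in range(n)]
    ys = []
    for s in s_vals:
        x = [0.5 + math.asin(math.sin(w * s)) / math.pi for w in freqs]
        ys.append(model(x))
    mean = sum(ys) / n
    total_var = sum((y - mean) ** 2 for y in ys) / n
    indices = []
    for w in freqs:
        part = 0.0
        for h in range(1, harmonics + 1):
            # Fourier amplitudes of the output at harmonic h of frequency w
            a = sum(y * math.cos(h * w * s) for y, s in zip(ys, s_vals)) / n
            b = sum(y * math.sin(h * w * s) for y, s in zip(ys, s_vals)) / n
            part += 2.0 * (a * a + b * b)   # variance carried by this harmonic
        indices.append(part / total_var)
    return indices

# toy additive model: the first input carries most of the output variance
model = lambda x: 5 * x[0] + x[1]
s1, s2 = fast_first_order(model, [11, 35])
```

The aliasing limitation the paper addresses shows up here directly: if harmonics of one integer frequency land on harmonics of another, their variance contributions mix.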

  17. Performances of non-parametric statistics in sensitivity analysis and parameter ranking

    International Nuclear Information System (INIS)

    Saltelli, A.

    1987-01-01

Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high-level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin Hypercube method is used for sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability on the basis of two test cases. The conclusions are that no estimator can be considered the best from all points of view, and the use of more than one estimator in sensitivity analysis is recommended.

  18. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
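The PRS indices defined above are partial derivatives of a failure probability with respect to a distribution parameter. A crude but instructive way to approximate such a derivative, far simpler than the envelope-function method of the paper, is a central finite difference on Monte Carlo estimates with common random numbers. The one-dimensional limit state below is a hypothetical toy, not the mechanisms studied in the paper:

```python
import random

def failure_prob(mu, n=40000, seed=0):
    """Monte Carlo failure probability P(g < 0) for the toy limit state
    g = 3 - X with X ~ N(mu, 1). A fixed seed gives common random
    numbers across calls, which keeps the finite difference low-variance."""
    rng = random.Random(seed)
    return sum(rng.gauss(mu, 1.0) > 3.0 for _ in range(n)) / n

def prs(mu, h=0.05):
    """Central finite-difference estimate of dPf/dmu, a parametric
    reliability sensitivity with respect to the mean of X."""
    return (failure_prob(mu + h) - failure_prob(mu - h)) / (2.0 * h)

sens = prs(0.0)   # analytic value is the standard normal pdf at 3, ~0.0044
```

Because both probability estimates reuse the same random draws, the noise largely cancels in the difference; without common random numbers the estimate at these small probabilities would be useless.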

  19. Development of the "Highly Sensitive Dog" questionnaire to evaluate the personality dimension "Sensory Processing Sensitivity" in dogs.

    Directory of Open Access Journals (Sweden)

    Maya Braem

In humans, the personality dimension 'sensory processing sensitivity (SPS)', also referred to as "high sensitivity", involves deeper processing of sensory information, which can be associated with physiological and behavioral overarousal. However, it has not been studied up to now whether this dimension also exists in other species. SPS can influence how people perceive the environment and how this affects them, thus a similar dimension in animals would be highly relevant with respect to animal welfare. We therefore explored whether SPS translates to dogs, one of the primary model species in personality research. A 32-item questionnaire to assess the "highly sensitive dog score" (HSD-s) was developed based on the "highly sensitive person" (HSP) questionnaire. A large-scale, international online survey was conducted, including the HSD questionnaire, as well as questions on fearfulness, neuroticism, "demographic" factors (e.g. dog sex, age, weight; age at adoption, etc.) and "human" factors (e.g. owner age, sex, profession, communication style, etc.), and the HSP questionnaire. Data were analyzed using linear mixed effect models with forward stepwise selection to test prediction of HSD-s by the above-mentioned factors, with country of residence and dog breed treated as random effects. A total of 3647 questionnaires were fully completed. HSD-, fearfulness, neuroticism and HSP-scores showed good internal consistencies, and HSD-s only moderately correlated with fearfulness and neuroticism scores, paralleling previous findings in humans. Intra- (N = 447) and inter-rater (N = 120) reliabilities were good. Demographic and human factors, including HSP score, explained only a small amount of the variance of HSD-s. A PCA analysis identified three subtraits of SPS, comparable to human findings. Overall, the measured personality dimension in dogs showed good internal consistency, partial independence from fearfulness and neuroticism, and good intra- and inter

  20. SENSITIVITY ANALYSIS OF BIOME-BGC MODEL FOR DRY TROPICAL FORESTS OF VINDHYAN HIGHLANDS, INDIA

    OpenAIRE

    M. Kumar; A. S. Raghubanshi

    2012-01-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to...

  1. Development of High Temperature/High Sensitivity Novel Chemical Resistive Sensor

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Chunrui [Univ. of Texas, San Antonio, TX (United States); Enriquez, Erik [Univ. of Texas, San Antonio, TX (United States); Wang, Haibing [Univ. of Texas, San Antonio, TX (United States); Xu, Xing [Univ. of Texas, San Antonio, TX (United States); Bao, Shangyong [Univ. of Texas, San Antonio, TX (United States); Collins, Gregory [Univ. of Texas, San Antonio, TX (United States)

    2013-08-13

The research has focused on the design, fabrication and development of high-temperature, high-sensitivity novel multifunctional chemical sensors for the selective detection of fossil energy gases used in power and fuel systems. By systematically studying the physical properties of LnBaCo2O5+d (LBCO) [Ln = Pr or La] thin films, a new-concept chemical sensor based on high-temperature chemical resistance change has been developed for application in the next generation of highly efficient and near-zero-emission power generation technologies. We also discovered superfast chemical dynamic behavior and ultrafast surface exchange kinetics in the highly epitaxial LBCO thin films. Furthermore, our research indicates that hydrogen can diffuse extremely fast in the ordered oxygen vacancy structures of the highly epitaxial LBCO thin films, which suggests that the LBCO thin film is not only an excellent candidate for the fabrication of high-temperature, ultra-sensitive chemical sensors and control systems for power and fuel monitoring systems, but also an excellent candidate for low-temperature solid oxide fuel cell anode and cathode materials.

  2. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations, which depend on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually, models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST), as well as the classical FAST method and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several
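Among the methods listed for EIKOS, the standardized regression coefficient (SRC) is the simplest linear-fit measure: the regression slope of the output on each input, rescaled by the ratio of their standard deviations. A minimal sketch, assuming independently sampled inputs and a hypothetical linear toy model (EIKOS itself is Matlab; this is an illustrative Python stand-in):

```python
import random
import statistics

random.seed(2)

def src(columns, y):
    """Standardized regression coefficients SRC_i = b_i * sd(x_i) / sd(y),
    using univariate slopes (valid when inputs are sampled independently)."""
    n = len(y)
    my = sum(y) / n
    sy = statistics.stdev(y)
    out = []
    for col in columns:
        mx = sum(col) / n
        slope = (sum((a - mx) * (c - my) for a, c in zip(col, y))
                 / sum((a - mx) ** 2 for a in col))
        out.append(slope * statistics.stdev(col) / sy)
    return out

# hypothetical linear toy model: y = 3*x1 + x2 + small noise
n = 1000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [3 * a + b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

s = src([x1, x2], y)
# for a linear model the squared SRCs sum to approximately R^2, close to 1
```

This is exactly the kind of estimator that, as the abstract notes, becomes inaccurate for non-linear models, which is why EIKOS also offers rank-based and variance-based alternatives.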

  3. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations, which depend on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually, models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST), as well as the classical FAST method and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked

  4. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  5. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs; (ii) generation of samples from uncertain analysis inputs; (iii) propagation of sampled inputs through an analysis; (iv) presentation of uncertainty analysis results; and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition
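
    Steps (ii) and (iii) of the workflow, sample generation and propagation, can be sketched in a few lines. Latin hypercube sampling is the stratified scheme most commonly paired with this kind of analysis; the bounds and the stand-in model below are illustrative:

```python
import random

def latin_hypercube(n, bounds, rng):
    """Latin hypercube sample: one value per equal-probability stratum
    for each input, with strata paired at random across inputs."""
    dims = len(bounds)
    cols = []
    for lo, hi in bounds:
        # one uniform draw inside each of the n strata, then shuffle
        col = [lo + (hi - lo) * (k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(cols[d][j] for d in range(dims)) for j in range(n)]

# Step (ii): generate samples of the uncertain analysis inputs.
rng = random.Random(42)
samples = latin_hypercube(100, [(0.0, 1.0), (10.0, 20.0)], rng)

# Step (iii): propagate each sampled input vector through the analysis
# (a hypothetical model stands in for the real analysis code).
results = [x1 * x2 for x1, x2 in samples]
```

The `results` list then feeds steps (iv) and (v): distributions of `results` summarize uncertainty, and scatterplots or (rank) correlations of `results` against each input column provide the sensitivity measures surveyed above.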

  6. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.

  7. Folded cladding porous shaped photonic crystal fiber with high sensitivity in optical sensing applications: Design and analysis

    Directory of Open Access Journals (Sweden)

    Bikash Kumar Paul

    2017-02-01

    Full Text Available A microstructured folded-cladding porous-shaped photonic crystal fiber (FP-PCF) with circular air holes is proposed and numerically investigated over the broad wavelength range from 1.4 µm to 1.64 µm (the E+S+C+L+U bands) for chemical sensing purposes. Various properties of the proposed FP-PCF are examined numerically using the finite element method (FEM) with an anisotropic perfectly matched layer (PML). By filling the core holes with the aqueous analyte ethanol (n = 1.354) and tuning the geometric parameters of the fiber, a relative sensitivity of 64.19% and a confinement loss of 2.07 × 10-5 dB/m are attained at a wavelength of 1.48 µm in the S band. These results make the fiber attractive for sensing, since it achieves higher sensitivity with lower confinement loss across the operating wavelength range. The confinement loss is found to depend strongly on the PML depth, whereas the sensitivity does not. In addition to these properties, the numerical aperture (NA), nonlinearity, and effective area are computed. The FP-PCF also performs as a sensor for other members of the alcohol series (methanol, propanol, butanol, pentanol). The optimized FP-PCF exhibits high sensitivity and low confinement loss, making it a promising candidate for chemical as well as gas sensing applications. Keywords: Confinement loss, Effective area, Index guiding FP-PCF, Numerical aperture, Nonlinear coefficient, Sensitivity

  8. Highly Sensitive and Very Stretchable Strain Sensor Based on a Rubbery Semiconductor.

    Science.gov (United States)

    Kim, Hae-Jin; Thukral, Anish; Yu, Cunjiang

    2018-02-07

    There is a growing interest in developing stretchable strain sensors to quantify the large mechanical deformations and strains associated with the activities of a wide range of species, such as humans, machines, and robots. Here, we report a novel stretchable strain sensor entirely in a rubber format, using a solution-processed rubbery semiconductor as the sensing material to achieve high sensitivity, large mechanical strain tolerance, and hysteresis-less, highly linear responses. Specifically, the rubbery semiconductor exploits π-π stacked poly(3-hexylthiophene-2,5-diyl) nanofibrils (P3HT-NFs) percolated in a silicone elastomer, poly(dimethylsiloxane), to yield a semiconducting nanocomposite with large mechanical stretchability, although P3HT is a well-known nonstretchable semiconductor. The fabricated strain sensors exhibit reliable and reversible sensing capability, a high gauge factor (gauge factor = 32), high linearity (R² > 0.996), and low hysteresis, and their use was demonstrated in wearable smart gloves. Systematic investigations of the materials design and synthesis, sensor fabrication and characterization, and mechanical analysis reveal the key fundamental and application aspects of these highly sensitive and very stretchable strain sensors made entirely from rubbers.

  9. Highly sensitive multianalyte immunochromatographic test strip for rapid chemiluminescent detection of ractopamine and salbutamol

    International Nuclear Information System (INIS)

    Gao, Hongfei; Han, Jing; Yang, Shijia; Wang, Zhenxing; Wang, Lin; Fu, Zhifeng

    2014-01-01

    Graphical abstract: A multianalyte immunochromatographic test strip was developed for the rapid detection of two β2-agonists. Due to the application of chemiluminescent detection, this quantitative method shows much higher sensitivity. - Highlights: • An immunochromatographic test strip was developed for detection of multiple β2-agonists. • The whole assay process can be completed within 20 min. • The proposed method shows much higher sensitivity due to the application of CL detection. • It is a portable analytical tool suitable for field analysis and rapid screening. - Abstract: A novel immunochromatographic assay (ICA) was proposed for rapid and multiplexed assay of β2-agonists, using ractopamine (RAC) and salbutamol (SAL) as model analytes. Owing to the introduction of a chemiluminescent (CL) approach, the proposed protocol shows much higher sensitivity. In this work, the described ICA was based on a competitive format, and horseradish peroxidase-tagged antibodies were used as highly sensitive CL probes. Quantitative analysis of β2-agonists was achieved by recording the CL signals of the probes captured on the two test zones of the nitrocellulose membrane. Under the optimum conditions, RAC and SAL could be detected within the linear ranges of 0.50–40 and 0.10–50 ng mL−1, with detection limits of 0.20 and 0.040 ng mL−1 (S/N = 3), respectively. The whole multianalyte immunoassay of RAC and SAL can be completed within 20 min. Furthermore, the test strip was validated with spiked swine urine samples, and the results showed that the method is reliable for measuring β2-agonists in swine urine. This CL-based multianalyte test strip offers high sensitivity, good selectivity, simple manipulation, high assay efficiency, and low cost. It thus opens up a new pathway for rapid screening and field analysis, and shows a promising prospect in food safety

  10. Highly sensitive multianalyte immunochromatographic test strip for rapid chemiluminescent detection of ractopamine and salbutamol

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Hongfei; Han, Jing; Yang, Shijia; Wang, Zhenxing; Wang, Lin; Fu, Zhifeng, E-mail: fuzf@swu.edu.cn

    2014-08-11

    Graphical abstract: A multianalyte immunochromatographic test strip was developed for the rapid detection of two β2-agonists. Due to the application of chemiluminescent detection, this quantitative method shows much higher sensitivity. - Highlights: • An immunochromatographic test strip was developed for detection of multiple β2-agonists. • The whole assay process can be completed within 20 min. • The proposed method shows much higher sensitivity due to the application of CL detection. • It is a portable analytical tool suitable for field analysis and rapid screening. - Abstract: A novel immunochromatographic assay (ICA) was proposed for rapid and multiplexed assay of β2-agonists, using ractopamine (RAC) and salbutamol (SAL) as model analytes. Owing to the introduction of a chemiluminescent (CL) approach, the proposed protocol shows much higher sensitivity. In this work, the described ICA was based on a competitive format, and horseradish peroxidase-tagged antibodies were used as highly sensitive CL probes. Quantitative analysis of β2-agonists was achieved by recording the CL signals of the probes captured on the two test zones of the nitrocellulose membrane. Under the optimum conditions, RAC and SAL could be detected within the linear ranges of 0.50–40 and 0.10–50 ng mL−1, with detection limits of 0.20 and 0.040 ng mL−1 (S/N = 3), respectively. The whole multianalyte immunoassay of RAC and SAL can be completed within 20 min. Furthermore, the test strip was validated with spiked swine urine samples, and the results showed that the method is reliable for measuring β2-agonists in swine urine. This CL-based multianalyte test strip offers high sensitivity, good selectivity, simple manipulation, high assay efficiency, and low cost. It thus opens up a new pathway for rapid screening and field analysis, and shows a promising prospect in food safety.

  11. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random inputs following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to the identical dimensional structures of PDD and the analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
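
    For contrast with the expansion-based route described above, the sampling-based baseline it is compared against can be sketched as a pick-freeze Monte Carlo estimator of first-order Sobol' indices. The additive test model and sample size below are illustrative, chosen so the exact indices 16/21, 4/21, and 1/21 are known:

```python
import random

def sobol_first_order(model, dim, n, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model with independent U(0,1) inputs (A/B sample design)."""
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    f0 = sum(yA) / n
    var = sum((y - f0) ** 2 for y in yA) / n
    S = []
    for i in range(dim):
        # A_B^i: column i taken from B, all other columns from A
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Saltelli-type estimator of V_i = Var(E[Y | X_i])
        Vi = sum(yb * (yi - ya) for yb, yi, ya in zip(yB, yABi, yA)) / n
        S.append(Vi / var)
    return S

# Hypothetical additive test model: exact indices are 16/21, 4/21, 1/21
def model(x):
    return 4.0 * x[0] + 2.0 * x[1] + x[2]

S = sobol_first_order(model, dim=3, n=20000, rng=random.Random(7))
```

The estimator needs n·(dim + 2) model evaluations and converges like 1/√n, which is the slow rate the PDD approach is reported to beat on smooth systems.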

  12. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
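
    The first, qualitative step can be illustrated with a stripped-down Morris screen. This sketch uses a simplified radial one-at-a-time design rather than the classic trajectory design, and a hypothetical model in which the first parameter dominates:

```python
import random

def morris_mu_star(model, dim, r, delta, rng):
    """Simplified Morris screening: mu* is the mean absolute elementary
    effect of each input over r random base points in [0, 1]^dim.
    (The classic method samples trajectories on a grid; this radial
    one-at-a-time variant keeps the sketch short.)"""
    mu_star = [0.0] * dim
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y0 = model(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            ee = (model(xp) - y0) / delta  # elementary effect of input i
            mu_star[i] += abs(ee) / r
    return mu_star

# Hypothetical model: x1 dominates, x2 and x3 interact weakly
def model(x):
    return 10.0 * x[0] + x[1] * x[2]

ms = morris_mu_star(model, dim=3, r=50, delta=0.1, rng=random.Random(3))
```

A large mu* flags a parameter as influential; only the flagged parameters would then be carried into the more expensive variance-based (RSMSobol) step, mirroring the two-step framework above.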

  13. Application of adjoint sensitivity analysis to nuclear reactor fuel rod performance

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Was, G.S.

    1984-01-01

    Adjoint sensitivity analysis in nuclear fuel behavior modeling is extended to operate on the entire power history for both Zircaloy and stainless steel cladding via the computer codes FCODE-ALPHA/SS and SCODE/SS. The sensitivities of key variables to input parameters are found to be highly non-intuitive and strongly dependent on the fuel-clad gap status and the history of the fuel during the cycle. The sensitivities of five key variables, clad circumferential stress and strain, fission gas release, fuel centerline temperature, and fuel-clad gap, to eleven input parameters are studied. The most important input parameters (yielding significances between 1 and 100) are the fabricated clad inner and outer radii and the fuel radius. The least important (with significances less than 0.01) are the time since reactor start-up and the fuel-burnup densification rate. Intermediate to these are fabricated fuel porosity, linear heat generation rate, the power history scale factor, clad outer temperature, fill gas pressure, and coolant pressure. Stainless steel and Zircaloy have similar sensitivities at start-up, but these diverge as burnup proceeds due to the higher creep rate of Zircaloy, which causes the system to be more responsive to changes in input parameters. The value of adjoint sensitivity analysis lies in its capability of uncovering dependencies of fuel variables on input parameters that cannot be determined by a sequential thought process. (orig.)
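
    The dimensionless "significances" reported above can be read as relative sensitivity coefficients. A minimal sketch of one common definition, S = (p/y)·∂y/∂p estimated by central finite differences, is shown below on a hypothetical thin-shell clad stress response; the adjoint machinery of FCODE-ALPHA/SS is not reproduced, and the parameter values are illustrative:

```python
def relative_sensitivity(f, params, key, h=1e-6):
    """Relative (dimensionless) sensitivity (p / y) * dy/dp of the scalar
    response f(params) to parameter `key`, by central finite difference."""
    p0 = params[key]
    up = dict(params, **{key: p0 * (1.0 + h)})
    dn = dict(params, **{key: p0 * (1.0 - h)})
    dydp = (f(up) - f(dn)) / (2.0 * p0 * h)
    return p0 * dydp / f(params)

# Hypothetical thin-shell stress response (for illustration only):
# hoop stress ~ pressure * inner radius / wall thickness
def stress(p):
    return p["pressure"] * p["r_inner"] / (p["r_outer"] - p["r_inner"])

params = {"pressure": 5.0, "r_inner": 4.1, "r_outer": 4.75}
s_pressure = relative_sensitivity(stress, params, "pressure")
s_r_inner = relative_sensitivity(stress, params, "r_inner")
```

Even in this toy response the radius comes out far more influential than the pressure (the stress is linear in pressure, so its relative sensitivity is exactly 1, while the thin wall amplifies the radius term), echoing the record's finding that the fabricated radii dominate.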

  14. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  15. Some aspects of ICP-AES analysis of high purity rare earths

    International Nuclear Information System (INIS)

    Murty, P.S.; Biswas, S.S.

    1991-01-01

    Inductively coupled plasma atomic emission spectrometry (ICP-AES) is a technique capable of high sensitivity in trace elemental analysis. While the technique possesses high sensitivity, it lacks high selectivity. Selectivity is important where substances emitting complex spectra are to be analysed for trace elements. Rare earths emit highly complex spectra in a plasma source, and the determination of adjacent rare earths in a high-purity rare earth matrix with high sensitivity is not possible due to the inadequate selectivity of ICP-AES. One approach that has yielded reasonably good spectral selectivity in high-purity rare earth analysis by ICP-AES employs a combination of wavelength modulation techniques and a high-resolution echelle grating. However, it was found that, by using a high-resolution monochromator, sensitivities comparable to or better than those reported with the wavelength modulation technique could be obtained. (author). 2 refs., 2 figs., 2 tabs

  16. Design of a Piezoelectric Accelerometer with High Sensitivity and Low Transverse Effect

    Directory of Open Access Journals (Sweden)

    Bian Tian

    2016-09-01

    Full Text Available In order to meet the requirements of cable fault detection, a new structure of piezoelectric accelerometer was designed and analyzed in detail. The structure is composed of a seismic mass, two sensitive beams, and two added beams. Simulations of the maximum stress, natural frequency, and output voltage were carried out, and comparisons with traditional piezoelectric accelerometer structures were made. To determine which vibration mode dominates under acceleration and to verify the spacing between the mass and the glass, mode analysis and deflection analysis were carried out. Fabricated on an n-type single-crystal silicon wafer, the sensor chips were wire-bonded to printed circuit boards (PCBs) and simply packaged for experiments. Finally, a vibration test was conducted. The results show that the proposed piezoelectric accelerometer has high sensitivity, low resonance frequency, and low transverse effect.

  17. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it to the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel

  18. Local sensitivity analysis for inverse problems solved by singular value decomposition

    Science.gov (United States)

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and/or sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward to use, since sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods, like Markov-chain Monte Carlo, given common nonlinear processes and the often even more nonlinear models.
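
    A composite scaled sensitivity (CSS) of the kind discussed here can be sketched as follows. The model, parameter values, and observation times are hypothetical, and the Jacobian is formed by central finite differences rather than by the regression machinery of the original analysis:

```python
import math

def composite_scaled_sensitivity(model, params, h=1e-6):
    """CSS_j = sqrt((1/n) * sum_i ((dy_i/dp_j) * p_j)^2), where y is the
    vector of n simulated observations; the Jacobian is approximated by
    central finite differences (assumed, simplified definition)."""
    n = len(model(params))
    css = {}
    for key, p0 in params.items():
        up = dict(params, **{key: p0 * (1.0 + h)})
        dn = dict(params, **{key: p0 * (1.0 - h)})
        yu, yd = model(up), model(dn)
        total = sum(((u - d) / (2.0 * p0 * h) * p0) ** 2
                    for u, d in zip(yu, yd))
        css[key] = math.sqrt(total / n)
    return css

# Hypothetical process model: simulated observations at four times
def model(p):
    return [p["a"] * t + p["b"] * t ** 2 for t in (1.0, 2.0, 3.0, 4.0)]

css = composite_scaled_sensitivity(model, {"a": 2.0, "b": 0.5})
```

CSS ranks parameters by how strongly the whole observation set responds to a proportional change in each parameter; as the record notes, it must be paired with a correlation measure (PCC) because a high CSS alone does not guarantee the parameter is identifiable.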

  19. Evaluation of treatment effects for high-performance dye-sensitized solar cells using equivalent circuit analysis

    International Nuclear Information System (INIS)

    Murayama, Masaki; Mori, Tatsuo

    2006-01-01

    Equivalent circuit analysis using a one-diode model was carried out as a simpler, more convenient method to evaluate the electric mechanism and to assess treatments of a dye-sensitized solar cell (DSC). Cells treated using acetic acid or 4-tert-butylpyridine were measured under irradiation (0.1 W/m², AM 1.5) to obtain current-voltage (I-V) curves. Cell performance and equivalent circuit parameters were calculated from the I-V curves. Evaluation based on residual factors was useful for better fitting of the equivalent circuit to the I-V curve. The diode factor value was often over two for high-performance DSCs. Acetic acid treatment was effective in increasing the short-circuit current by decreasing the series resistance of the cells. In contrast, 4-tert-butylpyridine was effective in increasing the open-circuit voltage by increasing the cell shunt resistance. Previous explanations held that acetic acid worked to decrease the internal resistance of the TiO₂ layer and that butylpyridine worked to lower the back-electron transfer from the TiO₂ to the electrolyte.

  20. Risk Characterization uncertainties associated description, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation addresses risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations

  1. Sensitivity and Interaction Analysis Based on Sobol’ Method and Its Application in a Distributed Flood Forecasting Model

    Directory of Open Access Journals (Sweden)

    Hui Wan

    2015-06-01

    Full Text Available Sensitivity analysis is a fundamental approach to identify the most significant and sensitive parameters, helping us to understand complex hydrological models, particularly time-consuming distributed flood forecasting models based on complicated theory with numerous parameters. Based on the Sobol' method, this study compared the sensitivity and interactions of distributed flood forecasting model parameters with and without accounting for correlation. Four objective functions, (1) Nash–Sutcliffe efficiency (ENS), (2) water balance coefficient (WB), (3) peak discharge efficiency (EP), and (4) time-to-peak efficiency (ETP), were applied to the Liuxihe model with hourly rainfall-runoff data collected in the Nanhua Creek catchment, Pearl River, China. Contrastive results for the sensitivity and interaction analysis were also illustrated among small, medium, and large flood magnitudes. Results demonstrated that the choice of objective function had no effect on the sensitivity classification, while it had great influence on the sensitivity ranking for both uncorrelated and correlated cases. The Liuxihe model behaved and responded uniquely to the various flood conditions. The results also indicated that pairwise parameter interactions made a non-negligible contribution to the model output variance. Parameters with high first-order or total sensitivity indices presented correspondingly high second-order sensitivity indices and correlation coefficients with other parameters. Without considering parameter correlations, the variance contributions of highly sensitive parameters might be underestimated and those of normally sensitive parameters might be overestimated. This research laid a basic foundation for improving the understanding of complex model behavior.
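
    Two of the four objective functions can be written down directly. The Nash–Sutcliffe efficiency has a standard form; the water balance coefficient is assumed here to be the simulated-to-observed volume ratio, which is one common definition and may differ from the paper's:

```python
def nash_sutcliffe(obs, sim):
    """E_NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2);
    1 is a perfect fit, 0 is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def water_balance(obs, sim):
    """Ratio of simulated to observed runoff volume (assumed definition;
    1.0 indicates a perfect overall volume balance)."""
    return sum(sim) / sum(obs)
```

In a Sobol' study like the one above, each such objective becomes the scalar model output whose variance is decomposed, which is why the sensitivity ranking can change with the choice of objective while the sensitive/insensitive classification stays stable.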

  2. Overview of methods for uncertainty analysis and sensitivity analysis in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.; Helton, J.C.

    1985-01-01

    Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, as yet, uncertainty analysis and sensitivity analysis in the context of PRA is a relatively immature field. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. There are a number of areas that must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas and the synthesis of them into a complete PRA are discussed

  3. New strategies of sensitivity analysis capabilities in continuous-energy Monte Carlo code RMC

    International Nuclear Information System (INIS)

    Qiu, Yishu; Liang, Jingang; Wang, Kan; Yu, Jiankai

    2015-01-01

    three strategies employed by RMC maintain high parallel efficiency of approximately 96–98% within the observed 600 processors, and the memory requirements per processor decrease almost linearly as the number of processors increases from 120 to 600. To conclude, RMC is capable of performing sensitivity analysis with sufficient accuracy and high efficiency

  4. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili, E-mail: wangnsrl@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029 (China); Zhang, Kai [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Zhu, Peiping; Wu, Ziyu, E-mail: wuzy@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China and Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.
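
    The SNR comparison above rests on first-order error propagation: for independent noisy inputs, Var(f) ≈ Σᵢ (∂f/∂xᵢ)² Var(xᵢ). The paper's PS/RP derivation is not reproduced here, but the formula itself can be sanity-checked numerically on a hypothetical ratio signal f(a, b) = a/b (all values illustrative):

```python
import math
import random

def propagated_sigma(grads, sigmas):
    """First-order error propagation for independent inputs:
    sigma_f = sqrt(sum((df/dx_i * sigma_i)^2))."""
    return math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

# Hypothetical signal f(a, b) = a / b at a = 10, b = 2 with small noise
a0, b0 = 10.0, 2.0
sa, sb = 0.05, 0.02
# analytic partials: df/da = 1/b, df/db = -a/b^2
sigma_f = propagated_sigma([1.0 / b0, -a0 / b0 ** 2], [sa, sb])

# Monte Carlo cross-check of the linearized formula
rng = random.Random(0)
vals = [(a0 + rng.gauss(0.0, sa)) / (b0 + rng.gauss(0.0, sb))
        for _ in range(20000)]
mean = sum(vals) / len(vals)
mc_sigma = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
```

For small relative noise the linearized sigma and the sampled sigma agree closely; the same propagation step, applied to the PS and RP retrieval formulas, is what yields the SNR expressions compared in the paper.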

  5. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili; Zhang, Kai; Zhu, Peiping; Wu, Ziyu

    2015-01-01

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, its complex scanning mode and high radiation dose hinder further practical implementation. In contrast, the reverse projection (RP) method is a novel, fast, and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors theoretically analyze the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide quantitative insight. Results: The SNRs of the two methods both depend on the system parameters, but in different ways. Comparison of their sensitivities reveals that, for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edges of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.

  6. High-Sensitivity Temperature-Independent Silicon Photonic Microfluidic Biosensors

    Science.gov (United States)

    Kim, Kangbaek

    Optical biosensors that can precisely quantify the presence of specific molecular species in real time without the need for labeling have seen increased use in the drug discovery industry and in molecular biology in general. Of the many possible optical biosensors, the TM-mode Si biosensor is shown to be very attractive for sensing applications because of the large field amplitude on the surface and cost-effective CMOS VLSI fabrication. Noise is the most fundamental factor limiting sensor performance in the development of high-sensitivity biosensors, and noise reduction techniques require precise study and analysis. One such example stems from thermal fluctuations. Generally, SOI biosensors are vulnerable to ambient temperature fluctuations because of the large thermo-optic coefficient of silicon (~2×10^-4 RIU/K), typically requiring another reference ring and readout sequence to compensate for temperature-induced noise. To address this problem, we designed sensors with a novel TM-mode shallow-ridge waveguide that provides a large surface amplitude for both bulk and surface sensing. With proper design, this also provides large optical confinement in the aqueous cladding, which renders the device athermal by exploiting the negative thermo-optic coefficient of water (~ -1×10^-4 RIU/K), demonstrating cancellation of thermo-optic effects for aqueous-solution operation near 300 K. Additional limitations resulting from mechanical actuator fluctuations, the stability of tunable lasers, and the large 1/f noise of lasers and sensor electronics can also limit biosensor performance. Here we also present a simple harmonic feedback readout technique that obviates the need for spectrometers and tunable lasers. This feedback technique reduces the impact of 1/f noise to enable high sensitivity, and a DSP lock-in with a 256 kHz sampling rate can provide monitoring down to microsecond time scales for fast transitions in biomolecular concentration, with potential for small volume and low cost.
In this dissertation, a novel
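    The athermal condition described in this record amounts to balancing the opposite-signed thermo-optic coefficients of silicon and water with the fractions of the optical mode confined in each material. The sketch below is a simplified two-material confinement model using the coefficients quoted above; the function names and the 30% silicon-confinement example are illustrative assumptions, not the dissertation's actual design values.

```python
# Athermal waveguide condition: the effective-index temperature drift
# d(n_eff)/dT ~ G_si * (dn/dT)_si + G_w * (dn/dT)_w vanishes when the
# confinement fractions G_si (silicon) and G_w (water cladding) balance
# the opposite-signed thermo-optic coefficients.

DN_SI = 2e-4    # silicon thermo-optic coefficient, RIU/K (from the abstract)
DN_W = -1e-4    # water thermo-optic coefficient, RIU/K (from the abstract)

def net_thermo_optic(gamma_water, gamma_silicon):
    """Net d(n_eff)/dT for a two-material confinement model."""
    return gamma_silicon * DN_SI + gamma_water * DN_W

def athermal_water_fraction(gamma_silicon):
    """Water confinement fraction that cancels the silicon contribution."""
    return -gamma_silicon * DN_SI / DN_W

# Example (hypothetical numbers): with 30% of the mode in silicon, the
# aqueous cladding must carry twice that fraction, i.e. 60% of the mode.
g_si = 0.30
g_w = athermal_water_fraction(g_si)
```

Because the magnitude of water's coefficient is roughly half that of silicon, the design must push a disproportionately large share of the mode into the cladding, which is exactly why the shallow-ridge TM geometry is attractive here.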

  7. The development of a high performance liquid chromatograph with a sensitive on-stream radioactivity monitor for the analysis of 3H- and 14C-labelled gibberellins

    International Nuclear Information System (INIS)

    Reeve, D.R.; Yokota, T.; Nash, L.; Crozier, A.

    1976-01-01

    The development of a high performance liquid chromatograph for the separation of gibberellins is described. The system combines high efficiency, peak capacity, and sample capacity with rapid speed of analysis. In addition, the construction details of a sensitive on-stream radioactivity monitor are outlined. The overall versatility of the chromatograph has been demonstrated by the separation of a range of 3H- and 14C-labelled gibberellins and gibberellin precursors. The system also has considerable potential for the analysis of abscisic acid and acidic and neutral indoles. (author)

  8. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  9. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    Full Text Available As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering because of its accurate estimation of forces. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite element analysis and stochastic design sensitivity analysis are almost identical.

  10. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas, and its major source is the combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for the chemical absorption of carbon dioxide captured from flue gas using monoethanolamine solvent. Equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state performance of the process with respect to the L/G ratio to the absorber, the CO2 lean solvent loading, and the stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  11. Robust and sensitive analysis of mouse knockout phenotypes.

    Directory of Open Access Journals (Sweden)

    Natasha A Karp

    Full Text Available A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high-throughput phenotyping programs, where operational issues additionally lead to controls not being measured on the same day as knockouts. We highlight how the application of traditional methods, such as Student's t-test or a two-way ANOVA, gives flawed results in these situations and should be avoided. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on dual-energy X-ray absorptiometry data for the analysis of mouse knockout data, and compare them to a reference-range approach. We show that mixed-model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed-model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; accounting for it will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models in in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches, which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
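    The batch-confounding pitfall described above is easy to demonstrate numerically. The sketch below is not the paper's mixed-model workflow (which fits batch as a proper random effect); it simulates an unbalanced design in which knockouts are measured mostly in later batches subject to a systematic drift, then contrasts the naive group-mean difference with the within-batch comparison that motivates modelling batch explicitly. All parameter values are invented for illustration.

```python
import random

random.seed(42)

TRUE_EFFECT = 1.0                    # genotype effect we want to recover
BATCH_EFFECT = [0, 1, 2, 3, 4, 5]    # systematic day-to-day drift
N_CTRL = [4, 4, 4, 2, 2, 2]          # controls measured mostly in early batches
N_KO = [2, 2, 2, 4, 4, 4]            # knockouts measured mostly in late batches

ctrl, ko = [], []                    # (batch, value) records
for b, drift in enumerate(BATCH_EFFECT):
    for _ in range(N_CTRL[b]):
        ctrl.append((b, drift + random.gauss(0, 0.1)))
    for _ in range(N_KO[b]):
        ko.append((b, TRUE_EFFECT + drift + random.gauss(0, 0.1)))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: difference of overall group means (confounded by batch).
naive = mean([v for _, v in ko]) - mean([v for _, v in ctrl])

# Batch-aware estimate: average the within-batch group differences.
within = mean([
    mean([v for b, v in ko if b == j]) - mean([v for b, v in ctrl if b == j])
    for j in range(len(BATCH_EFFECT))
])
```

With this design the naive difference absorbs about one full unit of batch drift on top of the true effect, while the within-batch estimate recovers the genotype effect.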

  12. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variability of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be applied recursively to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details of each step are given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
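    Step (2), parameter screening, is commonly done with Morris-style elementary effects; the report's own screening technique may differ, so the following is only a generic sketch. It perturbs one input at a time from random base points and ranks parameters by their mean absolute elementary effect. The toy model and all settings are illustrative assumptions.

```python
import random

def model(x):
    # Toy simulator: x[0] and x[1] are influential, x[2] is inert.
    return 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

def morris_screening(f, dim, n_traj=50, delta=0.1, seed=0):
    """Mean absolute elementary effect per input, sampled on [0, 1]^dim."""
    rng = random.Random(seed)
    totals = [0.0] * dim
    for _ in range(n_traj):
        # Base point chosen so x_i + delta stays inside the unit cube.
        x = [rng.uniform(0, 1 - delta) for _ in range(dim)]
        base = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            totals[i] += abs(f(xp) - base) / delta
    return [t / n_traj for t in totals]

mu_star = morris_screening(model, dim=3)
# mu_star ranks the inputs: the linear term gives exactly 5, the quadratic
# term averages near 2, and the inert input scores ~0 and can be pruned.
```

Inputs whose score is negligible (here `x[2]`) are pruned before the more expensive quantitative analysis of step (3).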

  13. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    Science.gov (United States)

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  14. Symmetry-Breaking as a Paradigm to Design Highly-Sensitive Sensor Systems

    Directory of Open Access Journals (Sweden)

    Antonio Palacios

    2015-06-01

    Full Text Available A large class of dynamic sensors has nonlinear input-output characteristics, often corresponding to a bistable potential energy function that controls the evolution of the sensor dynamics. These sensors include magnetic field sensors, e.g., the simple fluxgate magnetometer and the superconducting quantum interference device (SQUID), ferroelectric sensors, and mechanical sensors, e.g., acoustic transducers made with piezoelectric materials. Recently, the possibilities offered by new technologies and materials for realizing miniaturized devices with improved performance have led to renewed interest in a new generation of inexpensive, compact, and low-power fluxgate magnetometers and electric-field sensors. In this article, we review the analysis of an alternative approach: a symmetry-based design for highly sensitive sensor systems. The design incorporates a network architecture that produces collective oscillations induced by the coupling topology, i.e., by which sensors are coupled to each other. Under certain symmetry groups, the oscillations in the network emerge via an infinite-period bifurcation, so that at birth they exhibit a very large period of oscillation. This characteristic renders the oscillatory wave highly sensitive to symmetry-breaking effects, thus leading to a new detection mechanism. Model equations and bifurcation analysis are discussed in great detail. Results from experimental work on networks of fluxgate magnetometers are also included.

  15. High-Sensitivity Troponin: A Clinical Blood Biomarker for Staging Cardiomyopathy in Fabry Disease

    OpenAIRE

    2016-01-01

    Background High-sensitivity troponin (hs-TNT), a biomarker of myocardial damage, might be useful for assessing fibrosis in Fabry cardiomyopathy. We performed a prospective analysis of hs-TNT as a biomarker for myocardial changes in Fabry patients and a retrospective longitudinal follow-up study to assess longitudinal hs-TNT changes relative to fibrosis and cardiomyopathy progression. Methods and Results For the prospective analysis, hs-TNT from 75 consecutive patients with genetically confirm...

  16. Development of a method for comprehensive and quantitative analysis of plant hormones by highly sensitive nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry

    International Nuclear Information System (INIS)

    Izumi, Yoshihiro; Okazawa, Atsushi; Bamba, Takeshi; Kobayashi, Akio; Fukusaki, Eiichiro

    2009-01-01

    In recent plant hormone research, there is an increased demand for a highly sensitive and comprehensive analytical approach to elucidate hormonal signaling networks, functions, and dynamics. We have demonstrated the high sensitivity of a comprehensive and quantitative analytical method developed with nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry (LC-ESI-IT-MS/MS) under multiple-reaction monitoring (MRM) in plant hormone profiling. Unlabeled and deuterium-labeled isotopomers of four classes of plant hormones and their derivatives, auxins, cytokinins (CK), abscisic acid (ABA), and gibberellins (GA), were analyzed by this method. The optimized nanoflow-LC-ESI-IT-MS/MS method showed ca. 5-10-fold greater sensitivity than capillary-LC-ESI-IT-MS/MS, and the detection limits (S/N = 3) of several plant hormones were in the sub-fmol range. The results showed excellent linearity (R2 values of 0.9937-1.0000) and reproducibility of elution times (relative standard deviations, RSDs, <1.1%) and peak areas (RSDs, <10.7%) for all target compounds. Further, sample purification using Oasis HLB and Oasis MCX cartridges significantly decreased the ion-suppressing effects of the biological matrix compared with purification using only the Oasis HLB cartridge. The optimized nanoflow-LC-ESI-IT-MS/MS method was successfully used to analyze endogenous plant hormones in Arabidopsis and tobacco samples. The samples used in this analysis were extracted from only 17 tobacco dry seeds (1 mg DW), indicating that the efficiency of analysis of endogenous plant hormones strongly depends on the detection sensitivity of the method. Our analytical approach will be useful for in-depth studies on complex plant hormonal metabolism.
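    The sub-fmol detection limits quoted here follow the S/N = 3 convention. As a generic illustration (not the authors' exact computation), the sketch below fits a calibration line by least squares and converts an assumed baseline noise level into a concentration detection limit as 3·sigma/slope; all numbers are invented.

```python
# Detection limit from a linear calibration: LOD = 3 * sigma_noise / slope.
conc = [0.0, 1.0, 2.0, 4.0, 8.0]           # standard concentrations (fmol)
signal = [0.1, 10.2, 19.9, 40.3, 79.8]     # detector response (arbitrary units)

n = len(conc)
mx = sum(conc) / n
my = sum(signal) / n
# Ordinary least-squares slope and intercept of signal vs. concentration.
slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / \
        sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

sigma_noise = 0.2                          # assumed blank noise (same units)
lod = 3 * sigma_noise / slope              # concentration giving S/N = 3
```

With a slope near 10 response units per fmol and 0.2 units of baseline noise, the detection limit lands around 0.06 fmol, i.e. in the sub-fmol range the abstract reports.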

  17. Development of a method for comprehensive and quantitative analysis of plant hormones by highly sensitive nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Izumi, Yoshihiro; Okazawa, Atsushi; Bamba, Takeshi; Kobayashi, Akio [Department of Biotechnology, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871 (Japan); Fukusaki, Eiichiro, E-mail: fukusaki@bio.eng.osaka-u.ac.jp [Department of Biotechnology, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871 (Japan)

    2009-08-26

    In recent plant hormone research, there is an increased demand for a highly sensitive and comprehensive analytical approach to elucidate hormonal signaling networks, functions, and dynamics. We have demonstrated the high sensitivity of a comprehensive and quantitative analytical method developed with nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry (LC-ESI-IT-MS/MS) under multiple-reaction monitoring (MRM) in plant hormone profiling. Unlabeled and deuterium-labeled isotopomers of four classes of plant hormones and their derivatives, auxins, cytokinins (CK), abscisic acid (ABA), and gibberellins (GA), were analyzed by this method. The optimized nanoflow-LC-ESI-IT-MS/MS method showed ca. 5-10-fold greater sensitivity than capillary-LC-ESI-IT-MS/MS, and the detection limits (S/N = 3) of several plant hormones were in the sub-fmol range. The results showed excellent linearity (R2 values of 0.9937-1.0000) and reproducibility of elution times (relative standard deviations, RSDs, <1.1%) and peak areas (RSDs, <10.7%) for all target compounds. Further, sample purification using Oasis HLB and Oasis MCX cartridges significantly decreased the ion-suppressing effects of the biological matrix compared with purification using only the Oasis HLB cartridge. The optimized nanoflow-LC-ESI-IT-MS/MS method was successfully used to analyze endogenous plant hormones in Arabidopsis and tobacco samples. The samples used in this analysis were extracted from only 17 tobacco dry seeds (1 mg DW), indicating that the efficiency of analysis of endogenous plant hormones strongly depends on the detection sensitivity of the method. Our analytical approach will be useful for in-depth studies on complex plant hormonal metabolism.

  18. Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

    International Nuclear Information System (INIS)

    Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

    2012-01-01

    Sensitivity analysis comprises techniques to quantify the effects of input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
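    The core comparison in this work, estimated Sobol' indices versus exact values, can be sketched on a model where the variance decomposition is known in closed form. The example below uses a pick-freeze (Saltelli-style) estimator on a simple additive toy function, not the paper's Riemann problem; the function and sample sizes are illustrative assumptions.

```python
import random

def f(x1, x2):
    return 3.0 * x1 + 1.0 * x2   # additive toy model on U(0,1)^2

# Exact first-order index of x1: Var contributions are 9/12 and 1/12,
# so S1 = 9 / (9 + 1) = 0.9.
S1_EXACT = 0.9

def sobol_s1(n=100_000, seed=1):
    """Pick-freeze estimator of S1: Cov(f(A), f(A_B)) / Var(f)."""
    rng = random.Random(seed)
    ya, yab = [], []
    for _ in range(n):
        a1, a2 = rng.random(), rng.random()
        b2 = rng.random()
        ya.append(f(a1, a2))
        yab.append(f(a1, b2))     # x1 frozen, x2 resampled
    m = sum(ya) / n
    var = sum((y - m) ** 2 for y in ya) / n
    cov = sum((y - m) * (z - m) for y, z in zip(ya, yab)) / n
    return cov / var

s1 = sobol_s1()
```

For a smooth response like this one the estimate converges quickly; the point of the paper is that discontinuous response surfaces degrade such convergence, especially for meta-model-based estimators.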

  19. Highly sensitive electrochemical determination of 1-naphthol based on high-index facet SnO2 modified electrode

    International Nuclear Information System (INIS)

    Huang Xiaofeng; Zhao Guohua; Liu Meichuan; Li Fengting; Qiao Junlian; Zhao Sichen

    2012-01-01

    Highlights: ► It is the first time high-index faceted SnO2 has been employed in electrochemical analysis. ► High-index faceted SnO2 has excellent electrochemical activity toward 1-naphthol. ► Highly sensitive determination of 1-naphthol is realized on high-index faceted SnO2. ► The detection limit of 1-naphthol is as low as 5 nM on high-index faceted SnO2. ► The electro-oxidation kinetics of 1-naphthol on the novel electrode are discussed. - Abstract: SnO2 nanooctahedra with {2 2 1} high-index facets (HIF) were synthesized by a simple hydrothermal method and employed for the first time in the sensitive electrochemical sensing of a typical organic pollutant, 1-naphthol (1-NAP). The constructed HIF SnO2-modified glassy carbon electrode (HIF SnO2/GCE) possessed the advantages of a large effective electrode area, a high electron transfer rate, and a low charge transfer resistance. These improved electrochemical properties provided high electrocatalytic performance, numerous effective active sites, and a high adsorption capacity for 1-NAP on the HIF SnO2/GCE. Cyclic voltammetry (CV) results showed that the electrochemical oxidation of 1-NAP obeyed a two-electron transfer process and that the electrode reaction was under diffusion control on the HIF SnO2/GCE. Using differential pulse voltammetry (DPV), electrochemical detection of 1-NAP was conducted on the HIF SnO2/GCE with a limit of detection as low as 5 nM, which is low compared with values reported in the literature. The electrode also showed good stability in comparison with reported values. Satisfactory results were obtained, with average recoveries in the range of 99.7-103.6% in real water sample detection. A promising device for the electrochemical detection of 1-NAP with high sensitivity has therefore been provided.

  20. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
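    The birth-death model mentioned here also makes a compact testbed for the local derivative-based sensitivity analysis that the authors contrast against. The sketch below is a generic illustration, not the paper's method: it runs a Gillespie simulation of a birth-death process and estimates the sensitivity of the mean population to the birth rate by finite differences, reusing the same seed for both batches (a simple common-random-numbers device) so that Monte Carlo noise partially cancels. All rates and sample sizes are illustrative.

```python
import math
import random

def gillespie_count(k, gamma, T, rng):
    """Birth-death SSA: birth at rate k, death at rate gamma * x; X(0) = 0."""
    t, x = 0.0, 0
    while True:
        rate = k + gamma * x
        t += rng.expovariate(rate)
        if t > T:
            return x
        if rng.random() < k / rate:
            x += 1
        else:
            x -= 1

def mean_population(k, gamma, T, n, seed):
    rng = random.Random(seed)
    return sum(gillespie_count(k, gamma, T, rng) for _ in range(n)) / n

K, GAMMA, T, N, DK = 10.0, 1.0, 5.0, 2000, 1.0
# Same seed for both batches: common random numbers reduce the variance
# of the finite-difference estimate.
sens = (mean_population(K + DK, GAMMA, T, N, seed=7)
        - mean_population(K, GAMMA, T, N, seed=7)) / DK

# Analytic check: E[X(T)] = (k / gamma) * (1 - exp(-gamma * T)),
# so d E[X(T)] / dk = (1 - exp(-gamma * T)) / gamma.
exact = (1 - math.exp(-GAMMA * T)) / GAMMA
```

For this linear network the derivative is available in closed form, which is what makes the model useful for validating both local and global estimators.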

  1. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  2. Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.

    Science.gov (United States)

    Gnambs, Timo; Kaspar, Kai

    2015-12-01

    In surveys, individuals tend to misreport behaviors that conflict with prevalent social norms or regulations. Several design features of the survey procedure have been suggested to counteract this problem; in particular, computerized surveys are supposed to elicit more truthful responding. This assumption was tested in a meta-analysis of survey experiments reporting 460 effect sizes (total N = 125,672). Self-reported prevalence rates of several sensitive behaviors for which motivated misreporting has been frequently observed were compared across self-administered paper-and-pencil versus computerized surveys. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and for surveys administered individually to respondents. Moderator analyses did not identify interviewer effects or benefits of audio-enhanced computer surveys. The meta-analysis highlights the advantages of computerized survey modes for the assessment of sensitive topics.
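    The pooling step behind such a meta-analysis can be sketched with the DerSimonian-Laird random-effects estimator, a standard choice when effect sizes vary across studies; the authors' actual model may differ, and the per-study numbers below are invented for illustration.

```python
import math

# Invented example: per-study effect sizes (e.g., differences in reported
# prevalence, computerized vs. paper) with their sampling variances.
effects = [0.30, 0.15, 0.45, 0.20, 0.10]
variances = [0.02, 0.05, 0.04, 0.03, 0.06]

def dersimonian_laird(y, v):
    """Random-effects pooled estimate using the DerSimonian-Laird tau^2."""
    w = [1.0 / vi for vi in v]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)         # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

pooled, se = dersimonian_laird(effects, variances)
```

The pooled estimate always lies within the range of the individual study effects, and the standard error shrinks as precise studies accumulate, which is what lets a meta-analysis of 460 effect sizes detect mode differences that single experiments cannot.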

  3. Highly Sensitive and High-Throughput Method for the Analysis of Bisphenol Analogues and Their Halogenated Derivatives in Breast Milk.

    Science.gov (United States)

    Niu, Yumin; Wang, Bin; Zhao, Yunfeng; Zhang, Jing; Shao, Bing

    2017-12-06

    The structural analogs of bisphenol A (BPA) and their halogenated derivatives (together termed BPs) have been found in the environment, food, and even the human body. Limited research has shown that some of them exhibit toxicities similar to or even greater than that of BPA. Therefore, adverse health effects of BPs are expected for humans with low-dose exposure in early life. Breast milk is an excellent matrix and can reflect fetuses' and babies' exposure to contaminants. Some of the emerging BPs may be present at trace or ultratrace levels in humans. However, existing analytical methods for breast milk cannot quantify these BPs simultaneously with high sensitivity using a small sampling weight, which is important for human biomonitoring studies. In this paper, a method based on Bond Elut Enhanced Matrix Removal-Lipid purification, pyridine-3-sulfonyl chloride derivatization, and liquid chromatography electrospray tandem mass spectrometry was developed. The method requires only a small quantity of sample (200 μL) and allowed for the simultaneous determination of 24 BPs in breast milk with ultrahigh sensitivity. The limits of quantitation of the proposed method were 0.001-0.200 μg L-1, which were 1-6.7 times lower than those of the only previous study on the simultaneous analysis of bisphenol analogs in breast milk, which was based on a 3 g sample weight. The mean recoveries ranged from 86.11% to 119.05% with relative standard deviation (RSD) ≤ 19.5% (n = 6). Matrix effects were within 20%. Among the BPs, BPA, bisphenol F (BPF), bisphenol S (BPS), and bisphenol AF (BPAF) were detected. BPA was still the dominant BP, followed by BPF. This is the first report describing the occurrence of BPF and BPAF in breast milk.

  4. High-field modulated ion-selective field-effect-transistor (FET) sensors with sensitivity higher than the ideal Nernst sensitivity.

    Science.gov (United States)

    Chen, Yi-Ting; Sarangadharan, Indu; Sukesan, Revathi; Hseih, Ching-Yen; Lee, Geng-Yen; Chyi, Jen-Inn; Wang, Yu-Lin

    2018-05-29

    Lead-ion-selective membrane (Pb-ISM) coated AlGaN/GaN high electron mobility transistors (HEMTs) were used to demonstrate a whole new methodology for ion-selective FET sensors, which can create ultra-high sensitivity (-36 mV/log [Pb2+]) surpassing the ideal sensitivity limit (-29.58 mV/log [Pb2+]) of the typical Nernst equation for the lead ion. The greatly improved sensitivity reduced the detection limit (10^-10 M) by several orders of magnitude of lead ion concentration compared to a typical ion-selective electrode (ISE) (10^-7 M). The high sensitivity was obtained by creating a strong field between the gate electrode and the HEMT channel. A systematic investigation was performed by measuring different sensor designs and gate biases, indicating that ultra-high sensitivity and an ultra-low detection limit are obtained only in a sufficiently strong field. A theoretical study of the sensitivity consistently agrees with the experimental findings and predicts the maximum and minimum sensitivity. The detection limit of our sensor is comparable to that of inductively coupled plasma mass spectrometry (ICP-MS), which also has a detection limit near 10^-10 M.
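    The "ideal Nernst sensitivity" quoted in this record follows directly from the Nernst equation: the potentiometric slope per decade of ion activity is 2.303·RT/(zF), with the sign set by the measurement convention. A quick check with standard constants at 298.15 K reproduces the 29.58 mV/decade figure for a divalent ion such as Pb2+ (the sign here follows the abstract's convention):

```python
import math

R = 8.314462618      # gas constant, J/(mol*K)
F = 96485.33212      # Faraday constant, C/mol
T = 298.15           # temperature, K

def nernst_slope_mv(z):
    """Ideal potentiometric slope in mV per decade of activity for charge z."""
    return -1000.0 * math.log(10) * R * T / (z * F)

print(round(nernst_slope_mv(2), 2))  # -29.58 (divalent ion, e.g. Pb2+)
print(round(nernst_slope_mv(1), 2))  # -59.16 (monovalent ion)
```

The reported -36 mV/log [Pb2+] exceeds this thermodynamic limit in magnitude, which is why the field-modulated gating mechanism, rather than the membrane potential alone, must account for the extra response.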

  5. Highly sensitive refractive index fiber inline Mach-Zehnder interferometer fabricated by femtosecond laser micromachining and chemical etching

    Science.gov (United States)

    Sun, Xiao-Yan; Chu, Dong-Kai; Dong, Xin-Ran; Zhou, Chu; Li, Hai-Tao; Luo-Zhi; Hu, You-Wang; Zhou, Jian-Ying; Cong-Wang; Duan, Ji-An

    2016-03-01

    A highly sensitive refractive index (RI) sensor based on a Mach-Zehnder interferometer (MZI) in a conventional single-mode optical fiber is proposed, fabricated by a femtosecond laser transversal-scanning inscription method followed by chemical etching. A rectangular cavity is formed across part of the fiber core and the core-cladding interface. The MZI sensor shows excellent refractive index sensitivity and linearity, exhibiting an extremely high RI sensitivity of -17197 nm/RIU (refractive index unit) with a linearity of 0.9996 within the refractive index range 1.3371-1.3407. The experimental results are consistent with theoretical analysis.

  6. Using sensitivity analysis to identify key factors for the propagation of a plant epidemic.

    Science.gov (United States)

    Rimbaud, Loup; Bruchou, Claude; Dallot, Sylvie; Pleydell, David R J; Jacquot, Emmanuel; Soubeyrand, Samuel; Thébaud, Gaël

    2018-01-01

    Identifying the key factors underlying the spread of a disease is an essential but challenging prerequisite for designing management strategies. To tackle this issue, we propose an approach based on sensitivity analyses of a spatiotemporal stochastic model simulating the spread of a plant epidemic. This work is motivated by the spread of sharka, caused by plum pox virus, in a real landscape. We first carried out a broad-range sensitivity analysis, ignoring any prior information on six epidemiological parameters, to assess their intrinsic influence on model behaviour. A second analysis benefited from the available knowledge on sharka epidemiology and was thus restricted to more realistic values. The broad-range analysis revealed that the mean duration of the latent period is the most influential parameter of the model, whereas the sharka-specific analysis uncovered the strong impact of the connectivity of the first infected orchard. In addition to demonstrating the value of sensitivity analyses for a stochastic model, this study highlights the impact of the variation ranges of target parameters on the outcome of a sensitivity analysis. With regard to sharka management, our results suggest that surveillance may benefit from paying closer attention to highly connected patches whose infection could trigger serious epidemics.

  7. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

    Allergens tend to sensitize simultaneously, a phenomenon attributed to allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate allergen sensitization characteristics by sex, we used the multiple allergen simultaneous test (MAST), which is widely employed as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analyses demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tended to sensitize the female group more frequently than the male group.

  8. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  9. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
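    As a concrete illustration of the variance-based approach reviewed here, the sketch below estimates first-order Sobol' indices for a toy nonlinear model by Monte Carlo "pick-freeze" sampling; the model, sample size, and input distributions are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy nonlinear measurement model: Y = X1^2 + 0.5*X2, X1, X2 ~ U(0, 1).
    return x[:, 0] ** 2 + 0.5 * x[:, 1]

N, d = 100_000, 2
A = rng.random((N, d))           # two independent Monte Carlo input samples
B = rng.random((N, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # "pick-freeze": replace column i of A by B's
    S.append(np.mean(fB * (model(ABi) - fA)) / var_y)  # Saltelli-type estimator

print([round(s, 2) for s in S])  # analytic values: S1 ~ 0.81, S2 ~ 0.19
```

    Because the model is nonlinear in X1, the first-order index of X1 dominates even though both inputs enter with comparable coefficients, which is exactly the kind of ranking insight the article argues sensitivity analysis adds beyond the linear law of propagation of uncertainties.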

  10. Silicon nanowire structures as high-sensitive pH-sensors

    International Nuclear Information System (INIS)

    Belostotskaya, S O; Chuyko, O V; Kuznetsov, A E; Kuznetsov, E V; Rybachek, E N

    2012-01-01

    Sensitive elements for pH sensors based on silicon nanostructures were investigated. The silicon nanostructures were used as ion-sensitive field effect transistors (ISFETs) for measuring solution pH. They were fabricated by a 'top-down' approach and studied as pH-sensitive elements; the nanowires showed the higher sensitivity. It was shown that a sensitive element made of a 'one-dimensional' silicon nanostructure has greater pH sensitivity than a 'two-dimensional' structure. An integrated element formed from two p- and n-type nanowire ISFETs (an 'inverter') can be used as a high-sensitivity sensor for local relative changes of [H⁺] concentration in very small volumes.

  11. A new method of removing the high value feedback resistor in the charge sensitive preamplifier

    International Nuclear Information System (INIS)

    Xi Deming

    1993-01-01

    A new method of removing the high value feedback resistor in the charge sensitive preamplifier is introduced. The circuit analysis of this novel design is described and the measured performances of a practical circuit are provided

  12. Development of a highly sensitive lithium fluoride thermoluminescence dosimeter

    International Nuclear Information System (INIS)

    Moraes da Silva, Teresinha de; Campos, Leticia Lucente

    1995-01-01

    In recent times, LiF:Mg,Cu,P thermoluminescent phosphor has been increasingly used for radiation monitoring due to its high sensitivity and ease of preparation. The Dosimetric Materials Production Laboratory of IPEN (Nuclear Energy Institute) has developed a simple method to obtain high sensitivity LiF. The preparation method is described. (author). 4 refs., 1 fig., 1 tab

  13. Phase sensitive diffraction sensor for high sensitivity refractive index measurement

    Science.gov (United States)

    Kumawat, Nityanand; Varma, Manoj; Kumar, Sunil

    2018-02-01

    In this study, a diffraction-based sensor has been developed for biomolecular sensing applications and for performing assays in real time. A diffraction grating fabricated on a glass substrate produces diffraction patterns both in transmission and reflection when illuminated by a laser diode. We used the zeroth order I(0,0) as reference and the first order I(0,1) as the signal channel and conducted ratiometric measurements, which reduced noise by more than 50 times. The ratiometric approach results in very simple instrumentation with very high sensitivity. In the past, we have shown refractive index measurements both for bulk solutions and for surface adsorption using this diffractive self-referencing approach. In the current work we extend the same concept to higher diffraction orders. We considered the orders I(0,1) and I(1,1) and performed ratiometric measurements I(0,1)/I(1,1) to eliminate common-mode fluctuations. Since the orders I(0,1) and I(1,1) behave opposite to each other, the resulting ratio signal amplitude increased to more than twice that of our previous results. As a proof of concept we used different salt concentrations in DI water. The increased signal amplitude and an improved fluid injection system yielded a more than 4-fold improvement in detection limit, giving a limit of detection of 1.3×10⁻⁷ refractive index units (RIU). The improved refractive index sensitivity will significantly help high-sensitivity label-free biosensing applications in a very cost-effective and simple experimental setup.
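    The common-mode rejection behind such ratiometric measurements can be illustrated with a minimal numerical sketch, assuming purely multiplicative laser power noise acting equally on both diffraction orders (illustrative values, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(1)

# True (noise-free) intensities of two diffraction orders.
signal_true, reference_true = 1.20, 0.80

# Common-mode laser power fluctuation multiplies BOTH orders equally.
power = 1.0 + 0.05 * rng.standard_normal(10_000)

signal = signal_true * power
reference = reference_true * power

# Relative noise of a single order vs. the ratio of the two orders:
print(np.std(signal) / np.mean(signal))   # roughly 5% fluctuation
print(np.std(signal / reference))          # 0.0: the common mode cancels exactly
```

    Any residual noise in a real instrument comes from sources that do not act identically on both orders (detector noise, stray light), which is why the authors' ratio of two strongly anti-correlated orders also roughly doubles the signal amplitude on top of the noise cancellation.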

  14. Measurement system for high-sensitivity LIBS analysis using ICCD camera in LabVIEW environment

    International Nuclear Information System (INIS)

    Zaytsev, S M; Popov, A M; Zorov, N B; Labutin, T A

    2014-01-01

    A measurement system based on an ultrafast (up to 10 ns time resolution) intensified CCD detector "Nanogate-2V" (Nanoscan, Russia) was developed for high-sensitivity analysis by laser-induced breakdown spectrometry (LIBS). The LabVIEW environment provided a high level of compatibility with a variety of electronic instruments and easy development of the user interface, while the Visual Studio environment was used to create a LabVIEW-compatible DLL library from the "Nanogate-2V" SDK. The program for camera management and registration of laser-induced plasma spectra was created using the Call Library Node in LabVIEW. An algorithm for integrating a second device, the ADC "PCI-9812" (ADLINK), into the measurement system was proposed and successfully implemented, allowing simultaneous registration of emission and acoustic signals under laser ablation. The measured resolving power of the spectrometer-ICCD system was 12000 at 632 nm. The electron density of the laser plasma was estimated using the H-α Balmer line. Steel spectra obtained at different delays were used to select the optimal conditions for registration of the manganese analytical signal. Accumulation of spectra from several laser pulses was also demonstrated; it allowed reliable observation of the silver signal at 328.07 nm in the LIBS spectra of soil (C(Ag) = 4.5 ppm). Finally, a correlation between the acoustic and emission signals of the plasma was found. Thus, the technical possibilities of the developed LIBS system were demonstrated both for plasma diagnostics and for analytical measurements

  15. Heterogeneous catalysis in highly sensitive microreactors

    DEFF Research Database (Denmark)

    Olsen, Jakob Lind

    This thesis presents a highly sensitive silicon microreactor and examples of its use in studying catalysis. The experimental setup built for gas handling and temperature control for the microreactor is described. The implementation of LabVIEW interfacing for all the experimental parts makes...

  16. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  17. A Highly Sensitive Multicommuted Flow Analysis Procedure for Photometric Determination of Molybdenum in Plant Materials without a Solvent Extraction Step

    Directory of Open Access Journals (Sweden)

    Felisberto G. Santos

    2017-01-01

    A highly sensitive analytical procedure for the photometric determination of molybdenum in plant materials was developed and validated. The procedure is based on the reaction of Mo(V) with thiocyanate ions (SCN−) in acidic medium to form a compound that can be monitored at 474 nm, and was implemented in a multicommuted flow analysis setup. Photometric detection was performed using an LED-based photometer coupled to a flow cell with a long optical path length (200 mm) to achieve high sensitivity, allowing Mo(V) determination at the μg L⁻¹ level without an organic solvent extraction step. After optimization of the operational conditions, samples of digested plant materials were analyzed with the proposed procedure. Accuracy was assessed by comparing the obtained results with those of a reference method, with agreement observed at the 95% confidence level. In addition, a detection limit of 9.1 μg L⁻¹, a linear response (r = 0.9969) over the concentration range 50-500 μg L⁻¹, generation of only 3.75 mL of waste per determination, and a sampling rate of 51 determinations per hour were achieved.

  18. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf

    2013-10-10

    Background Although transcription in mammalian genomes can initiate from various genomic positions (e.g., 3′UTR, coding exons, etc.), most locations on genomes are not prone to transcription initiation. It is of practical and theoretical interest to be able to estimate such collections of non-TSS locations (NTLs). The identification of large portions of NTLs can contribute to better focusing the search for TSS locations and thus contribute to promoter and gene finding. It can help in the assessment of 5′ completeness of expressed sequences, contribute to more successful experimental designs, as well as more accurate gene annotation. Methodology Using comprehensive collections of Cap Analysis of Gene Expression (CAGE) and other transcript data from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly unlikely to harbor transcription start sites (TSSs). The properties of the immediate genomic neighborhood of 98,682 accurately determined mouse and 113,814 human TSSs are used to determine features that distinguish genomic transcription initiation locations from those that are not likely to initiate transcription. In our algorithm we utilize various constraining properties of features identified in the upstream and downstream regions around TSSs, as well as statistical analyses of these surrounding regions. Conclusions Our analysis of human chromosomes 4, 21 and 22 estimates ~46%, ~41% and ~27% of these chromosomes, respectively, as being NTLs. This suggests that on average more than 40% of the human genome can be expected to be highly unlikely to initiate transcription. Our method represents the first one that utilizes high-sensitivity TSS prediction to identify, with high accuracy, large portions of mammalian genomes as NTLs. The server with our algorithm implemented is

  19. The selectively bred high alcohol sensitivity (HAS) and low alcohol sensitivity (LAS) rats differ in sensitivity to nicotine.

    Science.gov (United States)

    de Fiebre, NancyEllen C; Dawson, Ralph; de Fiebre, Christopher M

    2002-06-01

    Studies in rodents selectively bred to differ in alcohol sensitivity have suggested that nicotine and ethanol sensitivities may cosegregate during selective breeding. This suggests that ethanol and nicotine sensitivities may in part be genetically correlated. Male and female high alcohol sensitivity (HAS), control alcohol sensitivity, and low alcohol sensitivity (LAS) rats were tested for nicotine-induced alterations in locomotor activity, body temperature, and seizure activity. Plasma and brain levels of nicotine and its primary metabolite, cotinine, were measured in these animals, as was the binding of [3H]cytisine, [3H]epibatidine, and [125I]alpha-bungarotoxin in eight brain regions. Both replicate HAS lines were more sensitive to nicotine-induced locomotor activity depression than the replicate LAS lines. No consistent HAS/LAS differences were seen on other measures of nicotine sensitivity; however, females were more susceptible to nicotine-induced seizures than males. No HAS/LAS differences in nicotine or cotinine levels were seen, nor were differences seen in the binding of nicotinic ligands. Females had higher levels of plasma cotinine and brain nicotine than males but had lower brain cotinine levels than males. Sensitivity to a specific action of nicotine cosegregates during selective breeding for differential sensitivity to a specific action of ethanol. The differential sensitivity of the HAS/LAS rats is due to differences in central nervous system sensitivity and not to pharmacokinetic differences. The differential central nervous system sensitivity cannot be explained by differences in the numbers of nicotinic receptors labeled in ligand-binding experiments. The apparent genetic correlation between ethanol and nicotine sensitivities suggests that common genes modulate, in part, the actions of both ethanol and nicotine and may explain the frequent coabuse of these agents.

  20. Rapid analysis of heterogeneously methylated DNA using digital methylation-sensitive high resolution melting: application to the CDKN2B (p15) gene

    DEFF Research Database (Denmark)

    Candiloro, Ida Lm; Mikeska, Thomas; Hokland, Peter

    2008-01-01

    BACKGROUND: Methylation-sensitive high resolution melting (MS-HRM) methodology is able to recognise heterogeneously methylated sequences by their characteristic melting profiles. To further analyse heterogeneously methylated sequences, we adopted a digital approach to MS-HRM (dMS-HRM) that involves the amplification of single templates after limiting dilution to quantify and to determine the degree of methylation. We used this approach to study methylation of the CDKN2B (p15) cell cycle progression inhibitor gene, which is inactivated by DNA methylation in haematological malignancies... the methylated alleles and assess the degree of methylation. Direct sequencing of selected dMS-HRM products was used to determine the exact DNA methylation pattern and confirmed the degree of methylation estimated by dMS-HRM. CONCLUSION: dMS-HRM is a powerful technique for the analysis of methylation in CDKN2B...
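    Quantification after limiting dilution typically rests on Poisson statistics: if a fraction p of single-template reactions scores positive, the mean number of target templates per reaction is λ = -ln(1 - p). A minimal sketch of this general digital-assay calculation (the counts are made up for illustration and are not the study's data):

```python
import math

def templates_per_reaction(positive, total):
    """Poisson estimate of mean target templates per reaction in a digital assay."""
    p = positive / total
    if p >= 1.0:
        raise ValueError("all reactions positive: dilute further and repeat")
    return -math.log(1.0 - p)

# e.g. 30 of 96 single-template reactions show a methylated melting profile
lam = templates_per_reaction(30, 96)
print(round(lam, 3))  # 0.375 methylated templates per reaction on average
```

    The correction matters because at higher loadings some positive reactions contain more than one template; the simple positive fraction (30/96 ≈ 0.313) would undercount them.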

  1. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    Participation factors (PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the bus that most strongly excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...

  2. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied

  3. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  4. Analysis of Hydrological Sensitivity for Flood Risk Assessment

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Sharma

    2018-02-01

    In order to support the Indian government's Integrated Water Resource Management (IWRM), the Brahmaputra River has played an important role in the Pilot Basin Study (PBS) owing to its annual regional flooding. The selected Kulsi River, part of the Brahmaputra sub-basin, experienced severe floods in 2007 and 2008. In this study, the Rainfall-Runoff-Inundation (RRI) hydrological model was used to simulate the recent historical floods in order to understand and improve the integrated flood risk management plan. The ultimate objective was to evaluate the sensitivity of the hydrologic simulation to different Digital Elevation Model (DEM) sources, coupled with DEM smoothing techniques, with a particular focus on the comparison of river discharge and flood inundation extent. The sensitivity analysis showed that, among the input parameters, the RRI model is most sensitive to Manning's roughness coefficient for flood plains, followed by the source of the DEM, and then soil depth. After parameter optimization, the smoothing filter influenced the simulated inundation extent more than the simulated discharge at the outlet. Finally, the calibrated and validated RRI model simulations agreed well with the observed discharge and the Moderate Resolution Imaging Spectroradiometer (MODIS)-detected flood extents.

  5. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSMs, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  6. A high sensitivity nanomaterial based SAW humidity sensor

    Energy Technology Data Exchange (ETDEWEB)

    Wu, T-T; Chou, T-H [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Chen, Y-Y [Department of Mechanical Engineering, Tatung University, Taipei 104, Taiwan (China)], E-mail: wutt@ndt.iam.ntu.edu.tw

    2008-04-21

    In this paper, a highly sensitive humidity sensor is reported. The humidity sensor is configured from a 128°YX-LiNbO₃-based surface acoustic wave (SAW) resonator operating at 145 MHz. A dual delay line configuration is used to eliminate external temperature fluctuations. Moreover, because nanostructured materials possess a high surface-to-volume ratio, large penetration depth and fast charge diffusion rate, camphor sulfonic acid doped polyaniline (PANI) nanofibres were synthesized by the interfacial polymerization method and deposited on the SAW resonator as a selective coating to enhance sensitivity. The humidity sensor was used to measure relative humidities in the range 5-90% at room temperature. Results show that the PANI nanofibre based SAW humidity sensor exhibits excellent sensitivity and short-term repeatability.

  7. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
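    The basis set expansion step can be sketched with a toy stand-in for the landslide model: stack the simulated displacement time series as rows, centre them, and take an SVD (equivalent to principal component analysis); variance-based sensitivity analysis then operates on a few mode amplitudes instead of the full time series. The two-parameter model below is illustrative only, not the La Frasse model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for a long-running landslide model: a displacement time series
# whose shape depends on two hypothetical parameters (friction, pore pressure).
t = np.linspace(0.0, 1.0, 200)

def displacement(friction, pore_pressure):
    return pore_pressure * t + (1.0 - friction) * t ** 2

# Run the "model" for a small design of parameter sets.
params = rng.random((50, 2))
Y = np.array([displacement(p[0], p[1]) for p in params])  # 50 runs x 200 times

# Basis set expansion: centre the runs and take an SVD (equivalent to PCA).
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = U * s            # each column: one scalar mode amplitude per run

print(np.round(explained[:3], 3))  # two modes carry essentially all variance
# Sensitivity analysis (Sobol' indices via a meta-model, ...) now operates on
# scores[:, 0] and scores[:, 1] instead of the 200-dimensional output.
```

    Here the toy model has exactly two modes of variation, so two components suffice; for a real landslide simulation a few components typically capture most, but not all, of the temporal variability.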

  8. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  9. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    Science.gov (United States)

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. The metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. The objectives were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6063 young people of the México City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value (precision), negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men); blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex were detected in the averages of all MetS components in young people without alterations. Young Mexicans are highly prone to acquire MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
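    The five statistics reported in the sensitivity analysis all follow from a 2×2 confusion matrix; a minimal sketch with hypothetical counts (not the study's data):

    ```python
    def diagnostic_stats(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
        return {
            "sensitivity": tp / (tp + fn),   # true positives among those with MetS
            "specificity": tn / (tn + fp),   # true negatives among those without
            "ppv":         tp / (tp + fp),   # positive predictive value (precision)
            "npv":         tn / (tn + fn),   # negative predictive value
            "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        }

    # Hypothetical counts for one MetS component used as a predictor of MetS
    stats = diagnostic_stats(tp=93, fp=151, fn=3, tn=753)
    print({k: round(v, 3) for k, v in stats.items()})
    ```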

  10. Application of Wielandt method in continuous-energy nuclear data sensitivity analysis with RMC code

    International Nuclear Information System (INIS)

    Qiu Yishu; Wang Kan; She Ding

    2015-01-01

    The Iterated Fission Probability (IFP) method, an accurate method to estimate adjoint-weighted quantities in continuous-energy Monte Carlo criticality calculations, has been widely used for calculating kinetic parameters and nuclear data sensitivity coefficients. However, because it relies on a strategy of waiting, this method faces the challenge of high memory usage: the tallies of original contributions it must store grow in proportion to the number of particle histories in each cycle. Recently, the Wielandt method, applied by the Monte Carlo code McCARD to calculate kinetic parameters, has been used to estimate adjoint fluxes within a single particle history and thus save memory. In this work, the Wielandt method has been applied in the Reactor Monte Carlo code RMC for nuclear data sensitivity analysis. The methodology and algorithm of applying the Wielandt method to the estimation of adjoint-based sensitivity coefficients are discussed. Verification is performed by comparing the sensitivity coefficients calculated by the Wielandt method with analytical solutions, with those computed by the IFP method, which is also implemented in RMC for sensitivity analysis, and with those from the multi-group TSUNAMI-3D module in the SCALE code package. (author)

  11. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.

  12. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  13. Highly sensitive microcalorimeters for radiation research

    International Nuclear Information System (INIS)

    Avaev, V.N.; Demchuk, B.N.; Ioffe, L.A.; Efimov, E.P.

    1984-01-01

    Calorimetry is used in research at various types of nuclear-physics installations to obtain information on the quantitative and qualitative composition of ionizing radiation in a reactor core and in the surrounding layers of the biological shield. In this paper, the authors examine the characteristics of highly sensitive microcalorimeters with modular semiconductor heat pickups designed for operation in reactor channels. The microcalorimeters have a thin-walled aluminum housing on whose inner surface modular heat pickups are placed radially as shown here. The results of measurements of the temperature dependence of the sensitivity of the microcalorimeters are shown. The results of measuring the sensitivity of a PMK-2 microcalorimeter assembly as a function of integrated neutron flux for three energy intervals and the adsorbed gamma energy are shown. In order to study specimens with different shapes and sizes, microcalorimeters with chambers in the form of cylinders and a parallelepiped were built and tested

  14. Mass Spectrometry-based Assay for High Throughput and High Sensitivity Biomarker Verification

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Xuejiang; Tang, Keqi

    2017-06-14

    Searching for disease-specific biomarkers has become a major undertaking in the biomedical research field, as the effective diagnosis, prognosis and treatment of many complex human diseases are largely determined by the availability and the quality of the biomarkers. A successful biomarker, as an indicator of a specific biological or pathological process, is usually selected from a large group of candidates by a strict verification and validation process. To be clinically useful, the validated biomarkers must be detectable and quantifiable by the selected testing techniques in their related tissues or body fluids. Because blood is easily accessible, protein biomarkers would ideally be identified in blood plasma or serum. However, most disease-related protein biomarkers in blood exist at very low concentrations (<1 ng/mL) and are “masked” by many non-significant species at concentrations orders of magnitude higher. The extreme requirements on measurement sensitivity, dynamic range and specificity make method development extremely challenging. Current clinical protein biomarker measurement relies primarily on antibody-based immunoassays, such as ELISA. Although the technique is sensitive and highly specific, the development of a high-quality protein antibody is both expensive and time consuming. The limited capability for assay multiplexing also makes the measurement extremely low-throughput, rendering it impractical when hundreds to thousands of potential biomarkers need to be quantitatively measured across multiple samples. Mass spectrometry (MS)-based assays have recently been shown to be a viable alternative for high-throughput, quantitative candidate protein biomarker verification. Among them, the triple quadrupole MS based assay is the most promising one. When it is coupled with liquid chromatography (LC) separation and an electrospray ionization (ESI) source, a triple quadrupole mass spectrometer operating in a special selected reaction monitoring (SRM) mode

  15. Uncertainty and sensitivity analysis in the 2008 performance assessment for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.

    2010-01-01

    Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, an extensive performance assessment (PA) for the YM repository was completed in 2008 (1) and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository (2). This presentation provides an overview of the conceptual and computational structure of the indicated PA (hereafter referred to as the 2008 YM PA) and the roles that uncertainty analysis and sensitivity analysis play in this structure.

  16. Palladium Gate All Around - Hetero Dielectric -Tunnel FET based highly sensitive Hydrogen Gas Sensor

    Science.gov (United States)

    Madan, Jaya; Chaujar, Rishu

    2016-12-01

    The paper presents a novel, highly sensitive Hetero-Dielectric Gate All Around Tunneling FET (HD-GAA-TFET) based hydrogen gas sensor, incorporating the advantages of the band to band tunneling (BTBT) mechanism. Here, palladium-supported silicon dioxide is used as the sensing medium, and sensing relies on the interaction of hydrogen with Pd-SiO2-Si. The high surface-to-volume ratio of the cylindrical GAA structure enhances the opportunities for surface reactions between H2 gas and Pd, and thus improves the sensitivity and stability of the sensor. The behaviour of the sensor in the presence of hydrogen and at elevated temperatures is discussed. The conduction path of the sensor, which depends on the sensor's radius, has also been varied for optimized sensitivity and static performance analysis, where the proposed design exhibits superior performance in terms of threshold voltage, subthreshold swing, and band to band tunneling rate. The stability of the sensor with respect to temperature has also been studied, and it is found that the device is reasonably stable and highly sensitive over the tolerable temperature range. The successful utilization of the HD-GAA-TFET in gas sensors may open a new door for the development of novel nanostructured gas sensing devices.

  17. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For cost-effectiveness analyses, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.

  18. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

    Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037

  19. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.

  20. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
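    The Monte-Carlo reference indices mentioned above can be reproduced for a standard benchmark; below is a sketch of a Saltelli-type estimator of first-order Sobol' indices for the Ishigami function (a common test case, not one of the paper's structural examples):

    ```python
    import numpy as np

    def ishigami(x, a=7.0, b=0.1):
        """Ishigami test function with known analytic Sobol' indices."""
        return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
                + b * x[:, 2]**4 * np.sin(x[:, 0]))

    rng = np.random.default_rng(1)
    N, d = 200_000, 3
    A = rng.uniform(-np.pi, np.pi, size=(N, d))   # two independent sample blocks
    B = rng.uniform(-np.pi, np.pi, size=(N, d))
    fA, fB = ishigami(A), ishigami(B)
    var = np.var(np.concatenate([fA, fB]))

    S = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # replace column i of A with B's
        # Saltelli (2010) estimator of the first-order index S_i
        S.append(np.mean(fB * (ishigami(ABi) - fA)) / var)

    print([round(s, 3) for s in S])               # analytic: 0.314, 0.442, 0.0
    ```

    With 200 000 samples per block the estimates typically match the analytic values to two decimal places; the point of the meta-model approaches above is precisely to avoid this many model runs.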

  1. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

    Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters for the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means of performing sensitivity analysis of general non-linear environmental radionuclide transport models. (author)
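    One of the estimators named above, the standardized rank regression coefficient (SRRC), can be sketched in a few lines; the monotonic model below is an illustrative stand-in, not the transport code:

    ```python
    import numpy as np

    def srrc(X, y):
        """Standardized rank regression coefficients of y on the columns of X."""
        rX = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # ranks per column
        ry = np.argsort(np.argsort(y)).astype(float)
        rX = (rX - rX.mean(axis=0)) / rX.std(axis=0)                  # standardize ranks
        ry = (ry - ry.mean()) / ry.std()
        coef, *_ = np.linalg.lstsq(rX, ry, rcond=None)
        return coef

    rng = np.random.default_rng(2)
    X = rng.uniform(size=(2000, 3))
    y = 5.0 * X[:, 0] + np.exp(X[:, 1]) + 0.1 * X[:, 2]   # toy monotonic model
    print(np.round(srrc(X, y), 2))
    ```

    Because the model is monotonic, the rank-based coefficients recover the input importance ordering: the first input dominates, the third is nearly negligible.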

  2. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  3. High sensitivity of quick view capsule endoscopy for detection of small bowel Crohn's disease

    DEFF Research Database (Denmark)

    Halling, Morten Lee; Nathan, Torben; Kjeldsen, Jens

    2014-01-01

    Capsule endoscopy (CE) has a high sensitivity for diagnosing small bowel Crohn's disease, but video analysis is time consuming. The quick view (qv) function is an effective tool to reduce time consumption. The aim of this study was to determine the rate of missed small bowel ulcerations with qv-C...

  4. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
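    The distinction drawn above can be illustrated numerically: a persistent (PSA-style) perturbation changes a parameter for all time, while an impulse (iPSA-style) perturbation applies it only over a short window, revealing when the parameter matters. A sketch on a simple first-order ODE (an illustrative model, not one from the paper):

    ```python
    def simulate(k_profile, d=1.0, x0=0.0, dt=0.01, T=5.0):
        """Forward-Euler integration of dx/dt = k(t) - d*x; returns x(T)."""
        x = x0
        for i in range(int(T / dt)):
            x += dt * (k_profile(i * dt) - d * x)
        return x

    k0, dk, width = 2.0, 0.5, 0.2

    base = simulate(lambda t: k0)
    persistent = simulate(lambda t: k0 + dk)            # PSA-style perturbation

    # iPSA-style: the same perturbation applied only as a brief impulse,
    # introduced at different times t0
    effects = {t0: simulate(lambda t, t0=t0: k0 + dk * (t0 <= t < t0 + width)) - base
               for t0 in (0.5, 2.0, 4.0)}

    print(round(persistent - base, 3),
          {t0: round(v, 4) for t0, v in effects.items()})
    ```

    Here the late impulse perturbs the final state far more than the early ones, because early effects decay away; a persistent perturbation integrates over all of them and loses exactly this timing information.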

  5. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    International Nuclear Information System (INIS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-01-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as the Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of 0.1 mm × 0.1 mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection (LDA-sfs). With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of LDA-sfs, although the latter provided a higher total area (Az) under the ROC curve. By setting an appropriate decision threshold, the high-sensitivity classifier and LDA-sfs correctly
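    The GA fitness described above, the ROC partial area index, can be sketched for generic classifier scores; the scores below are synthetic, not the mammography data:

    ```python
    import numpy as np

    def partial_area_index(scores_pos, scores_neg, sens_min=0.9):
        """Average specificity over sensitivities in [sens_min, 1] (higher is better)."""
        # Sweep the decision threshold over all positive scores: classifying
        # "malignant" when score >= threshold gives one (sensitivity, specificity)
        # operating point per threshold.
        thresholds = np.sort(scores_pos)
        sens = np.array([(scores_pos >= t).mean() for t in thresholds])
        spec = np.array([(scores_neg < t).mean() for t in thresholds])
        keep = sens >= sens_min          # restrict to the high-sensitivity region
        return spec[keep].mean()

    rng = np.random.default_rng(3)
    neg = rng.normal(0.0, 1.0, 500)      # hypothetical benign scores
    pos = rng.normal(2.0, 1.0, 500)      # hypothetical malignant scores
    print(round(partial_area_index(pos, neg), 3))
    ```

    A GA would then search over feature subsets, scoring each candidate classifier with this index instead of the total area under the ROC curve.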

  6. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied to models with parameters represented by the uniform probability distribution function only, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters have been investigated. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis

  7. Highly sensitive analysis of boron and lithium in aqueous solution using dual-pulse laser-induced breakdown spectroscopy.

    Science.gov (United States)

    Lee, Dong-Hyoung; Han, Sol-Chan; Kim, Tae-Hyeong; Yun, Jong-Il

    2011-12-15

    We have applied dual-pulse laser-induced breakdown spectroscopy (DP-LIBS) to sensitively detect concentrations of boron and lithium in aqueous solution. Sequential laser pulses from two separate Q-switched Nd:YAG lasers at 532 nm wavelength were employed to generate laser-induced plasma on a water jet. To achieve sensitive elemental detection, the optimal timing between the two laser pulses was investigated. The optimum interpulse delay was found to be less than 3 μs for the B atomic emission lines and approximately 10 μs for the Li atomic emission line. Under these optimized conditions, detection limits of 0.8 ppm for boron and 0.8 ppb for lithium were attained. In particular, the sensitivity for detecting boron by excitation of a laminar liquid jet improved by nearly 2 orders of magnitude on the 80 ppm reported in the literature. These sensitivities of laser-induced breakdown spectroscopy are very practical for the online elemental analysis of boric acid and lithium hydroxide, which serve as neutron absorber and pH controller, respectively, in the primary coolant water of pressurized water reactors.

  8. Sensitivity analysis of VERA-CS and FRAPCON coupling in a multiphysics environment

    International Nuclear Information System (INIS)

    Blakely, Cole; Zhang, Hongbin; Ban, Heng

    2018-01-01

    Highlights: • VERA-CS and FRAPCON coupling. • Uncertainty quantification and sensitivity analysis for coupled VERA-CS and FRAPCON simulations in the multiphysics environment LOTUS. -- Abstract: A demonstration and description of the LOCA Toolkit for US light water reactors (LOTUS) is presented. Through LOTUS, the core simulator VERA-CS developed by CASL is coupled with the fuel performance code FRAPCON. The coupling is performed with consistent uncertainty propagation, with all model inconsistencies well documented. Monte Carlo sampling is performed on a single 17 × 17 fuel assembly with a three-cycle depletion case. Both uncertainty quantification (UQ) and sensitivity analysis (SA) are used at multiple states within the simulation to elucidate the behavior of minimum departure from nucleate boiling ratio (MDNBR), maximum fuel centerline temperature (MFCT), and gap conductance at peak power (GCPP). The SA metrics used are the Pearson correlation coefficient, Sobol sensitivity indices, and density-based, delta moment-independent measures. Results for MDNBR show consistency among all SA measures, as well as for all states throughout the fuel lifecycle. MFCT results contain consistent rankings between SA measures, but show differences throughout the lifecycle. GCPP exhibits predominantly linear relations at low and high burnup, but highly nonlinear relations at intermediate burnup due to abrupt shifts between models. Such behavior is largely undetectable by traditional regression or variance-based methods and demonstrates the utility of density-based methods.

  9. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    Science.gov (United States)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
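    The finite-difference validation step used above is generic; a minimal sketch comparing analytical sensitivities with central differences on a toy response function (illustrative, not the cable-net model):

    ```python
    import numpy as np

    def response(p):
        """Toy stand-in for a structural response (e.g., a node displacement)."""
        return p[0]**2 * np.sin(p[1]) + 3.0 * p[2]

    def analytical_grad(p):
        """Hand-derived sensitivities of the toy response."""
        return np.array([2.0 * p[0] * np.sin(p[1]),
                         p[0]**2 * np.cos(p[1]),
                         3.0])

    def fd_grad(f, p, h=1e-6):
        """Central-difference sensitivities, one parameter at a time."""
        g = np.zeros_like(p)
        for i in range(len(p)):
            e = np.zeros_like(p)
            e[i] = h
            g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
        return g

    p = np.array([1.2, 0.7, -0.4])
    print(np.max(np.abs(analytical_grad(p) - fd_grad(response, p))))
    ```

    The central-difference error is O(h²), so any disagreement much larger than that points to an error in the analytical derivation.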

  10. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States

    Directory of Open Access Journals (Sweden)

    Min-Uk Kim

    2018-05-01

    We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used the OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.
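The dummy regression run in SPSS can be reproduced in any least-squares tool: the categorical factor (here atmospheric stability class) is encoded as 0/1 indicator columns. The sketch below uses invented data and effect sizes, not KORA or ALOHA outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: impact distance driven by an atmospheric stability class
# (A/D/F, categorical) and air temperature (continuous).
n = 300
stability = rng.integers(0, 3, n)            # 0 = A, 1 = D, 2 = F
temp = rng.normal(20.0, 5.0, n)
effect = np.array([0.0, 1.5, 4.0])           # stable class F extends distance most
dist = 2.0 + effect[stability] + 0.05 * temp + rng.normal(0.0, 0.1, n)

# Dummy regression: indicator columns for the categorical factor
# (class A is the baseline), then ordinary least squares.
X = np.column_stack([
    np.ones(n),
    (stability == 1).astype(float),          # dummy for class D
    (stability == 2).astype(float),          # dummy for class F
    temp,
])
coef, *_ = np.linalg.lstsq(X, dist, rcond=None)
```

The fitted dummy coefficients estimate how much each stability class shifts the impact distance relative to the baseline class, which is how a categorical variable can be ranked against continuous ones in a regression-based sensitivity analysis.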

  11. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States.

    Science.gov (United States)

    Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon

    2018-05-18

    We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.

  12. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
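A response surface of the kind described above can be sketched as a quadratic least-squares fit over the driving parameters; the inputs (abort time, destruct delay) and functional form below are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented point estimates of strike probability over abort time t and
# destruct delay d (both scaled to [0, 1]); the functional form is made up.
t = rng.uniform(0.0, 1.0, 200)
d = rng.uniform(0.0, 1.0, 200)
p = 0.1 + 0.4 * t - 0.3 * d + 0.2 * t * d + 0.05 * t**2

# Quadratic response surface: p ≈ b0 + b1*t + b2*d + b3*t*d + b4*t² + b5*d²
X = np.column_stack([np.ones_like(t), t, d, t * d, t**2, d**2])
beta, *_ = np.linalg.lstsq(X, p, rcond=None)
rms_error = float(np.sqrt(np.mean((p - X @ beta) ** 2)))
```

Once fitted, such a surface replaces the expensive debris-trajectory computation inside the overall risk model, which is the role the response surface plays in the sensitivity analysis described above.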

  13. Analysis strategies for high-resolution UHF-fMRI data.

    Science.gov (United States)

    Polimeni, Jonathan R; Renvall, Ville; Zaretskaya, Natalia; Fischl, Bruce

    2018-03-01

    Functional MRI (fMRI) benefits from both increased sensitivity and specificity with increasing magnetic field strength, making it a key application for Ultra-High Field (UHF) MRI scanners. Most UHF-fMRI studies utilize the dramatic increases in sensitivity and specificity to acquire high-resolution data reaching sub-millimeter scales, which enable new classes of experiments to probe the functional organization of the human brain. This review article surveys advanced data analysis strategies developed for high-resolution fMRI at UHF. These include strategies designed to mitigate distortion and artifacts associated with higher fields in ways that attempt to preserve spatial resolution of the fMRI data, as well as recently introduced analysis techniques that are enabled by these extremely high-resolution data. Particular focus is placed on anatomically-informed analyses, including cortical surface-based analysis, which are powerful techniques that can guide each step of the analysis from preprocessing to statistical analysis to interpretation and visualization. New intracortical analysis techniques for laminar and columnar fMRI are also reviewed and discussed. Prospects for single-subject individualized analyses are also presented and discussed. Altogether, there are both specific challenges and opportunities presented by UHF-fMRI, and the use of proper analysis strategies can help these valuable data reach their full potential.

  14. Fabrication and Characterization of High-Sensitivity Underwater Acoustic Multimedia Communication Devices with Thick Composite PZT Films

    Directory of Open Access Journals (Sweden)

    Jeng-Cheng Liu

    2017-01-01

    This paper presents a high-sensitivity hydrophone fabricated with a microelectromechanical systems (MEMS) process using epitaxial thin films grown on silicon wafers. The resonant frequency was evaluated through finite-element analysis (FEA). The hydrophone was designed, fabricated, and characterized by measurements performed in a water tank using a pulsed sound technique, yielding a sensitivity of −190 dB ± 2 dB for frequencies in the range 50–500 Hz. These results point to high-performance miniaturized acoustic devices that can impact a variety of technological applications.

  15. Low Power and High Sensitivity MOSFET-Based Pressure Sensor

    International Nuclear Information System (INIS)

    Zhang Zhao-Hua; Ren Tian-Ling; Zhang Yan-Hong; Han Rui-Rui; Liu Li-Tian

    2012-01-01

    Based on the metal-oxide-semiconductor field effect transistor (MOSFET) stress sensitive phenomenon, a low power MOSFET pressure sensor is proposed. Compared with the traditional piezoresistive pressure sensor, the present pressure sensor displays high performances on sensitivity and power consumption. The sensitivity of the MOSFET sensor is raised by 87%, meanwhile the power consumption is decreased by 20%. (cross-disciplinary physics and related areas of science and technology)

  16. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
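A first-order Sobol index of the kind compared above can be estimated with the pick-freeze (Saltelli-style) scheme. The sketch below uses a made-up additive model, not SAC-SMA; the analytic indices for this toy are S1 = 16/17 and S2 = 1/17.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Additive toy function standing in for the watershed model:
    # the first input dominates the output variance.
    return 4.0 * x[:, 0] + x[:, 1]

# Pick-freeze (Saltelli-style) estimator of first-order Sobol indices.
n, k = 100_000, 2
A = rng.uniform(0.0, 1.0, (n, k))
B = rng.uniform(0.0, 1.0, (n, k))
yA, yB = model(A), model(B)
var_y = yA.var()

S = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # matrix A with column i taken from B
    S.append(float(np.mean(yB * (model(ABi) - yA)) / var_y))
```

The (k + 2) × n model evaluations behind this estimator illustrate why Sobol's method is far more expensive than local methods such as PEST, the trade-off at the heart of the comparison above.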

  17. Single photon detector with high polarization sensitivity.

    Science.gov (United States)

    Guo, Qi; Li, Hao; You, LiXing; Zhang, WeiJun; Zhang, Lu; Wang, Zhen; Xie, XiaoMing; Qi, Ming

    2015-04-15

    Polarization is one of the key parameters of light. Most optical detectors are intensity detectors that are insensitive to the polarization of light. A superconducting nanowire single photon detector (SNSPD) is naturally sensitive to polarization due to its nanowire structure. Previous studies focused on producing a polarization-insensitive SNSPD. In this study, by adjusting the width and pitch of the nanowire, we systematically investigate the preparation of an SNSPD with high polarization sensitivity. Subsequently, an SNSPD with a system detection efficiency of 12% and a polarization extinction ratio of 22 was successfully prepared.

  18. A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2015-01-01

    Dynamic reliability measures reliability of an engineered system considering time-variant operation condition and component deterioration. Due to high computational costs, conducting dynamic reliability analysis at an early system design stage remains challenging. This paper presents a confidence-based meta-modeling approach, referred to as double-loop adaptive sampling (DLAS), for efficient sensitivity-free dynamic reliability analysis. The DLAS builds a Gaussian process (GP) model sequentially to approximate extreme system responses over time, so that Monte Carlo simulation (MCS) can be employed directly to estimate dynamic reliability. A generic confidence measure is developed to evaluate the accuracy of dynamic reliability estimation while using the MCS approach based on developed GP models. A double-loop adaptive sampling scheme is developed to efficiently update the GP model in a sequential manner, by considering system input variables and time concurrently in two sampling loops. The model updating process using the developed sampling scheme can be terminated once the user defined confidence target is satisfied. The developed DLAS approach eliminates the computationally expensive sensitivity analysis process, thus substantially improving the efficiency of dynamic reliability analysis. Three case studies are used to demonstrate the efficacy of DLAS for dynamic reliability analysis. - Highlights: • Developed a novel adaptive sampling approach for dynamic reliability analysis. • Developed a new metric to quantify the accuracy of dynamic reliability estimation. • Developed a new sequential sampling scheme to efficiently update surrogate models. • Three case studies were used to demonstrate the efficacy of the new approach. • Case study results showed substantially enhanced efficiency with high accuracy.
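The core GP-surrogate-plus-MCS idea can be sketched without the adaptive double loop, which is omitted here. Everything below is invented for illustration: a one-dimensional limit state g(x) < 0 stands in for the extreme-over-time system response, and the GP has fixed hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented limit state: the system "fails" when g(x) < 0.
def g(x):
    return 3.0 - x

# Fit a numpy-only GP interpolant (RBF kernel, fixed length scale).
Xtr = np.linspace(-1.0, 5.0, 15)
ytr = g(Xtr)

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

K = rbf(Xtr, Xtr) + 1e-6 * np.eye(Xtr.size)   # jitter for conditioning
alpha = np.linalg.solve(K, ytr)

def gp_mean(x):
    # GP posterior mean, used as a cheap stand-in for the true limit state.
    return rbf(x, Xtr) @ alpha

# Monte Carlo simulation runs directly on the surrogate, as in the paper.
Xmc = rng.normal(0.0, 1.5, 200_000)
pf_surrogate = float(np.mean(gp_mean(Xmc) < 0.0))
pf_exact = float(np.mean(g(Xmc) < 0.0))
```

The point of the comparison is that the surrogate-based failure probability tracks the true one while each MCS evaluation costs only a small matrix-vector product instead of a full system simulation.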

  19. Radioecological sensitivity

    International Nuclear Information System (INIS)

    Howard, Brenda J.; Strand, Per; Assimakopoulos, Panayotis

    2003-01-01

    After the release of radionuclide into the environment it is important to be able to readily identify major routes of radiation exposure, the most highly exposed individuals or populations and the geographical areas of most concern. Radioecological sensitivity can be broadly defined as the extent to which an ecosystem contributes to an enhanced radiation exposure to Man and biota. Radioecological sensitivity analysis integrates current knowledge on pathways, spatially attributes the underlying processes determining transfer and thereby identifies the most radioecologically sensitive areas leading to high radiation exposure. This identifies where high exposure may occur and why. A framework for the estimation of radioecological sensitivity with respect to humans is proposed and the various indicators by which it can be considered have been identified. These are (1) aggregated transfer coefficients (Tag), (2) action (and critical) loads, (3) fluxes and (4) individual exposure of humans. The importance of spatial and temporal consideration of all these outputs is emphasized. Information on the extent of radionuclide transfer and exposure to humans at different spatial scales is needed to reflect the spatial differences which can occur. Single values for large areas, such as countries, can often mask large variation within the country. Similarly, the relative importance of different pathways can change with time and therefore assessments of radiological sensitivity are needed over different time periods after contamination. Radioecological sensitivity analysis can be used in radiation protection, nuclear safety and emergency preparedness when there is a need to identify areas that have the potential of being of particular concern from a risk perspective. Prior identification of radioecologically sensitive areas and exposed individuals improve the focus of emergency preparedness and planning, and contribute to environmental impact assessment for future facilities. 

  20. NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method

    International Nuclear Information System (INIS)

    Ogbomo, Henry; Hahn, Anke; Geiler, Janina; Michaelis, Martin; Doerr, Hans Wilhelm; Cinatl, Jindrich

    2006-01-01

    The measurement of natural killer (NK) cell toxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances, which could be toxic or radioactive. Effective killing of tumor cells was achieved at low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for the measurement of NK cell activity with small blood samples, such as those obtained from pediatric patients.
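Release-based assays such as the CLM conventionally report specific cytotoxicity relative to spontaneous and maximum release controls; the formula below is the standard one for release assays, with invented luminescence readings.

```python
# Standard percent-cytotoxicity formula for release assays (the CLM measures
# GAPDH release from injured target cells); the readings below are invented.
def percent_cytotoxicity(experimental, spontaneous, maximum):
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# Illustrative luminescence readings at one effector/target ratio.
value = percent_cytotoxicity(experimental=5200.0, spontaneous=1000.0,
                             maximum=9400.0)   # → 50.0% specific lysis
```

Subtracting spontaneous release from both numerator and denominator is what lets the assay work without pre-labeling the target cells.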

  1. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.

  2. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case, where the failure/repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO), which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis.

  3. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening approach with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the model’s outputs and to gain insights into predator–prey systems. Structure and results of the sensitivity analysis of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both of the models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
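The Morris screening idea can be sketched with elementary effects; the snippet below uses a simplified radial one-at-a-time design and an invented two-parameter model, not the PPP or NFM simulations.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # Invented toy model: output strongly driven by x[0], weakly by x[1],
    # with a small interaction term.
    return 10.0 * x[0] + 0.5 * x[1] + x[0] * x[1]

# Morris elementary effects: one-at-a-time steps of size delta through
# the unit hypercube, repeated from r random base points.
k, r, delta = 2, 50, 0.25
ee = [[] for _ in range(k)]
for _ in range(r):
    # Radial design: perturb each factor once from a common base point.
    x = rng.uniform(0.0, 1.0 - delta, k)
    y0 = model(x)
    for i in range(k):
        x_step = x.copy()
        x_step[i] += delta
        ee[i].append((model(x_step) - y0) / delta)

# mu* (mean of absolute elementary effects) ranks overall influence.
mu_star = [float(np.mean(np.abs(e))) for e in ee]
```

With only r × (k + 1) model runs, mu* separates influential from negligible parameters, which is why screening designs suit computationally heavy individual-based models.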

  4. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
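The advantage of nonparametric smoothing over linear regression claimed above can be seen in a small sketch: a hand-rolled locally weighted (LOESS-style) fit recovers a quadratic input/output relation that linear regression misses entirely. The data and bandwidth are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# A nonlinear input/output relation that a straight line cannot capture.
x = rng.uniform(-1.0, 1.0, 400)
y = x**2 + rng.normal(0.0, 0.02, 400)

def r_squared(y, y_hat):
    return float(1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))

# Ordinary linear regression: almost no explanatory power on this data.
r2_linear = r_squared(y, np.polyval(np.polyfit(x, y, 1), x))

# Locally weighted regression (the core LOESS idea): a weighted straight-line
# fit around each point, using Gaussian kernel weights.
def loess(x, y, bandwidth=0.2):
    X = np.column_stack([np.ones_like(x), x])
    y_hat = np.empty_like(y)
    for j, xj in enumerate(x):
        w = np.exp(-0.5 * ((x - xj) / bandwidth) ** 2)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted normal equations
        y_hat[j] = beta[0] + beta[1] * xj
    return y_hat

r2_loess = r_squared(y, loess(x, y))
```

In a sampling-based sensitivity analysis, the gap between the smoothed and linear R² for a given input is a direct signal of the nonlinear relationships that linear and rank regression miss.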

  5. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  6. Position sensitive detection of neutrons in high radiation background field.

    Science.gov (United States)

    Vavrik, D; Jakubek, J; Pospisil, S; Vacik, J

    2014-01-01

    We present the development of a high-resolution position sensitive device for the detection of slow neutrons in an environment of extremely high γ and e⁻ radiation background. We make use of a planar silicon pixelated (pixel size: 55 × 55 μm²) spectroscopic Timepix detector adapted for neutron detection by means of a very thin ¹⁰B converter placed onto the detector surface. We demonstrate that the electromagnetic radiation background can be discriminated from the neutron signal utilizing the fact that each particle type produces characteristic ionization tracks in the pixelated detector. Particular tracks can be distinguished by their 2D shape (in the detector plane) and spectroscopic response using single event analysis. A Cd sheet served as a thermal neutron stopper as well as an intense source of gamma rays and energetic electrons. Highly efficient discrimination was successful even at a very low neutron-to-electromagnetic background ratio of about 10⁻⁴.

  7. Performance of terahertz metamaterials as high-sensitivity sensor

    Science.gov (United States)

    He, Yanan; Zhang, Bo; Shen, Jingling

    2017-09-01

    A high-sensitivity sensor based on the resonant transmission characteristics of terahertz (THz) metamaterials was investigated, with the proposal and fabrication of rectangular bar arrays of THz metamaterials exhibiting a period of 180 μm on a 25 μm thick flexible polyimide. Varying the size of the metamaterial structure revealed that the length of the rectangular unit modulated the resonant frequency, which was verified by both experiment and simulation. The sensing characteristics upon varying the surrounding media in the sample were tested by simulation and experiment. Changing the surrounding medium from air to alcohol or oil produced resonant frequency redshifts of 80 GHz or 150 GHz, respectively, which indicates that the sensor possessed a high sensitivity of 667 GHz per unit of refractive index. Finally, the influence of the sample substrate thickness on the sensor sensitivity was investigated by simulation. These results may serve as a reference for future sensor design.

  8. Comparison of the sensitivity of surface downward longwave radiation to changes in water vapor at two high elevation sites

    International Nuclear Information System (INIS)

    Chen, Yonghua; Naud, Catherine M; Rangwala, Imtiaz; Landry, Christopher C; Miller, James R

    2014-01-01

    Among the potential reasons for enhanced warming rates in many high elevation regions is the nonlinear relationship between surface downward longwave radiation (DLR) and specific humidity (q). In this study we use ground-based observations at two neighboring high elevation sites in Southwestern Colorado that have different local topography and are 1.3 km apart horizontally and 348 m vertically. We examine the spatial consistency of the sensitivities (partial derivatives) of DLR with respect to changes in q, and the sensitivities are obtained from the Jacobian matrix of a neural network analysis. Although the relationship between DLR and q is the same at both sites, the sensitivities are higher when q is smaller, which occurs more frequently at the higher elevation site. There is a distinct hourly distribution in the sensitivities at both sites especially for high sensitivity cases, although the range is greater at the lower elevation site. The hourly distribution of the sensitivities relates to that of q. Under clear skies during daytime, q is similar between the two sites, however under cloudy skies or at night, it is not. This means that the DLR–q sensitivities are similar at the two sites during daytime but not at night, and care must be exercised when using data from one site to infer the impact of water vapor feedbacks at another site, particularly at night. Our analysis suggests that care should be exercised when using the lapse rate adjustment to infill high frequency data in a complex topographical region, particularly when one of the stations is subject to cold air pooling as found here. (letter)
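The Jacobian-based sensitivity extraction described above can be sketched for a tiny fixed-weight network. All weights and inputs below are invented; the actual study trained a neural network on station observations before reading sensitivities off its Jacobian.

```python
import numpy as np

# Tiny fixed-weight network standing in for the trained DLR(q, T) model;
# every number below is invented for illustration.
W1 = np.array([[0.8, -0.4],
               [0.3,  0.9],
               [-0.5, 0.2]])
b1 = np.array([0.1, -0.2, 0.05])
W2 = np.array([1.2, -0.7, 0.5])
b2 = 0.3

def net(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):
    # Analytic d(output)/d(inputs) via the chain rule; this is the row of
    # the Jacobian matrix from which the sensitivities are read.
    dh = 1.0 - np.tanh(W1 @ x + b1) ** 2     # derivative of tanh
    return (W2 * dh) @ W1

x0 = np.array([0.5, -0.1])                   # e.g. (specific humidity, temperature)
sens = jacobian(x0)

# Cross-check the first sensitivity with a central finite difference.
h = 1e-6
fd0 = (net(x0 + [h, 0.0]) - net(x0 - [h, 0.0])) / (2 * h)
```

Because the Jacobian is evaluated at each input state, the sensitivities vary with q, which is exactly the state dependence (higher sensitivity at low q) that the study exploits.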

  9. Comparison of the Sensitivity of Surface Downward Longwave Radiation to Changes in Water Vapor at Two High Elevation Sites

    Science.gov (United States)

    Chen, Yonghua; Naud, Catherine M.; Rangwala, Imtiaz; Landry, Christopher C.; Miller, James R.

    2014-01-01

    Among the potential reasons for enhanced warming rates in many high elevation regions is the nonlinear relationship between surface downward longwave radiation (DLR) and specific humidity (q). In this study we use ground-based observations at two neighboring high elevation sites in Southwestern Colorado that have different local topography and are 1.3 kilometers apart horizontally and 348 meters vertically. We examine the spatial consistency of the sensitivities (partial derivatives) of DLR with respect to changes in q, and the sensitivities are obtained from the Jacobian matrix of a neural network analysis. Although the relationship between DLR and q is the same at both sites, the sensitivities are higher when q is smaller, which occurs more frequently at the higher elevation site. There is a distinct hourly distribution in the sensitivities at both sites especially for high sensitivity cases, although the range is greater at the lower elevation site. The hourly distribution of the sensitivities relates to that of q. Under clear skies during daytime, q is similar between the two sites, however under cloudy skies or at night, it is not. This means that the DLR-q sensitivities are similar at the two sites during daytime but not at night, and care must be exercised when using data from one site to infer the impact of water vapor feedbacks at another site, particularly at night. Our analysis suggests that care should be exercised when using the lapse rate adjustment to infill high frequency data in a complex topographical region, particularly when one of the stations is subject to cold air pooling as found here.

  10. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM and a sensitivity analysis was carried out. Then optimization for the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated sensitivity factors are useful in the planning stage of support design. (author)

  11. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

    Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters

  12. Fast and sensitive trace analysis of malachite green using a surface-enhanced Raman microfluidic sensor.

    Science.gov (United States)

    Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee

    2007-05-08

    A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under the optimal flow velocity, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm⁻¹ in its SERS spectrum. The limit of detection using the SERS microfluidic sensor was found to be below the 1-2 ppb level; this low detection limit is comparable to that of LC-MS detection. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for the highly sensitive trace analysis of MG in water.
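
A peak-height calibration of the kind described reduces to a linear fit plus a detection-limit estimate. The sketch below is a generic illustration with invented numbers — the concentrations, peak heights, and noise are hypothetical, not the paper's data — using the common 3.3·σ/slope convention for the limit of detection.

```python
import numpy as np

# Hypothetical MG standards (ppb) and SERS peak heights at 1615 cm^-1 (a.u.)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
noise = np.array([0.4, -0.3, 0.2, -0.5, 0.3, -0.1])  # assumed measurement noise
peak = 5.0 + 12.0 * conc + noise                     # assumed linear response

slope, intercept = np.polyfit(conc, peak, 1)
residuals = peak - (slope * conc + intercept)
sigma = residuals.std(ddof=2)     # residual standard deviation of the fit
lod = 3.3 * sigma / slope         # limit of detection, 3.3*sigma/slope convention
```

With real spectra the peak height would first be extracted from a baseline-corrected band at 1615 cm⁻¹; the calibration and LOD arithmetic stays the same.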

  13. Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion

    International Nuclear Information System (INIS)

    Garcia-Cabrejo, Oscar; Valocchi, Albert

    2014-01-01

    Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are applied to each output variable separately, leading to a large number of sensitivity indices with a high degree of redundancy, which makes the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: the output decomposition approach [9] and the covariance decomposition approach [14], but they are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for an efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE.
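
The "analytical expressions from PCE coefficients" idea can be sketched for a scalar output: with an orthonormal PCE basis, each squared coefficient contributes to the variance, and grouping terms by which inputs their multi-index involves yields the Sobol indices directly, with no further sampling. The coefficients below are hypothetical, and an orthonormal basis is assumed.

```python
def sobol_from_pce(coeffs):
    """First-order and total Sobol indices from orthonormal-PCE coefficients.

    coeffs maps a multi-index tuple (polynomial degree per input) to its
    coefficient; the all-zero multi-index is the mean and carries no variance.
    """
    d = len(next(iter(coeffs)))
    var = sum(c * c for a, c in coeffs.items() if any(a))
    first, total = [], []
    for i in range(d):
        fi = sum(c * c for a, c in coeffs.items()
                 if a[i] > 0 and all(a[j] == 0 for j in range(d) if j != i))
        ti = sum(c * c for a, c in coeffs.items() if a[i] > 0)
        first.append(fi / var)
        total.append(ti / var)
    return first, total

# Hypothetical 2-input expansion: mean, x1, x2, x1*x2 interaction, x1^2 term
pce = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 0.5, (2, 0): 0.3}
S, T = sobol_from_pce(pce)
```

For multivariate output the same bookkeeping is applied per output component (or per coefficient of an orthogonal output decomposition), which is where the computational saving reported by the paper comes from.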

  14. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    Science.gov (United States)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model, BIOME-BGC, was run for sensitivity analysis to assess the effect of ecophysiological parameters on the net primary production (NPP) of dry tropical forest of India. The sensitivity test revealed that forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). These parameters therefore need more precision and attention during estimation and observation in field studies.
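
The kind of parameter screening described — perturb one ecophysiological parameter and observe the NPP response — can be expressed as a normalized one-at-a-time sensitivity index. The sketch below uses a deliberately simplified light-use-efficiency stand-in: the model form, parameter names, and values are illustrative assumptions, not the actual BIOME-BGC equations.

```python
import math

def toy_npp(p):
    """Toy NPP model: light-use efficiency times absorbed PAR (Beer's law)."""
    lai = p["sla"] * p["leaf_c"]                       # leaf area index
    apar = p["par"] * (1.0 - math.exp(-p["k"] * lai))  # absorbed PAR
    return p["eps"] * apar

base = {"k": 0.5, "sla": 30.0, "leaf_c": 0.15, "par": 800.0, "eps": 0.012}

def relative_sensitivity(name, frac=0.01):
    """Normalized OAT index: (dNPP/NPP) / (dp/p) for a +1% perturbation."""
    pert = dict(base)
    pert[name] = base[name] * (1.0 + frac)
    return (toy_npp(pert) - toy_npp(base)) / toy_npp(base) / frac

S = {name: relative_sensitivity(name) for name in base}
```

A value near 1 means the output responds proportionally (here eps, which enters linearly); parameters inside the saturating exponential, such as k, score lower.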

  15. Nanowire-templated microelectrodes for high-sensitivity pH detection

    DEFF Research Database (Denmark)

    Antohe, V.A.; Radu, Adrian; Mátéfi-Tempfli, Mária

    2009-01-01

    A highly sensitive pH capacitive sensor has been designed by confined growth of vertically aligned nanowire arrays on interdigitated microelectrodes. The active surface of the device has been functionalized with an electrochemical pH transducer (polyaniline). We easily tune the device features...... by combining lithographic techniques with electrochemical synthesis. The reported electrical LC resonance measurements show considerable sensitivity enhancement compared to conventional capacitive pH sensors realized with microfabricated interdigitated electrodes. The sensitivity can be easily improved...

  16. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs, but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantify the involvement of eosinophils in inflammation have been only based on cell counting, we developed a new method for the cell‐independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over 16 weeks observation time. The method may be valuable for the cell‐independent segmentation of immunostaining in other applications as well.

  17. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k_eff sensitivity allowed establishing a priority list of nuclides for which uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, ⁵⁶Fe and ²³⁸Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)

  18. Predicting the fate of micropollutants during wastewater treatment: Calibration and sensitivity analysis.

    Science.gov (United States)

    Baalbaki, Zeina; Torfs, Elena; Yargeau, Viviane; Vanrolleghem, Peter A

    2017-12-01

    The presence of micropollutants in the environment and their toxic impacts on the aquatic environment have raised concern about their inefficient removal in wastewater treatment plants. In this study, the fate of micropollutants of four different classes was simulated in a conventional activated sludge plant using a bioreactor micropollutant fate model coupled to a settler model. The latter was based on the Bürger-Diehl model, extended for the first time to include micropollutant fate processes. Calibration of model parameters was completed by matching modelling results with full-scale measurements (i.e. including aqueous and particulate phase concentrations of micropollutants) obtained from a 4-day sampling campaign. Modelling results showed that further biodegradation takes place in the sludge blanket of the settler for the highly biodegradable caffeine, underlining the need for a reactive settler model. The adopted Monte Carlo based calibration approach also provided an overview of the model's global sensitivity to the parameters. This analysis showed that for each micropollutant, and according to the dominant fate process, a different set of one or more parameters had a significant impact on the model fit, justifying the selection of parameter subsets for model calibration. A dynamic local sensitivity analysis was also performed with the calibrated parameters. This analysis supported the conclusions from the global sensitivity analysis and provided guidance for future sampling campaigns. This study expands the understanding of micropollutant fate models when applied to different micropollutants, in terms of global and local sensitivity to model parameters, as well as the identifiability of the parameters.
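
A cheap Monte Carlo global-sensitivity screen of the kind used alongside such calibrations can be built from rank correlations between sampled parameters and the model output. The sketch below applies it to a toy micropollutant removal model — first-order biodegradation plus sorption to solids — where the model form, parameter names, and ranges are illustrative assumptions, not the paper's fate model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500
k_bio = rng.uniform(0.1, 5.0, n)    # biodegradation rate constant (1/h), dominant
k_d = rng.uniform(0.0, 0.001, n)    # sorption coefficient (L/mg), weaker
eps = rng.uniform(0.9, 1.1, n)      # nuisance factor (e.g., influent variability)
tau, tss = 6.0, 3000.0              # fixed retention time (h) and TSS (mg/L)

# Effluent fraction: biodegradation over tau, then partitioning to solids
c_out = eps * np.exp(-k_bio * tau) / (1.0 + k_d * tss)

r_kbio, _ = spearmanr(k_bio, c_out)   # strong negative rank correlation
r_kd, _ = spearmanr(k_d, c_out)
r_eps, _ = spearmanr(eps, c_out)
```

Parameters whose rank correlation is near zero (here the nuisance factor) would be dropped from the calibration subset, mirroring the paper's argument for micropollutant-specific parameter subsets.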

  19. Sensitive Spectroscopic Analysis of Biomarkers in Exhaled Breath

    Science.gov (United States)

    Bicer, A.; Bounds, J.; Zhu, F.; Kolomenskii, A. A.; Kaya, N.; Aluauee, E.; Amani, M.; Schuessler, H. A.

    2018-06-01

    We have developed a novel optical setup based on a high-finesse cavity and absorption laser spectroscopy in the near-IR spectral region. In pilot experiments, spectrally resolved absorption measurements of biomarkers in exhaled breath, such as methane and acetone, were carried out using cavity ring-down spectroscopy (CRDS). With a 172-cm-long cavity, an effective optical path of 132 km was achieved. The CRDS technique is well suited for such measurements due to its high sensitivity and good spectral resolution. Detection limits of 8 ppbv for methane and 2.1 ppbv for acetone with spectral sampling of 0.005 cm⁻¹ were achieved, which allowed analysis of multicomponent gas mixtures and observation of the absorption peaks of ¹²CH₄ and ¹³CH₄. Further improvements of the technique have the potential to realize diagnostics of health conditions based on a multicomponent analysis of breath samples.
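
The relationship between cavity length, effective path, and ring-down time is simple to sketch. The numbers below take the reported 172 cm cavity and 132 km effective path; the 5% ring-down shortening by an absorber is a hypothetical illustration, and the absorption coefficient follows the standard CRDS relation α = (1/c)(1/τ − 1/τ₀).

```python
# CRDS back-of-envelope: effective path, ring-down time, absorption coefficient
c = 299_792_458.0      # speed of light (m/s)
l_cavity = 1.72        # cavity length (m), as reported
l_eff = 132_000.0      # effective optical path (m), as reported

n_passes = l_eff / l_cavity   # number of cavity traversals (~77,000)
tau0 = l_eff / c              # empty-cavity ring-down time (s)

tau = 0.95 * tau0             # hypothetical ring-down with absorber present
alpha = (1.0 / c) * (1.0 / tau - 1.0 / tau0)   # absorption coefficient (1/m)
```

The ~0.44 ms empty-cavity ring-down implied by the reported path is what makes CRDS immune to laser intensity noise: only the decay time, not the absolute intensity, carries the absorption information.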

  20. Chemically Designed Metallic/Insulating Hybrid Nanostructures with Silver Nanocrystals for Highly Sensitive Wearable Pressure Sensors.

    Science.gov (United States)

    Kim, Haneun; Lee, Seung-Wook; Joh, Hyungmok; Seong, Mingi; Lee, Woo Seok; Kang, Min Su; Pyo, Jun Beom; Oh, Soong Ju

    2018-01-10

    With the increase in interest in wearable tactile pressure sensors for e-skin, research on nanostructures that achieve high sensitivity has been actively conducted. However, limitations such as complex fabrication processes using expensive equipment still exist. Herein, simple lithography-free techniques to develop pyramid-like metal/insulator hybrid nanostructures utilizing nanocrystals (NCs) are demonstrated. Ligand-exchanged and unexchanged silver NC thin films are used as the metallic and insulating components, respectively. The interfaces of each NC layer are chemically engineered to create discontinuous insulating layers, i.e., spacers for improved sensitivity, and eventually to realize fully solution-processed pressure sensors. Device performance analysis with structural, chemical, and electronic characterization and a conductive atomic force microscopy study reveals that the hybrid nanostructure based pressure sensor shows an enhanced sensitivity of higher than 500 kPa⁻¹, reliability, and low power consumption over a wide pressure sensing range. Nano-/micro-hierarchical structures are also designed by combining hybrid nanostructures with conventional microstructures, exhibiting a further enhanced sensing range and achieving a record sensitivity of 2.72 × 10⁴ kPa⁻¹. Finally, all-solution-processed pressure sensor arrays with high pixel density, capable of detecting delicate signals with high spatial selectivity much better than the human tactile threshold, are introduced.

  1. Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics

    Energy Technology Data Exchange (ETDEWEB)

    Tsourtis, Anastasios, E-mail: tsourtis@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, Crete (Greece); Pantazis, Yannis, E-mail: pantazis@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Harmandaris, Vagelis, E-mail: harman@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, and Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-70013 Heraklion, Crete (Greece)

    2015-07-07

    In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous time and continuous space Markov processes represented by stochastic differential equations. Particularly, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method on two different stochastic molecular systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with parameter sensitivities on three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
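
The central quantity, the relative entropy rate between path distributions, is easiest to see in a discrete-state analogue. The sketch below computes the RER between two Markov transition matrices differing by a small parameter change — a finite-chain stand-in for the Langevin path-space setting of the paper, with the chain and its parametrization invented for illustration.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a transition matrix (left eigenvector)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def rer(P, Q):
    """Relative entropy rate: sum_x pi_P(x) sum_y P(x,y) log(P(x,y)/Q(x,y))."""
    pi = stationary(P)
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

def chain(theta):
    """Two-state chain parametrized by theta (hypothetical example)."""
    return np.array([[1.0 - theta, theta], [0.5, 0.5]])

dtheta = 0.01
P, Q = chain(0.30), chain(0.30 + dtheta)
# RER grows quadratically in dtheta, so RER/dtheta^2 plays the role of a
# one-parameter pathwise Fisher information, mirroring the RER/FIM connection.
sens = rer(P, Q) / dtheta ** 2
```

For Langevin dynamics the same quantities become ergodic averages of force-field derivatives along a single trajectory, which is the computational advantage the paper emphasizes.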

  2. Testing of the derivative method and Kruskal-Wallis technique for sensitivity analysis of SYVAC

    International Nuclear Information System (INIS)

    Prust, J.O.; Edwards, H.H.

    1985-04-01

    The Kruskal-Wallis method of one-way analysis of variance by ranks has proved successful in identifying input parameters which have an important influence on dose. This technique was extended to test for first order interactions between parameters. In view of a number of practical difficulties and the computing resources required to carry out a large number of runs, this test is not recommended for detecting interactions between parameters. The derivative method of sensitivity analysis examines the partial derivative values of each input parameter with dose at various points across the parameter range. Important input parameters are associated with high derivatives and the results agreed well with previous sensitivity studies. The derivative values also provided information on the data generation distributions to be used for the input parameters in order to concentrate sampling in the high dose region of the parameter space to improve the sampling efficiency. Furthermore, the derivative values provided information on parameter interactions, the feasibility of developing a high dose algorithm and formed the basis for developing a regression equation. (author)
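
The Kruskal-Wallis screen described above can be sketched as: bin the sampled values of one input parameter into groups, then test whether the dose distributions differ across the bins. The code below is an illustration on synthetic data — the response model and parameter names are invented, not SYVAC's.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(42)
n = 300
x_sens = rng.uniform(0.0, 1.0, n)    # parameter the dose actually depends on
x_noise = rng.uniform(0.0, 1.0, n)   # parameter with no influence
dose = 10.0 * x_sens ** 2 + rng.normal(0.0, 0.5, n)

def kw_pvalue(x, y, bins=5):
    """Kruskal-Wallis p-value for y grouped by quantile bins of x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    groups = [y[idx == i] for i in range(bins)]
    return kruskal(*groups).pvalue

p_sens = kw_pvalue(x_sens, dose)     # tiny p-value: influential parameter
p_noise = kw_pvalue(x_noise, dose)   # large p-value: no influence detected
```

Parameters with small p-values are flagged as important, which is the one-way screening the report found successful; the interaction extension it cautions against would require testing bins of parameter pairs, multiplying the group count.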

  3. Sensitivity of the IceCube detector for ultra-high energy electron neutrino events

    International Nuclear Information System (INIS)

    Voigt, Bernhard

    2008-01-01

    IceCube is a neutrino telescope currently under construction in the glacial ice at South Pole. At the moment half of the detector is installed, when completed it will instrument 1 km³ of ice providing a unique experimental setup to detect high energy neutrinos from astrophysical sources. In this work the sensitivity of the complete IceCube detector for a diffuse electron-neutrino flux is analyzed, with a focus on energies above 1 PeV. Emphasis is put on the correct simulation of the energy deposit of electromagnetic cascades from charged-current electron-neutrino interactions. Since existing parameterizations lack the description of suppression effects at high energies, a simulation of the energy deposit of electromagnetic cascades with energies above 1 PeV is developed, including cross sections which account for the LPM suppression of bremsstrahlung and pair creation. An attempt is made to reconstruct the direction of these elongated showers. The analysis presented here makes use of the full charge waveform recorded with the data acquisition system of the IceCube detector. It introduces new methods to discriminate efficiently between the background of atmospheric muons, including muon bundles, and cascade signal events from electron-neutrino interactions. Within one year of operation of the complete detector a sensitivity of 1.5×10⁻⁸ E⁻² GeV s⁻¹ sr⁻¹ cm⁻² is reached, which is valid for a diffuse electron neutrino flux proportional to E⁻² in the energy range from 16 TeV to 13 PeV. Sensitivity is defined as the upper limit that could be set in absence of a signal at 90% confidence level. Including all neutrino flavors in this analysis, an improvement of at least one order of magnitude is expected, reaching the anticipated performance of a diffuse muon analysis. (orig.)

  4. Sensitivity of the IceCube detector for ultra-high energy electron neutrino events

    Energy Technology Data Exchange (ETDEWEB)

    Voigt, Bernhard

    2008-07-16

    IceCube is a neutrino telescope currently under construction in the glacial ice at South Pole. At the moment half of the detector is installed, when completed it will instrument 1 km³ of ice providing a unique experimental setup to detect high energy neutrinos from astrophysical sources. In this work the sensitivity of the complete IceCube detector for a diffuse electron-neutrino flux is analyzed, with a focus on energies above 1 PeV. Emphasis is put on the correct simulation of the energy deposit of electromagnetic cascades from charged-current electron-neutrino interactions. Since existing parameterizations lack the description of suppression effects at high energies, a simulation of the energy deposit of electromagnetic cascades with energies above 1 PeV is developed, including cross sections which account for the LPM suppression of bremsstrahlung and pair creation. An attempt is made to reconstruct the direction of these elongated showers. The analysis presented here makes use of the full charge waveform recorded with the data acquisition system of the IceCube detector. It introduces new methods to discriminate efficiently between the background of atmospheric muons, including muon bundles, and cascade signal events from electron-neutrino interactions. Within one year of operation of the complete detector a sensitivity of 1.5×10⁻⁸ E⁻² GeV s⁻¹ sr⁻¹ cm⁻² is reached, which is valid for a diffuse electron neutrino flux proportional to E⁻² in the energy range from 16 TeV to 13 PeV. Sensitivity is defined as the upper limit that could be set in absence of a signal at 90% confidence level. Including all neutrino flavors in this analysis, an improvement of at least one order of magnitude is expected, reaching the anticipated performance of a diffuse muon analysis. (orig.)

  5. An introduction to sensitivity analysis for unobserved confounding in nonexperimental prevention research.

    Science.gov (United States)

    Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A

    2013-12-01

    Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.
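
One widely used quantity in this family, and a convenient concrete example, is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unobserved confounder would need with both exposure and outcome to fully explain away an observed risk ratio. It is offered here as an illustration of the genre; the paper surveys seven techniques and the E-value is not necessarily among them.

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio RR (VanderWeele & Ding, 2017).

    Protective associations (RR < 1) are handled by taking the inverse first.
    """
    rr = max(rr, 1.0 / rr)
    return rr + math.sqrt(rr * (rr - 1.0))

# e_value(2.0) = 2 + sqrt(2) ~ 3.41: a confounder would need risk ratios of
# about 3.41 with both exposure and outcome to explain away an observed RR of 2.
```

A large E-value relative to plausible confounder strengths supports the kind of "relatively robust" conclusion the maternal-suicide example reaches.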

  6. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the strict solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
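
A closely related standard trick for estimating the "strict" solution and the numerical error from coarse calculations is Richardson extrapolation, shown here as an illustration of the idea rather than as the authors' sensitivity-based method: combine two discretizations of known order so the leading error term cancels.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals (error O(h^2))."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

f, exact = math.exp, math.e - 1.0     # integral of e^x over [0, 1]
i_coarse = trapezoid(f, 0.0, 1.0, 8)
i_fine = trapezoid(f, 0.0, 1.0, 16)

# Halving h divides the O(h^2) error by 4, so (4*I_fine - I_coarse)/3
# cancels the leading term and estimates the grid-converged solution.
i_richardson = (4.0 * i_fine - i_coarse) / 3.0
err_estimate = abs(i_fine - i_richardson)   # a posteriori error indicator
```

The error indicator quantifies the reliability of the fine-grid solution without ever computing the exact answer, which is the same goal the sensitivity-based approach pursues.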

  7. Supercritical extraction of oleaginous: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Santos M.M.

    2000-01-01

    The economy has become global and competitive; thus vegetable oil extraction industries must move toward minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as the solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oilseed extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. In this work, a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that strategies for high-performance operation can be proposed.
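
The two-level factorial design used for the parametric sensitivity study can be sketched generically: run the model at every ±1 combination of coded factors and read each main effect as the difference of means between the high and low settings. The response function below is an invented stand-in for the extraction model.

```python
import itertools
import numpy as np

# Full 2^3 factorial design in coded units (-1 = low level, +1 = high level)
design = np.array(list(itertools.product([-1, 1], repeat=3)))

def response(x):
    """Hypothetical extraction-yield response with one interaction term."""
    return 50.0 + 8.0 * x[0] + 2.0 * x[1] + 0.5 * x[2] + 1.5 * x[0] * x[1]

y = np.array([response(x) for x in design])

# Main effect of factor i: mean response at high level minus mean at low level
effects = [y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i in range(3)]
```

In a full factorial the interaction term averages out of the main effects, so the effects recover twice the coded coefficients; interaction effects would be read off the products of factor columns in the same way.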

  8. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
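
The core evaluation behind such tools — turning a Boolean top event expressed as a union of minimal cut sets into a probability — can be sketched for independent basic events. The cut sets and probabilities below are invented; the code compares exact enumeration with the common minimal-cut-set upper bound. This is a generic illustration, not TEMAC's matrix algorithm.

```python
import itertools

# Hypothetical basic-event probabilities and minimal cut sets
prob = {"A": 0.01, "B": 0.02, "C": 0.005}
cut_sets = [{"A", "B"}, {"C"}]

def top_event(states):
    """Top event occurs if every event in some cut set has occurred."""
    return any(all(states[e] for e in cs) for cs in cut_sets)

# Exact probability by enumerating all basic-event states (independence assumed)
events = sorted(prob)
exact = 0.0
for outcome in itertools.product([False, True], repeat=len(events)):
    states = dict(zip(events, outcome))
    if top_event(states):
        w = 1.0
        for e in events:
            w *= prob[e] if states[e] else 1.0 - prob[e]
        exact += w

# Minimal-cut-set upper bound: 1 - prod(1 - P(cut set))
mcub = 1.0
for cs in cut_sets:
    p_cs = 1.0
    for e in cs:
        p_cs *= prob[e]
    mcub *= 1.0 - p_cs
mcub = 1.0 - mcub
```

Here the bound coincides with the exact value because the two cut sets share no events; with shared events it becomes a (usually tight) overestimate, and enumeration gives way to the Monte Carlo evaluation the program description mentions.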

  9. Phase sensitive spectral domain interferometry for label free biomolecular interaction analysis and biosensing applications

    Science.gov (United States)

    Chirvi, Sajal

    Biomolecular interaction analysis (BIA) plays a vital role in a wide variety of fields, which include biomedical research, the pharmaceutical industry, medical diagnostics, and the biotechnology industry. Study and quantification of interactions between natural biomolecules (proteins, enzymes, DNA) and artificially synthesized molecules (drugs) are routinely done using various labeled and label-free BIA techniques. Labeled BIA techniques (chemiluminescence, fluorescence, radioactivity) suffer from steric hindrance of labels on the interaction site, difficulty of attaching labels to molecules, and higher cost and time of assay development. Label-free techniques with real-time detection capabilities have demonstrated advantages over traditional labeled techniques. The gold standard for label-free BIA is surface plasmon resonance (SPR), which detects and quantifies the changes in refractive index of the ligand-analyte complex molecule with high sensitivity. Although SPR is a highly sensitive BIA technique, it requires custom-made sensor chips and is not well suited for the highly multiplexed BIA required in high throughput applications. Moreover, implementation of SPR on various biosensing platforms is limited. In this research work, spectral domain phase sensitive interferometry (SD-PSI) has been developed for label-free BIA and biosensing applications to address limitations of SPR and other label-free techniques. One distinct advantage of SD-PSI compared to other label-free techniques is that it does not require use of custom fabricated biosensor substrates. Laboratory grade, off-the-shelf glass or plastic substrates of suitable thickness with proper surface functionalization are used as biosensor chips. SD-PSI is tested on four separate BIA and biosensing platforms, which include multi-well plate, flow cell, fiber probe with integrated optics and fiber tip biosensor. Sensitivity of 33 ng/ml for anti-IgG is achieved using the multi-well platform. Principle of coherence multiplexing for multi

  10. Sensitivity and uncertainty analysis applied to a repository in rock salt

    International Nuclear Information System (INIS)

    Polle, A.N.

    1996-12-01

    This document describes the sensitivity and uncertainty analysis with UNCSAM, as applied to a repository in rock salt for the EVEREST project. UNCSAM is a dedicated software package for sensitivity and uncertainty analysis, which was already used within the preceding PROSA project. The use of UNCSAM provides a flexible interface to EMOS ECN by substituting the sampled values into the various input files to be used by EMOS ECN; the model calculations for this repository were performed with the EMOS ECN code. Preceding the sensitivity and uncertainty analysis, a number of preparations were carried out to provide EMOS ECN with the probabilistic input data. For post-processing the EMOS ECN results, the characteristic output signals were processed. For the sensitivity and uncertainty analysis with UNCSAM, the stochastic input, i.e. the sampled values, and the output of the various EMOS ECN runs were analyzed. (orig.)

  11. Aluminum nanocantilevers for high sensitivity mass sensors

    DEFF Research Database (Denmark)

    Davis, Zachary James; Boisen, Anja

    2005-01-01

    We have fabricated Al nanocantilevers using a simple, one mask contact UV lithography technique with lateral and vertical dimensions under 500 and 100 nm, respectively. These devices are demonstrated as highly sensitive mass sensors by measuring their dynamic properties. Furthermore, it is shown ...

  12. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and for execution of the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The two-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
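
The screening stage can be sketched with elementary effects in the style of Morris — here a simplified radial one-at-a-time variant rather than the classic trajectory design, and certainly not GastroPlus's implementation; the test function and parameter count are invented.

```python
import numpy as np

def model(x):
    """Hypothetical 4-parameter response with very different sensitivities."""
    return 10.0 * x[0] + 5.0 * x[1] ** 2 + 0.1 * x[2] + 0.01 * x[3]

rng = np.random.default_rng(1)
d, r, delta = 4, 20, 0.25
ee = np.zeros((r, d))
for t in range(r):
    x = rng.uniform(0.0, 0.75, d)    # keep x + delta inside the unit cube
    fx = model(x)
    for i in range(d):
        xp = x.copy()
        xp[i] += delta
        ee[t, i] = (model(xp) - fx) / delta   # elementary effect of factor i

mu_star = np.abs(ee).mean(axis=0)   # Morris screening measure: mean |EE|
sigma = ee.std(axis=0)              # spread: signal of nonlinearity/interaction
```

Factors with small mu_star are screened out before the far more expensive Sobol stage, which is exactly the cost saving the two-stage framework exploits.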

  13. Sensitivity analysis for modules for various biosphere types

    International Nuclear Information System (INIS)

    Karlsson, Sara; Bergstroem, U.; Rosen, K.

    2000-09-01

    This study presents the results of a sensitivity analysis for the modules developed earlier for calculation of ecosystem-specific dose conversion factors (EDFs). The report also includes a comparison between the probabilistically calculated mean values of the EDFs and values obtained in deterministic calculations. An overview of the distribution of radionuclides between different environmental compartments in the models is also presented. The radionuclides included in the study were ³⁶Cl, ⁵⁹Ni, ⁹³Mo, ¹²⁹I, ¹³⁵Cs, ²³⁷Np and ²³⁹Pu, selected to represent various behaviours in the biosphere; some are of particular importance from the dose point of view. The deterministic and probabilistic EDFs showed good agreement for most nuclides and modules. Exceptions occurred if very skew distributions were used for parameters of importance for the results. Only a minor amount of the released radionuclides was present in the model compartments for all modules, except for the agricultural land module. The differences between the radionuclides were not pronounced, which indicates that nuclide-specific parameters were of minor importance for the retention of radionuclides over the simulated time period of 10 000 years in those modules. The results from the agricultural land module showed a different pattern. Large amounts of the radionuclides were present in the solid fraction of the saturated soil zone. The high retention within this compartment makes the zone a potential source for future exposure. Differences between the nuclides due to element-specific Kd-values could be seen. The amount of radionuclides present in the upper soil layer, which is the most critical zone for exposure to humans, was less than 1% for all studied radionuclides. The sensitivity analysis showed that the physical/chemical parameters were the most important in most modules, in contrast to the dominance of biological parameters in the uncertainty analysis. The only exception was the well module, where

  14. The Pajarito Monitor: a high-sensitivity monitoring system for highly enriched uranium

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.; Garcia, C.; Martinez, J.

    1984-01-01

    The Pajarito Monitor for Special Nuclear Material is a high-sensitivity gamma-ray monitoring system for detecting small quantities of highly enriched uranium transported by pedestrians or motor vehicles. The monitor consists of two components: a walk-through personnel monitor and a vehicle monitor. The personnel monitor has a plastic-scintillator detector portal, a microwave occupancy monitor, and a microprocessor control unit that measures the radiation intensity during background and monitoring periods to detect transient diversion signals. The vehicle monitor examines stationary motor vehicles while the vehicle's occupants pass through the personnel portal to exchange their badges. The vehicle monitor has four groups of large plastic scintillators that scan the vehicle from above and below. Its microprocessor control unit measures separate radiation intensities in each detector group. Vehicle occupancy is sensed by a highway traffic detection system. Each monitor's controller is responsible for detecting diversion as well as serving as a calibration and trouble-shooting aid. Diversion signals are detected by a sequential probability ratio hypothesis test that minimizes the monitoring time in the vehicle monitor and adapts itself well to variations in individual passage speed in the personnel monitor. Designed to be highly sensitive to diverted enriched uranium, the monitoring system also exhibits exceptional sensitivity for plutonium

  15. An Underwater Acoustic Vector Sensor with High Sensitivity and Broad Band

    Directory of Open Access Journals (Sweden)

    Hu Zhang

    2014-05-01

    Full Text Available Recently, acoustic vector sensors that use accelerometers as sensing elements have been widely used in underwater acoustic engineering, but their sensitivity in the low-frequency band is usually lower than -220 dB. In this paper, using a piezoelectric trilaminar sensing element optimized for low frequencies, we designed a high-sensitivity, internally mounted ICP piezoelectric accelerometer as the sensing element. Through structure optimization, we made a high-sensitivity, broadband, small-scale vector sensor. The working band is 10-2000 Hz, the sound pressure sensitivity is -185 dB (at 100 Hz), the outer diameter is 42 mm, and the length is 80 mm.

  16. Sensitivity analysis of dynamic characteristic of the fixture based on design variables

    International Nuclear Information System (INIS)

    Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa

    2002-01-01

    The sensitivity of structural natural frequencies to structural design parameters is dealt with. A typical fixture for vibration tests is designed. Using the I-DEAS finite element programs, the sensitivity of its natural frequency to design parameters is analyzed by the matrix perturbation method. The research results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures

  17. Justification of investment projects of biogas systems by the sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Perebijnos Vasilij Ivanovich

    2015-06-01

    Full Text Available Methodical features of the application of sensitivity analysis to the evaluation of biogas plant investment projects are shown in the article. Risk factors of these investment projects have been studied. A methodical basis for the use of sensitivity analysis and the calculation of the elasticity coefficient has been worked out. Sensitivity analysis and elasticity coefficients were calculated for three biogas plant projects, which differ in the direction of biogas transformation: use in a co-generation plant, use of biomethane as motor fuel, and sale of the resulting carbon dioxide as a marketable product. Factors strongly affecting project efficiency have been revealed.

  18. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Abstract Background Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results We present four techniques, derivative approximation (DA, polynomial approximation (PA, Gauss-Hermite integration (GHI, and orthonormal Hermite approximation (OHA, for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. 
It turns out that the computational cost of the

  19. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.

  20. Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  1. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

    This paper presents a general sensitivity equation method (SEM) for time dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed form solution obtained by the method of manufactured solution. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional Von Karman street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)
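The sensitivity equation method summarized above can be illustrated on a scalar ODE: differentiating the state equation with respect to a parameter yields a second ODE for the sensitivity, integrated alongside the flow. A minimal sketch (the logistic model and forward Euler integrator are illustrative choices, not the authors' Navier-Stokes setting):

```python
import numpy as np

def solve_with_sensitivity(p, y0=0.1, t_end=5.0, n=5000):
    """Integrate dy/dt = p*y*(1-y) together with its continuous
    sensitivity equation ds/dt = p*(1-2y)*s + y*(1-y), s(0) = 0,
    using forward Euler.  Returns (y(t_end), s(t_end))."""
    dt = t_end / n
    y, s = y0, 0.0
    for _ in range(n):
        dy = p * y * (1.0 - y)
        ds = p * (1.0 - 2.0 * y) * s + y * (1.0 - y)
        y, s = y + dt * dy, s + dt * ds   # update state and sensitivity together
    return y, s

p = 1.0
y, s = solve_with_sensitivity(p)
# Cross-check the sensitivity against a centred finite difference of the flow.
eps = 1e-6
fd = (solve_with_sensitivity(p + eps)[0] - solve_with_sensitivity(p - eps)[0]) / (2 * eps)
print(y, s, fd)
```

As in the paper's verification by manufactured solutions, the point of the cross-check is that the integrated sensitivity should agree with a finite-difference perturbation of the solver itself.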

  2. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) when calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as the figure of merit for the influence of input parameters on output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of a parameter variation to the results. They show when the input parameters are influential and how large this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
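A rough sketch of the average-amplitude figure of merit with signal mirroring described above (the frequency-range and weighting conventions of the actual FFTBM are omitted; function and signal names are illustrative):

```python
import numpy as np

def average_amplitude(reference, variant):
    """FFTBM-style average amplitude: the spectrum of the discrepancy
    normalised by the spectrum of the reference signal.  Both signals
    are mirrored (appended in reverse) first, which removes the jump
    between the last and first samples -- the 'edge effect'."""
    def mirror(x):
        return np.concatenate([x, x[::-1]])
    err = np.abs(np.fft.rfft(mirror(variant - reference)))
    ref = np.abs(np.fft.rfft(mirror(reference)))
    return err.sum() / ref.sum()

# Reference calculation vs. two sensitivity runs: the larger the
# perturbation of the signal, the larger the average amplitude.
t = np.linspace(0.0, 1.0, 512)
reference = np.sin(2 * np.pi * 3 * t) + 2.0
close     = reference + 0.01 * np.cos(2 * np.pi * 7 * t)
far       = reference + 0.20 * np.cos(2 * np.pi * 7 * t)
a_close = average_amplitude(reference, close)
a_far = average_amplitude(reference, far)
print(a_close, a_far)
```

Used this way, the reference calculation plays the role of the "experimental signal" and each sensitivity run the role of the "calculated signal", so a larger average amplitude flags a more influential input parameter.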

  3. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. It is based on a measure of importance used to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the development of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at a reasonable computational cost
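The first-order and total-effect indices discussed above are commonly estimated by pick-and-freeze Monte Carlo sampling. A compact sketch using the standard Saltelli/Jansen estimators (the additive test model and sample size are illustrative choices):

```python
import numpy as np

def sobol_indices(f, d, n=1 << 14, seed=0):
    """Monte Carlo estimates of first-order (S_i) and total-effect (T_i)
    Sobol' indices for f on the unit hypercube, via pick-and-freeze
    sampling with the Saltelli (S) and Jansen (T) estimators."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, T = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # vary only factor i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var        # first-order effect
        T[i] = np.mean((fA - fABi) ** 2) / (2 * var)  # total effect
    return S, T

# Additive test model y = x1 + 2*x2: analytically S = T = (0.2, 0.8),
# and S_i = T_i because there are no interaction terms.
S, T = sobol_indices(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2)
print(S, T)
```

For a model with interactions, T_i would exceed S_i, and the gap measures the synergetic terms the abstract refers to.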

  4. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The 'partial' part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
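A minimal illustration of the PRCC computation described above: rank-transform everything, regress out the other inputs from both the input of interest and the output, then correlate the residuals (the toy model and variable names are assumptions, not the IMM itself):

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    def ranks(a):
        r = np.empty(len(a))
        r[np.argsort(a)] = np.arange(len(a))
        return r
    R = np.column_stack([ranks(c) for c in X.T])   # ranked inputs
    ry = ranks(y)                                  # ranked output
    out = []
    for i in range(X.shape[1]):
        # Regress out the ranks of all other inputs (plus an intercept).
        others = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Toy nonlinear model: monotone increasing in x1, decreasing in x2, x3 inert.
rng = np.random.default_rng(1)
X = rng.random((2000, 3))
y = np.exp(2.0 * X[:, 0]) - 5.0 * X[:, 1] + 0.01 * rng.standard_normal(2000)
coeffs = prcc(X, y)
print(coeffs)
```

Because only ranks enter the calculation, the strong but nonlinear effect of x1 is recovered just as cleanly as the linear effect of x2, which is the property the abstract highlights.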

  5. System reliability assessment via sensitivity analysis in the Markov chain scheme

    International Nuclear Information System (INIS)

    Gandini, A.

    1988-01-01

    Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they allow the identification of important components and so assist the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given [fr

  6. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    Science.gov (United States)

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern

  7. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs

  8. High-intensity xenon plasma discharge lamp for bulk-sensitive high-resolution photoemission spectroscopy.

    Science.gov (United States)

    Souma, S; Sato, T; Takahashi, T; Baltzer, P

    2007-12-01

    We have developed a highly brilliant xenon (Xe) discharge lamp operated by microwave-induced electron cyclotron resonance (ECR) for ultrahigh-resolution bulk-sensitive photoemission spectroscopy (PES). We observed at least eight strong radiation lines from neutral or singly ionized Xe atoms in the energy region of 8.4-10.7 eV. The photon flux of the strongest Xe I resonance line at 8.437 eV is comparable to that of the He Iα line (21.218 eV) from the He-ECR discharge lamp. Stable operation for more than 300 h is achieved by efficient air-cooling of a ceramic tube in the resonance cavity. The high bulk sensitivity and high energy resolution of PES using the Xe lines are demonstrated for some typical materials.

  9. Water-Sensitivity Characteristics of Briquettes Made from High-Rank Coal

    Directory of Open Access Journals (Sweden)

    Geng Yunguang

    2016-01-01

    Full Text Available In order to study the water sensitivity characteristics of the coalbed methane (CBM) reservoir in the southern Qinshui Basin, scanning electron microscopy, mineral composition and water sensitivity tests were carried out on cores from the main coalbed 3. Because CBM reservoirs in this area are characterized by low porosity and low permeability, the common water sensitivity experiment on cores cannot be used; instead, briquettes were chosen for the test to analyze the water sensitivity of the CBM reservoirs. Results show that the degree of water sensitivity in the study area varies from weak to moderate. The controlling factors of water sensitivity are clay mineral content, the occurrence type of clay minerals, permeability and liquid flow rate. The water sensitivity damage rate is positively correlated with clay mineral content and liquid flow rate, and is negatively correlated with core permeability. The water sensitivity of the CBM reservoir exhibits two damage mechanisms: static permeability decline caused by clay mineral hydration and swelling, and dynamic permeability decline caused by dispersion/migration of clay minerals.

  10. A global sensitivity analysis of crop virtual water content

    Science.gov (United States)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a resolution of 5x5 arc minutes and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted on one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
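The one-at-a-time sensitivity index defined above, relative change of the output over relative change of one input, can be sketched generically. The evapotranspiration-over-yield toy model below is only a stand-in for the paper's full daily soil water balance:

```python
def sensitivity_index(model, params, name, delta=0.01):
    """One-at-a-time sensitivity index: relative change of the model
    output divided by the relative change of one input parameter,
    with all other parameters held at their reference values."""
    base = model(**params)
    perturbed = dict(params, **{name: params[name] * (1.0 + delta)})
    return ((model(**perturbed) - base) / base) / delta

# Toy stand-in for the VWC model: seasonal evapotranspiration over yield.
def vwc(et, yield_):
    return et / yield_

params = {"et": 500.0, "yield_": 4.0}
s_et = sensitivity_index(vwc, params, "et")        # direct effect, about +1
s_yield = sensitivity_index(vwc, params, "yield_") # inverse effect, about -1
print(s_et, s_yield)
```

The sign convention matches the abstract: a positive index marks a direct dependence, a negative index an inverse one.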

  11. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
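For a linearized continuous-time model dx/dt = A x, reactivity is the largest eigenvalue of the symmetric part of A: the maximal instantaneous growth rate of perturbation magnitude at the equilibrium. A small sketch (the community matrix is an illustrative example, not one of the paper's models):

```python
import numpy as np

def reactivity(A):
    """Reactivity of dx/dt = A x: the leading eigenvalue of the
    symmetric part (A + A.T)/2, i.e. the maximal instantaneous
    growth rate of ||x|| for perturbations of the equilibrium."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

# A stable but reactive matrix: both eigenvalues of A are -1, so the
# equilibrium is asymptotically stable, yet perturbations can grow
# transiently before decaying (reactivity = 4 > 0 here).
A = np.array([[-1.0, 10.0],
              [ 0.0, -1.0]])
r = reactivity(A)
print(max(np.linalg.eigvals(A).real), r)
```

The gap between asymptotic stability (negative eigenvalues of A) and positive reactivity is exactly the transient amplification the abstract analyzes.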

  12. Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance

    Science.gov (United States)

    Kato, H.; Ito, K.

    2009-01-01

    A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using the DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with the Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using the analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input-multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.

  13. Rapid, simple, and highly sensitive analysis of drugs in biological samples using thin-layer chromatography coupled with matrix-assisted laser desorption/ionization mass spectrometry.

    Science.gov (United States)

    Kuwayama, Kenji; Tsujikawa, Kenji; Miyaguchi, Hajime; Kanamori, Tatsuyuki; Iwata, Yuko T; Inoue, Hiroyuki

    2012-01-01

    Rapid and precise identification of toxic substances is necessary for urgent diagnosis and treatment of poisoning cases and for establishing the cause of death in postmortem examinations. However, identification of compounds in biological samples using gas chromatography and liquid chromatography coupled with mass spectrometry entails time-consuming and labor-intensive sample preparations. In this study, we examined a simple preparation and highly sensitive analysis of drugs in biological samples such as urine, plasma, and organs using thin-layer chromatography coupled with matrix-assisted laser desorption/ionization mass spectrometry (TLC/MALDI/MS). When the urine containing 3,4-methylenedioxymethamphetamine (MDMA) without sample dilution was spotted on a thin-layer chromatography (TLC) plate and was analyzed by TLC/MALDI/MS, the detection limit of the MDMA spot was 0.05 ng/spot. The value was the same as that in aqueous solution spotted on a stainless steel plate. All the 11 psychotropic compounds tested (MDMA, 4-hydroxy-3-methoxymethamphetamine, 3,4-methylenedioxyamphetamine, methamphetamine, p-hydroxymethamphetamine, amphetamine, ketamine, caffeine, chlorpromazine, triazolam, and morphine) on a TLC plate were detected at levels of 0.05-5 ng, and the type (layer thickness and fluorescence) of TLC plate did not affect detection sensitivity. In addition, when rat liver homogenate obtained after MDMA administration (10 mg/kg) was spotted on a TLC plate, MDMA and its main metabolites were identified using TLC/MALDI/MS, and the spots on a TLC plate were visualized by MALDI/imaging MS. The total analytical time from spotting of intact biological samples to the output of analytical results was within 30 min. TLC/MALDI/MS enabled rapid, simple, and highly sensitive analysis of drugs from intact biological samples and crude extracts. Accordingly, this method could be applied to rapid drug screening and precise identification of toxic substances in poisoning cases and

  14. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment

  15. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    textabstractSensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an

  16. Development of high sensitivity and high speed large size blank inspection system LBIS

    Science.gov (United States)

    Ohara, Shinobu; Yoshida, Akinori; Hirai, Mitsuo; Kato, Takenori; Moriizumi, Koichi; Kusunose, Haruhiko

    2017-07-01

    The production of high-resolution flat panel displays (FPDs) for mobile phones today requires the use of high-quality large-size photomasks (LSPMs). Organic light emitting diode (OLED) displays use several transistors on each pixel for precise current control and, as such, the mask patterns for OLED displays are denser and finer than the patterns for previous-generation displays throughout the entire mask surface. It is therefore strongly demanded that mask patterns be produced with high fidelity and free of defects. To enable the production of a high-quality LSPM in a short lead time, manufacturers need a high-sensitivity, high-speed mask blank inspection system that meets the requirements of advanced LSPMs. Lasertec has developed a large-size blank inspection system called LBIS, which achieves high sensitivity based on a laser-scattering technique. LBIS employs a high-power laser as its inspection light source. LBIS's delivery optics, including a scanner and an F-Theta scan lens, focus the light from the source linearly on the surface of the blank. Its specially designed optics collect the light scattered by particles and defects generated during the manufacturing process, such as scratches, on the surface and guide it to photomultiplier tubes (PMTs) with high efficiency. Multiple PMTs are used on LBIS for the stable detection of scattered light, which may be distributed at various angles due to irregular shapes of defects. LBIS captures 0.3 μm PSL at a detection rate of over 99.5% with uniform sensitivity. Its inspection time is 20 minutes for a G8 blank and 35 minutes for a G10 blank. The differential interference contrast (DIC) microscope on the inspection head of LBIS captures high-contrast review images after inspection. The images are classified automatically.

  17. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variations in outputs brought about by changes in parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and present closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities on Markovian models.

  18. Multimode fiber tip Fabry-Perot cavity for highly sensitive pressure measurement.

    Science.gov (United States)

    Chen, W P; Wang, D N; Xu, Ben; Zhao, C L; Chen, H F

    2017-03-23

    We demonstrate an optical Fabry-Perot interferometer fiber tip sensor based on an etched end of multimode fiber filled with ultraviolet adhesive. The fiber device is miniature (with a diameter of less than 60 μm), robust, and low cost, operates in a convenient reflection mode, and has a very high gas pressure sensitivity of -40.94 nm/MPa, a large temperature sensitivity of 213 pm/°C within the range from 55 to 85 °C, and a relatively low temperature cross-sensitivity of 5.2 kPa/°C. This device has high potential for monitoring high-pressure environments.

  19. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where a sensitivity analysis is first performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
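
    The first step of such a sequential analysis can be sketched as estimating a first-order (variance-based) Sobol' index for one regressive variable while the model parameters are held fixed. The linear model and the pick-freeze/correlation estimator below are illustrative choices, not the method of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x1, x2 are regressive variables; a, b stand in for model
# parameters and are held fixed in this first step.
def model(x1, x2, a=2.0, b=1.0):
    return a * x1 + b * x2

n = 200_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x2_resampled = rng.standard_normal(n)

# Pick-freeze estimator: S1 = Cov(f(x1, x2), f(x1, x2')) / Var(y),
# where x2' is an independent resample of x2.
y = model(x1, x2)
y_frozen = model(x1, x2_resampled)
s1 = np.cov(y, y_frozen)[0, 1] / np.var(y)
# Analytically S1 = a^2 / (a^2 + b^2) = 0.8 for this linear model.
```

    In the second step one would additionally sample a and b from their distributions to include the parameter uncertainty effects.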

  20. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method of sensitivity analysis; a new method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension proposes using a multi-objective Pareto optimal search to locate minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account shows the importance of ensuring that any sensitivity analysis accounts for these changes.

  1. Immune Profiles to Predict Response to Desensitization Therapy in Highly HLA-Sensitized Kidney Transplant Candidates.

    Science.gov (United States)

    Yabu, Julie M; Siebert, Janet C; Maecker, Holden T

    2016-01-01

    Kidney transplantation is the most effective treatment for end-stage kidney disease. Sensitization, the formation of human leukocyte antigen (HLA) antibodies, remains a major barrier to successful kidney transplantation. Despite the implementation of desensitization strategies, many candidates fail to respond. Current progress is hindered by the lack of biomarkers to predict response and to guide therapy. Our objective was to determine whether differences in immune and gene profiles may help identify which candidates will respond to desensitization therapy. Single-cell mass cytometry by time-of-flight (CyTOF) phenotyping, gene arrays, and phosphoepitope flow cytometry were performed in a study of 20 highly sensitized kidney transplant candidates undergoing desensitization therapy. Responders to desensitization therapy were defined as 5% or greater decrease in cumulative calculated panel reactive antibody (cPRA) levels, and non-responders had 0% decrease in cPRA. Using a decision tree analysis, we found that a combination of transitional B cell and regulatory T cell (Treg) frequencies at baseline before initiation of desensitization therapy could distinguish responders from non-responders. Using a support vector machine (SVM) and longitudinal data, TRAF3IP3 transcripts and HLA-DR-CD38+CD4+ T cells could also distinguish responders from non-responders. Combining all assays in a multivariate analysis and elastic net regression model with 72 analytes, we identified seven that were highly interrelated and eleven that predicted response to desensitization therapy. Measuring baseline and longitudinal immune and gene profiles could provide a useful strategy to distinguish responders from non-responders to desensitization therapy. This study presents the integration of novel translational studies including CyTOF immunophenotyping in a multivariate analysis model that has potential applications to predict response to desensitization, select candidates, and personalize

  2. Using sparse polynomial chaos expansions for the global sensitivity analysis of groundwater lifetime expectancy in a multi-layered hydrogeological model

    International Nuclear Information System (INIS)

    Deman, G.; Konakli, K.; Sudret, B.; Kerrou, J.; Perrochet, P.; Benabderrahmane, H.

    2016-01-01

    The study makes use of polynomial chaos expansions to compute Sobol' indices within the frame of a global sensitivity analysis of hydro-dispersive parameters in a simplified vertical cross-section of a segment of the subsurface of the Paris Basin. Applying conservative ranges, the uncertainty in 78 input variables is propagated upon the mean lifetime expectancy of water molecules departing from a specific location within a highly confining layer situated in the middle of the model domain. Lifetime expectancy is a hydrogeological performance measure pertinent to safety analysis with respect to subsurface contaminants, such as radionuclides. The sensitivity analysis indicates that the variability in the mean lifetime expectancy can be sufficiently explained by the uncertainty in the petrofacies, i.e. the sets of porosity and hydraulic conductivity, of only a few layers of the model. The obtained results provide guidance regarding the uncertainty modeling in future investigations employing detailed numerical models of the subsurface of the Paris Basin. Moreover, the study demonstrates the high efficiency of sparse polynomial chaos expansions in computing Sobol' indices for high-dimensional models. - Highlights: • Global sensitivity analysis of a 2D 15-layer groundwater flow model is conducted. • A high-dimensional random input comprising 78 parameters is considered. • The variability in the mean lifetime expectancy for the central layer is examined. • Sparse polynomial chaos expansions are used to compute Sobol' sensitivity indices. • The petrofacies of a few layers can sufficiently explain the response variance.
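
    The link between polynomial chaos coefficients and Sobol' indices can be sketched on a toy two-parameter model. This is a dense least-squares PCE over Legendre polynomials, not the sparse adaptive scheme of the study; the model and degrees are invented for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)

# Toy model on [-1, 1]^2; Legendre polynomials are orthogonal under the
# uniform measure, so Sobol' indices follow directly from PCE coefficients.
def model(x1, x2):
    return 1.0 + 2.0 * x1 + 0.5 * x2 + 1.5 * x1 * x2

n = 1000
x1, x2 = rng.uniform(-1.0, 1.0, n), rng.uniform(-1.0, 1.0, n)
y = model(x1, x2)

def leg(p, x):                       # evaluate the Legendre polynomial P_p
    c = np.zeros(p + 1)
    c[p] = 1.0
    return legval(x, c)

# Total-degree-2 basis, fitted by (dense) least squares.
idx = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
A = np.column_stack([leg(p1, x1) * leg(p2, x2) for p1, p2 in idx])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Partial variances: coef^2 times E[P_p1^2 * P_p2^2] = 1/((2p1+1)(2p2+1)).
norm2 = np.array([1.0 / ((2 * p1 + 1) * (2 * p2 + 1)) for p1, p2 in idx])
var_terms = coef**2 * norm2
total_var = var_terms[1:].sum()            # exclude the constant term
S1 = var_terms[[1, 3]].sum() / total_var   # terms involving only x1
```

    Once the expansion is fit, all partial variances are read directly off the coefficients with no further model runs, which is the source of the efficiency the study reports for high-dimensional problems.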

  3. A Fuel-Sensitive Reduced-Order Model (ROM) for Piston Engine Scaling Analysis

    Science.gov (United States)

    2017-09-29

    ARL-TR-8172, US Army Research Laboratory, Sep 2017. [The record contains only fragmentary reference text from the report, including citations on high Reynolds number nonreacting and reacting JP-8 sprays in a constant pressure flow vessel with a detailed chemistry approach (J Energy Resour) and on rapid grid generation applied to in-cylinder diesel engine simulations (SAE Technical Paper, Society of Automotive Engineers, 2007).]

  4. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  5. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  6. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.

  7. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    International Nuclear Information System (INIS)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya

    2018-01-01

    The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels), and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4-channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization finesse. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  8. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya [KEPCO International Nuclear Graduate School (KINGS), Ulsan (Korea, Republic of). Dept. of NPP Engineering

    2018-02-15

    The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels), and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4-channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization finesse. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  9. High mass resolution time of flight mass spectrometer for measuring products in heterogeneous catalysis in highly sensitive microreactors

    DEFF Research Database (Denmark)

    Andersen, Thomas; Jensen, Robert; Christensen, M. K.

    2012-01-01

    We demonstrate a combined microreactor and time of flight system for testing and characterization of heterogeneous catalysts with high resolution mass spectrometry and high sensitivity. Catalyst testing is performed in silicon-based microreactors which have high sensitivity and fast thermal...

  10. Highly sensitive three-dimensional interdigitated microelectrode for microparticle detection using electrical impedance spectroscopy

    International Nuclear Information System (INIS)

    Chang, Fu-Yu; Chen, Ming-Kun; Jang, Ling-Sheng; Wang, Min-Haw

    2016-01-01

    Cell impedance analysis is widely used for monitoring biological and medical reactions. In this study, a highly sensitive three-dimensional (3D) interdigitated microelectrode (IME) with a high aspect ratio on a polyimide (PI) flexible substrate was fabricated for microparticle detection (e.g. cell quantity detection) using electroforming and lithography technology. 3D finite element simulations were performed to compare the performance of the 3D IME (in terms of sensitivity and signal-to-noise ratio) to that of a planar IME for particles in the sensing area. Various quantities of particles were captured in Dulbecco’s modified Eagle medium and their impedances were measured. With the 3D IME, the particles were arranged in the gap, not on the electrode, avoiding the noise due to particle position. For the maximum particle quantities, the results show that the 3D IME has at least 5-fold higher sensitivity than that of the planar IME. The trends of impedance magnitude and phase due to particle quantity were verified using the equivalent circuit model. The impedance (1269 Ω) of 69 particles was used to estimate the particle quantity (68 particles) with 98.6% accuracy using a parabolic regression curve at 500 kHz. (paper)
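
    The quantity-from-impedance step can be sketched as a parabolic calibration curve that is then inverted for a measured impedance. The calibration data below are invented (an exact parabola for clarity), not the paper's measurements:

```python
import numpy as np

# Illustrative calibration data only; the impedance values are synthetic.
quantity = np.array([0, 10, 20, 30, 40, 50, 60, 70], dtype=float)
impedance = 800.0 + 10.0 * quantity - 0.05 * quantity**2   # Z(q) in ohms

# Fit the parabolic calibration curve Z = c2*q^2 + c1*q + c0 ...
coeffs = np.polyfit(quantity, impedance, deg=2)

# ... and invert it for a measured impedance to estimate the quantity.
def estimate_quantity(z_measured):
    c2, c1, c0 = coeffs
    roots = np.roots([c2, c1, c0 - z_measured])
    real = roots[np.isreal(roots)].real
    return real[(real >= 0.0) & (real <= 80.0)][0]   # physically meaningful root

q = estimate_quantity(1269.0)   # ~75.1 particles on this synthetic curve
```

    The paper's reported 98.6% accuracy at 500 kHz corresponds to this kind of inversion; here the fit is exact only because the synthetic data lie on a parabola by construction.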

  11. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  12. Analysis of ultra-high sensitivity configuration in chip-integrated photonic crystal microcavity bio-sensors

    International Nuclear Information System (INIS)

    Chakravarty, Swapnajit; Hosseini, Amir; Xu, Xiaochuan; Zhu, Liang; Zou, Yi; Chen, Ray T.

    2014-01-01

    We analyze the contributions of quality factor, fill fraction, and group index of chip-integrated resonance microcavity devices to the detection limit for bulk chemical sensing and the minimum detectable biomolecule concentration in biosensing. We analyze the contributions from analyte absorbance, as well as from temperature and spectral noise. Slow light in two-dimensional photonic crystals provides opportunities for significant reduction of the detection limit below 1 × 10⁻⁷ RIU (refractive index unit), which can enable highly sensitive sensors in diverse application areas. We experimentally demonstrate a detected concentration of 1 fM (67 fg/ml) for the binding between biotin and avidin, the lowest reported to date

  13. Achieving sensitive, high-resolution laser spectroscopy at CRIS

    Energy Technology Data Exchange (ETDEWEB)

    Groote, R. P. de [Instituut voor Kern- en Stralingsfysica, KU Leuven (Belgium); Lynch, K. M., E-mail: kara.marie.lynch@cern.ch [EP Department, CERN, ISOLDE (Switzerland); Wilkins, S. G. [The University of Manchester, School of Physics and Astronomy (United Kingdom); Collaboration: the CRIS collaboration

    2017-11-15

    The Collinear Resonance Ionization Spectroscopy (CRIS) experiment, located at the ISOLDE facility, has recently performed high-resolution laser spectroscopy, with linewidths down to 20 MHz. In this article, we present the modifications to the beam line and the newly-installed laser systems that have made sensitive, high-resolution measurements possible. Highlights of recent experimental campaigns are presented.

  14. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
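
    The spirit of a prior sensitivity analysis can be shown with a much simpler conjugate-normal example (not BSEM itself): re-estimate under several priors, from informative to vague, and compare the resulting estimates. All numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, 20)     # small sample; sampling sd assumed known = 1
n, ybar = len(data), data.mean()

# Normal likelihood with known variance and a Normal(prior_mean, prior_sd^2)
# prior on the mean: the posterior mean is a precision-weighted average.
def posterior_mean(prior_sd, prior_mean=0.0):
    prior_prec, data_prec = 1.0 / prior_sd**2, float(n)
    return (prior_prec * prior_mean + data_prec * ybar) / (prior_prec + data_prec)

for sd in (0.1, 1.0, 10.0):         # informative -> vague prior
    print(sd, posterior_mean(sd))
```

    With the small sample, the informative prior pulls the estimate strongly toward the prior mean, while the vague prior leaves it near the sample mean; a large spread across priors is the warning sign the article's sensitivity-analysis guide looks for.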

  15. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  16. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out parametric analysis as well as a tolerance limit based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
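
    For reference, the greedy heuristic under analysis for the 0-1 knapsack problem can be sketched as a standard ratio-greedy; the instance below is illustrative:

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic for the 0-1 knapsack problem: take items in
    decreasing value/weight ratio while they still fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining, chosen = 0, capacity, []
    for i in order:
        if weights[i] <= remaining:
            chosen.append(i)
            remaining -= weights[i]
            total_value += values[i]
    return total_value, chosen

value, items = greedy_knapsack([60, 100, 120], [10, 20, 30], 50)
```

    On this instance the greedy value is 160 (items 0 and 1) while the optimum is 220 (items 1 and 2); a tolerance-based sensitivity analysis asks how much each value or weight may change before the set chosen by the heuristic changes.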

  17. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, Hongbin

    2016-01-01

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis. A 2 × 2 fuel assembly model was developed and simulated by VERA-CS, and uncertainty quantification and sensitivity analysis were performed with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
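
    The three correlation measures used in the sensitivity analysis can be computed as sketched below on a toy linear surrogate in which an inlet-temperature-like input dominates a figure of merit. The response model, parameter names, and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Toy surrogate for a figure of merit such as MDNBR (values illustrative).
t_inlet = rng.normal(560.0, 5.0, n)          # coolant inlet temperature (K)
flow = rng.normal(1.0, 0.05, n)              # normalized flow rate
mdnbr = (3.0 - 0.04 * (t_inlet - 560.0) + 0.5 * (flow - 1.0)
         + rng.normal(0.0, 0.02, n))         # additive model noise

# Pearson: linear correlation of input and output.
pearson = np.corrcoef(t_inlet, mdnbr)[0, 1]

# Spearman: Pearson correlation of the ranks (no ties with continuous data).
def rank(a):
    return np.argsort(np.argsort(a))
spearman = np.corrcoef(rank(t_inlet), rank(mdnbr))[0, 1]

# Partial correlation: correlate residuals after regressing out `flow`.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)
partial = np.corrcoef(residuals(t_inlet, flow), residuals(mdnbr, flow))[0, 1]
```

    All three coefficients identify the inlet temperature as strongly (negatively) influential, mirroring the study's finding that coolant inlet temperature was consistently the most influential parameter.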

  18. Ultra-sensitive high performance liquid chromatography-laser-induced fluorescence based proteomics for clinical applications.

    Science.gov (United States)

    Patil, Ajeetkumar; Bhat, Sujatha; Pai, Keerthilatha M; Rai, Lavanya; Kartha, V B; Chidangil, Santhosh

    2015-09-08

    An ultra-sensitive high performance liquid chromatography-laser induced fluorescence (HPLC-LIF) based technique has been developed by our group at Manipal for screening, early detection, and staging of various cancers, using protein profiling of clinical samples such as body fluids, cellular specimens, and biopsy tissue. More than 300 protein profiles of different clinical samples (serum, saliva, cellular samples, and tissue homogenates) from volunteers (normal, and with different pre-malignant/malignant conditions) were recorded using this set-up. The protein profiles were analyzed using principal component analysis (PCA) to achieve objective detection and classification of malignant, premalignant, and healthy conditions with high sensitivity and specificity. The HPLC-LIF protein profiling combined with PCA, as a routine method for screening, diagnosis, and staging of cervical cancer and oral cancer, is discussed in this paper. In recent years, proteomics techniques have advanced tremendously in the life and medical sciences for the detection and identification of proteins in body fluids, tissue homogenates, and cellular samples in order to understand the biochemical mechanisms leading to different diseases; these methods include high performance liquid chromatography, 2D-gel electrophoresis, MALDI-TOF-MS, SELDI-TOF-MS, CE-MS, and LC-MS.
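
    The PCA classification step can be sketched on synthetic "profiles": two groups of spectra whose mean profiles differ, projected onto the leading principal components. The data below are simulated stand-ins, not HPLC-LIF profiles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic profiles: 20 samples x 50 channels; the second group's mean
# profile is shifted (stand-ins for healthy vs malignant, data invented).
normal = rng.normal(0.0, 1.0, (10, 50))
malignant = rng.normal(0.0, 1.0, (10, 50)) + np.linspace(0.0, 3.0, 50)
profiles = np.vstack([normal, malignant])

# PCA via SVD on the mean-centered data matrix.
centered = profiles - profiles.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T        # scores on the first two components

pc1 = scores[:, 0]                  # PC1 should separate the two groups
```

    Plotting the first two principal-component scores is the usual way such groupings are inspected; here the first component alone separates the two synthetic groups.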

  19. Neutron activation analysis of high purity substances

    International Nuclear Information System (INIS)

    Gil'bert, Eh.N.

    1987-01-01

    Peculiarities of neutron activation analysis (NAA) of high-purity substances are considered. The advantages of NAA include simultaneous determination of a wide range of elements, high sensitivity (lower bounds of determined contents of 10⁻⁹-10⁻¹⁰%), high selectivity and accuracy (Sr = 0.10-0.15, which may be decreased to 0.001), the possibility of analyzing samples from several micrograms to hundreds of grams, and simplicity of calibration. Also considered is the accounting of systematic errors in NAA associated with neutron-flux screening by the analyzed matrix, with the production of radionuclides of the determined elements from accompanying elements via concurrent nuclear reactions, and with the self-absorption of the recorded radiation by compact samples.

  20. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstrap samples. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indices. To demonstrate the convergence test's independence of the underlying SA method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of these methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
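
    One of the two SA methods the convergence test was applied to, Morris / Elementary Effects screening, can be sketched in simplified one-at-a-time form (the full Morris method uses trajectories through the input space; the toy model and step size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model with one influential parameter and one nearly inert one.
def model(x):
    return 4.0 * x[0] + 0.1 * np.sin(2.0 * np.pi * x[1])

d, r, delta = 2, 50, 0.1
effects = np.zeros((r, d))
for k in range(r):
    base = rng.uniform(0.0, 1.0 - delta, d)   # random base point in [0, 1)^d
    y0 = model(base)
    for i in range(d):
        x = base.copy()
        x[i] += delta                          # perturb one input at a time
        effects[k, i] = (model(x) - y0) / delta

# mu* (mean absolute elementary effect) ranks parameter importance.
mu_star = np.abs(effects).mean(axis=0)
```

    The number of repetitions r is the budget whose adequacy a convergence test such as MVA is meant to assess: if mu* still changes noticeably as r grows, the screening has not yet converged.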

  1. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to a number of input parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied the sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  2. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to a number of input parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied the sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  3. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  4. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

    Full Text Available This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  5. Highly efficient and stable cyclometalated ruthenium(II) complexes as sensitizers for dye-sensitized solar cells

    International Nuclear Information System (INIS)

    Huang, Jian-Feng; Liu, Jun-Min; Su, Pei-Yang; Chen, Yi-Fan; Shen, Yong; Xiao, Li-Min; Kuang, Dai-Bin; Su, Cheng-Yong

    2015-01-01

    Highlights: • Four novel thiocyanate-free cyclometalated ruthenium sensitizers were conveniently synthesized. • The D-CF3-sensitized DSSCs show higher efficiency compared to N719-based cells. • The DSSCs based on D-CF3 and D-bisCF3 sensitizers exhibit excellent long-term stability. • Diverse cyclometalated Ru complexes can be developed as high-performance sensitizers for use in DSSCs. - Abstract: Four novel thiocyanate-free cyclometalated Ru(II) complexes, D-bisCF3, D-CF3, D-OMe, and D-DPA, each bearing two 4,4′-dicarboxylic acid-2,2′-bipyridine ligands together with a functionalized phenylpyridine ancillary ligand, have been designed and synthesized. The effect of the different substituents (R = bisCF3, CF3, OMe, and DPA) on the ancillary C^N ligand on the photophysical properties and photovoltaic performance is investigated. Under standard global AM 1.5 solar conditions, the device based on the D-CF3 sensitizer gives a higher conversion efficiency of 8.74% than those based on D-bisCF3, D-OMe, and D-DPA, which can be ascribed to its broad range of visible-light absorption, appropriate localization of the frontier orbitals, weak hydrogen bonds between -CF3 and -OH groups at the TiO2 surface, moderate dye loading on TiO2, and high charge-collection efficiency. Moreover, the D-bisCF3- and D-CF3-based DSSCs exhibit good stability under 100 mW cm−2 light soaking at 60 °C for 400 h.

  6. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.

  7. Sensitivity analysis of water consumption in an office building

    Science.gov (United States)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term field study, reduction of the pressure in its water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time step durations. Correlations between maximal coefficients of water demand variation during working time and provided pressure were suggested. The influence of the provided pressure in the water connection on mean coefficients of water demand variation was pointed out, altogether for the working hours of all days and separately for days with identical working hours.

  8. An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2004-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models.This review surveys classic and modern designs for experiments with simulation models.Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc.These designs assume a

  9. Sensitivity analysis in the WWTP modelling community – new opportunities and applications

    DEFF Research Database (Denmark)

    Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.

    2010-01-01

    design (BSM1 plant layout) using Standardized Regression Coefficients (SRC) and (ii) Applying sensitivity analysis to help fine-tuning a fuzzy controller for a BNPR plant using Morris Screening. The results obtained from each case study are then critically discussed in view of practical applications......A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity...

  10. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling.
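As a sketch of the idea behind such a plot (hypothetical code, not Sinclair's or the authors' implementation), the contribution-to-the-sample-mean curve for a Monte Carlo sample is obtained by sorting the runs on one input and accumulating the normalized output; all names below are illustrative:

```python
import numpy as np

def csm_curve(x, y):
    """Contribution-to-the-sample-mean curve for one input: sort the runs
    by the input value and accumulate the normalized output."""
    order = np.argsort(x)
    frac_of_mean = np.cumsum(y[order]) / y.sum()
    quantile = np.arange(1, len(x) + 1) / len(x)
    return quantile, frac_of_mean

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=1000), rng.uniform(size=1000)
y = np.exp(3 * x1) + x2          # output driven almost entirely by x1

q, c1 = csm_curve(x1, y)
_, c2 = csm_curve(x2, y)
# An influential input bends its curve away from the diagonal;
# an unimportant input hugs it.
dev1 = np.max(np.abs(c1 - q))
dev2 = np.max(np.abs(c2 - q))
```

The maximum deviation from the diagonal is one simple numeric summary of the curve; a statistical test like the one the record proposes would formalize how large that deviation must be to matter.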

  11. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  12. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
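For reference, the NRMSE metric named above is straightforward to compute; the normalization by the observed range below is one common convention and an assumption here, since the abstract does not state which normalization the study used:

```python
import numpy as np

def nrmse(forecast, observed):
    """Root mean squared error normalized by the observed range
    (one common convention; others divide by the mean)."""
    err = np.sqrt(np.mean((forecast - observed) ** 2))
    return err / (observed.max() - observed.min())

obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
fc = obs + np.array([0.5, -0.5, 0.5, -0.5, 0.5])
print(nrmse(fc, obs))  # → 0.0625 (RMSE of 0.5 over a range of 8)
```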

  13. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understanding the behaviour of complex environmental models and the use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.

  14. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, comprising the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed with the RCS pipe and the Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the parameter sensitivities of the seismic analysis results are studied, such as the effect of another SG, supports, anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, which should be focused on in future new-type NPP SG research and design. (authors)

  15. Highly sensitive nano-porous lattice biosensor based on localized surface plasmon resonance and interference.

    Science.gov (United States)

    Yeom, Se-Hyuk; Kim, Ok-Geun; Kang, Byoung-Ho; Kim, Kyu-Jin; Yuan, Heng; Kwon, Dae-Hyuk; Kim, Hak-Rin; Kang, Shin-Won

    2011-11-07

    We propose a design for a highly sensitive biosensor based on nanostructured anodized aluminum oxide (AAO) substrates. A gold-deposited AAO substrate exhibits both optical interference and localized surface plasmon resonance (LSPR). In our sensor, application of these disparate optical properties overcomes problems of limited sensitivity, selectivity, and dynamic range seen in similar biosensors. We fabricated uniform periodic nanopore lattice AAO templates by two-step anodizing and assessed their suitability for application in biosensors by characterizing the change in optical response on addition of biomolecules to the AAO template. To determine the suitability of such structures for biosensing applications, we immobilized a layer of C-reactive protein (CRP) antibody on a gold coating atop an AAO template. We then applied a CRP antigen (Ag) atop the immobilized antibody (Ab) layer. The shift in reflectance is interpreted as being caused by the change in refractive index with membrane thickness. Our results confirm that our proposed AAO-based biosensor is highly selective toward detection of CRP antigen, and can measure a change in CRP antigen concentration of 1 fg/ml. This method can provide a simple, fast, and sensitive analysis for protein detection in real-time.

  16. An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    S. Razmyan

    2012-01-01

    Full Text Available Discriminant analysis (DA) is used for the measurement of estimates of a discriminant function by minimizing their group misclassifications to predict group membership of newly sampled data. A major source of misclassification in DA is due to the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks’ variables on the overall variance in overlap in DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and different deposit variables are more significant than uncertainties in other banks’ variables in decision making.
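A minimal sketch of such a Monte-Carlo estimate of first-order sensitivity indices, here a generic pick-and-freeze estimator on a toy additive model rather than the banking application in the record (all names and the sample size are assumptions):

```python
import numpy as np

def first_order_indices(f, d, n=100_000, seed=0):
    """Pick-and-freeze Monte-Carlo estimate of first-order indices:
    S_i = Cov(f(A), f(C_i)) / Var(f), where C_i shares only column i with A."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA = f(A)
    S = np.empty(d)
    for i in range(d):
        Ci = B.copy()
        Ci[:, i] = A[:, i]          # share input i, resample the rest
        fCi = f(Ci)
        S[i] = (np.mean(fA * fCi) - fA.mean() * fCi.mean()) / fA.var()
    return S

# Additive toy model on U[0,1]^3: the exact first-order indices are (1, 4, 9)/14.
f = lambda X: X[:, 0] + 2 * X[:, 1] + 3 * X[:, 2]
S = first_order_indices(f, d=3)
```

For an additive model the first-order indices sum to one; interactions in a real model would leave a gap between that sum and one.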

  17. Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.

    Science.gov (United States)

    Núñez, M; Vlachos, D G

    2015-01-28

    Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined to the desired sensitivity coefficients to considerably reduce the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.

  18. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
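The Pearson and Spearman measures mentioned above are easy to reproduce; this toy comparison (illustrative only, not the BISON/Dakota study data) shows why rank-based Spearman correlation is often preferred when the input-output relation is monotone but nonlinear:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between a sampled input and the response."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    """Spearman correlation: Pearson computed on the ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y = np.exp(5 * x)                  # monotone but strongly nonlinear
r_p, r_s = pearson(x, y), spearman(x, y)
# Spearman sees the monotone link as perfect; Pearson understates it.
```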

  19. Applying cost-sensitive classification for financial fraud detection under high class-imbalance

    CSIR Research Space (South Africa)

    Moepya, SO

    2014-12-01

    Full Text Available , sensitivity, specificity, recall and precision using PCA and Factor Analysis. Weighted Support Vector Machines (SVM) were shown superior to the cost-sensitive Naive Bayes (NB) and K-Nearest Neighbors classifiers....

  20. Sensitivity analysis using two-dimensional models of the Whiteshell geosphere

    Energy Technology Data Exchange (ETDEWEB)

    Scheier, N. W.; Chan, T.; Stanchell, F. W.

    1992-12-01

    As part of the assessment of the environmental impact of disposing of immobilized nuclear fuel waste in a vault deep within plutonic rock, detailed modelling of groundwater flow, heat transport and contaminant transport through the geosphere is being performed using the MOTIF finite-element computer code. The first geosphere model is being developed using data from the Whiteshell Research Area, with a hypothetical disposal vault at a depth of 500 m. This report briefly describes the conceptual model and then describes in detail the two-dimensional simulations used to help initially define an adequate three-dimensional representation, select a suitable form for the simplified model to be used in the overall systems assessment with the SYVAC computer code, and perform some sensitivity analysis. The sensitivity analysis considers variations in the rock layer properties, variations in fracture zone configurations, the impact of grouting a vault/fracture zone intersection, and variations in boundary conditions. This study shows that the configuration of major fracture zones can have a major influence on groundwater flow patterns. The flows in the major fracture zones can have high velocities and large volumes. The proximity of the radionuclide source to a major fracture zone may strongly influence the time it takes for a radionuclide to be transported to the surface. (auth)

  1. HF Propagation sensitivity study and system performance analysis with the Air Force Coverage Analysis Program (AFCAP)

    Science.gov (United States)

    Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.

    2017-12-01

    The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high-fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables which define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea state models for Bragg backscatter. We also investigate the impacts of including clutter effects due to field-aligned backscatter from small-scale ionization structures at varied levels of severity as defined by the climatological WideBand Model (WBMOD). In the absence of additional user-provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option for insertion of a user-defined gridded ionospheric representation, we compare output from the tool using the IRI and ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).

  2. An Introduction to Sensitivity Analysis for Unobserved Confounding in Non-Experimental Prevention Research

    Science.gov (United States)

    Kuramoto, S. Janet; Stuart, Elizabeth A.

    2013-01-01

    Although randomization is the gold standard for estimating causal relationships, many questions in prevention science must be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring’s risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring’s hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscore sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282

  3. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....

  4. A comparison of sorptive extraction techniques coupled to a new quantitative, sensitive, high throughput GC-MS/MS method for methoxypyrazine analysis in wine.

    Science.gov (United States)

    Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E

    2016-02-01

    Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng L(-1) range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines, IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine), have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng L(-1) in wine for IBMP), highly sensitive analytical methods to quantify methoxypyrazines at trace levels are necessary. Here we were able to achieve resolution of IBMP as well as IPMP, EMP, and SBMP from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE extraction techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng L(-1) for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high throughput, with resolution of all compounds possible with a relatively rapid 27 min GC oven program. Copyright © 2015
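As an aside on how an LOQ like the 1 ng L(-1) figure above can be estimated from calibration data, here is a generic 10-sigma sketch in the style of the common ICH criterion; the calibration numbers are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration: detector response vs. concentration (ng/L).
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
resp = np.array([0.52, 1.03, 2.10, 5.05, 9.98])
slope, intercept = np.polyfit(conc, resp, 1)  # highest degree first

# The residual standard deviation of the fit stands in for the blank's sigma.
sigma = np.std(resp - (slope * conc + intercept), ddof=2)
loq = 10 * sigma / slope   # 10-sigma limit-of-quantitation criterion
```

The companion 3.3-sigma formula gives the limit of detection; both depend strongly on how sigma is estimated, which is why validation against spiked samples (as in the record) is the decisive test.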

  5. Global sensitivity analysis using emulators, with an example analysis of large fire plumes based on FDS simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kelsey, Adrian [Health and Safety Laboratory, Harpur Hill, Buxton (United Kingdom)

    2015-12-15

    Uncertainty in model predictions of the behaviour of fires is an important issue in fire safety analysis in nuclear power plants. A global sensitivity analysis can help identify the input parameters or sub-models that have the most significant effect on model predictions. However, performing a global sensitivity analysis using Monte Carlo sampling might require thousands of simulations and therefore would not be practical for an analysis based on a complex fire code using computational fluid dynamics (CFD). An alternative approach is to perform a global sensitivity analysis using an emulator. Gaussian process emulators can be built using a limited number of simulations, and once built a global sensitivity analysis can be performed on the emulator rather than using simulations directly. Typically, reliable emulators can be built using ten simulations for each parameter under consideration, therefore allowing a global sensitivity analysis to be performed even for a complex computer code. In this paper we use an example of a large-scale pool fire to demonstrate an emulator-based approach to global sensitivity analysis. In that work an emulator-based global sensitivity analysis was used to identify the key uncertain model inputs affecting the entrainment rates and flame heights in large Liquefied Natural Gas (LNG) fire plumes. The pool fire simulations were performed using the Fire Dynamics Simulator (FDS) software. Five model inputs were varied: the fire diameter, burn rate, radiative fraction, computational grid cell size, and choice of turbulence model. The ranges used for these parameters in the analysis were determined from experiment and literature. The Gaussian process emulators used in the analysis were created using 127 FDS simulations. The emulators were checked for reliability, and then used to perform a global sensitivity analysis and uncertainty analysis. Large-scale ignited releases of LNG on water were performed by Sandia National
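A toy version of the emulator idea (a numpy-only Gaussian process posterior mean, not the machinery actually used with FDS in the record; kernel, lengthscale, and model are assumptions) illustrates how a handful of expensive runs can stand in for the simulator in Monte-Carlo sensitivity work:

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-6):
    """Posterior mean of a zero-mean GP: the emulator."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)

# "Expensive" model, run only a few times (roughly ten per input, per the
# rule of thumb in the abstract):
f = lambda X: np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]
rng = np.random.default_rng(0)
Xtr = rng.uniform(size=(25, 2))
ytr = f(Xtr)
mu0 = ytr.mean()

# The cheap emulator then stands in for the code in Monte-Carlo SA:
Xmc = rng.uniform(size=(5000, 2))
y_emu = gp_predict(Xtr, ytr - mu0, Xmc) + mu0   # centre for the zero-mean prior
y_true = f(Xmc)
rms = np.sqrt(np.mean((y_emu - y_true) ** 2))
```

Once the emulator is checked for reliability (here by comparing against the true model, which a real study cannot afford), any sampling-based sensitivity method can be applied to it at negligible cost.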

  6. A comprehensive sensitivity and uncertainty analysis of a milk drying process

    DEFF Research Database (Denmark)

    Ferrari, A.; Gutiérrez, S.; Sin, G.

    2015-01-01

    A simple steady state model of a milk drying process was built to help process understanding. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a statistical analysis for quality assurance using sensitivity analysis (SA) of inputs/parameters, identifiab......A simple steady state model of a milk drying process was built to help process understanding. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a statistical analysis for quality assurance using sensitivity analysis (SA) of inputs...... technique. SA results provide evidence towards over-parameterization in the model, and the chamber inlet dry bulb air temperature was the variable (input) with the highest sensitivity. IA results indicated that at most 4 parameters are identifiable: two from spray chamber and one from each fluid bed dryer...

  7. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aiming at achieving a wanted outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of most influential parts of the functional domain. • We investigate economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs

  8. Analysis of leachability for a sandstone uranium deposit with high acid consumption and sensitivities in Inner Mongolia

    International Nuclear Information System (INIS)

    Cheng Wei; Miao Aisheng; Li Jianhua; Zhou Lei; Chang Jingtao

    2014-01-01

    The in-situ leaching adaptability of a groundwater oxidation zone type sandstone uranium deposit in Inner Mongolia is studied. The ore of this deposit shows high acid consumption and high sensitivity in in-situ leaching. A leaching process using CO_2 + O_2 as the leaching agent, with adjustment of the HCO_3^- concentration, is suitable for the deposit. (authors)

  9. Are inflationary predictions sensitive to very high energy physics?

    International Nuclear Information System (INIS)

    Burgess, C.P.; Lemieux, F.; Holman, R.; Cline, J.M.

    2003-01-01

    It has been proposed that the successful inflationary description of density perturbations on cosmological scales is sensitive to the details of physics at extremely high (trans-Planckian) energies. We test this proposal by examining how inflationary predictions depend on higher-energy scales within a simple model where the higher-energy physics is well understood. We find the best of all possible worlds: inflationary predictions are robust against the vast majority of high-energy effects, but can be sensitive to some effects in certain circumstances, in a way which does not violate ordinary notions of decoupling. This implies both that the comparison of inflationary predictions with CMB data is meaningful, and that it is also worth searching for small deviations from the standard results in the hopes of learning about very high energies. (author)

  10. Analysis of hepatitis B surface antigen (HBsAg) using high-sensitivity HBsAg assays in hepatitis B virus carriers in whom HBsAg seroclearance was confirmed by conventional assays.

    Science.gov (United States)

    Ozeki, Itaru; Nakajima, Tomoaki; Suii, Hirokazu; Tatsumi, Ryoji; Yamaguchi, Masakatsu; Kimura, Mutsuumi; Arakawa, Tomohiro; Kuwata, Yasuaki; Ohmura, Takumi; Hige, Shuhei; Karino, Yoshiyasu; Toyota, Joji

    2018-02-01

    We investigated the utility of high-sensitivity hepatitis B surface antigen (HBsAg) assays compared with conventional HBsAg assays. Using serum samples from 114 hepatitis B virus (HBV) carriers in whom HBsAg seroclearance was confirmed by conventional HBsAg assays (cut-off value, 0.05 IU/mL), the amount of HBsAg was re-examined by high-sensitivity HBsAg assays (cut-off value, 0.005 IU/mL). Cases negative for HBsAg in both assays were defined as consistent cases, and cases positive for HBsAg in the high-sensitivity HBsAg assay only were defined as discrepant cases. There were 55 (48.2%) discrepant cases, and the range of HBsAg titers determined by high-sensitivity HBsAg assays was 0.005-0.056 IU/mL. Multivariate analysis showed that the presence of nucleos(t)ide analog therapy, liver cirrhosis, and negative anti-HBs contributed to the discrepancies between the two assays. Cumulative anti-HBs positivity rates among discrepant cases were 12.7%, 17.2%, 38.8%, and 43.9% at baseline, 1 year, 3 years, and 5 years, respectively, whereas the corresponding rates among consistent cases were 50.8%, 56.0%, 61.7%, and 68.0%, respectively. Hepatitis B virus DNA negativity rates were 56.4% and 81.4% at baseline, 51.3% and 83.3% at 1 year, and 36.8% and 95.7% at 3 years, among discrepant and consistent cases, respectively. Hepatitis B surface antigen reversion was observed only in discrepant cases. Re-examination by high-sensitivity HBsAg assays revealed that HBsAg was positive in approximately 50% of cases. Cumulative anti-HBs seroconversion rates and HBV-DNA seroclearance rates were lower in these cases, suggesting a population at risk for HBsAg reversion. © 2017 The Japan Society of Hepatology.

  11. Desensitization protocol in highly HLA-sensitized and ABO-incompatible high titer kidney transplantation.

    Science.gov (United States)

    Uchida, J; Machida, Y; Iwai, T; Naganuma, T; Kitamoto, K; Iguchi, T; Maeda, S; Kamada, Y; Kuwabara, N; Kim, T; Nakatani, T

    2010-12-01

    A positive crossmatch indicates the presence of donor-specific alloantibodies and is associated with a graft loss rate of >80%; anti-ABO blood group antibodies develop in response to exposure to foreign blood groups, resulting in immediate graft loss. However, a desensitization protocol for highly HLA-sensitized and ABO-incompatible high-titer kidney transplantation has not yet been established. We treated 6 patients with high (≥1:512) anti-A/B antibody titers and 2 highly HLA-sensitized patients. Our immunosuppression protocol was initiated 1 month before surgery and included mycophenolate mofetil (1 g/d) and/or low-dose steroid (methylprednisolone 8 mg/d). Two doses of the anti-CD20 antibody rituximab (150 mg/m(2)) were administered 2 weeks before and on the day of transplantation. We performed antibody removal with 6-12 sessions of plasmapheresis (plasma exchange or double-filtration plasmapheresis) before transplantation. Splenectomy was also performed on the day of transplantation. Postoperative immunosuppression followed the same regimen as ABO-compatible cases, in which calcineurin inhibitors were initiated 3 days before transplantation, combined with 2 doses of basiliximab. Of the 8 patients, 7 subsequently underwent successful living-donor kidney transplantation. Follow-up of our recipients showed that the patient and graft survival rates were 100%. Acute cellular rejection and antibody-mediated rejection episodes occurred in 1 of the 7 recipients. These findings suggest that our immunosuppression regimen consisting of rituximab infusions, splenectomy, plasmapheresis, and pharmacologic immunosuppression may prove to be effective as a desensitization protocol for highly HLA-sensitized and ABO-incompatible high-titer kidney transplantation. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Deterministic sensitivity analysis of two-phase flow systems: forward and adjoint methods. Final report

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1984-07-01

    This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
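The forward/adjoint duality described in this report can be sketched on a toy steady-state linear system standing in for the discretized flow equations. The matrix, source, and response functional below are illustrative assumptions, not taken from the report:

```python
import numpy as np

# Steady-state linear model A(p) x = b(p) with integral response R = c^T x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

# One parameter p scaling A[0, 0]; derivatives evaluated at p = 1.
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
db_dp = np.zeros(2)

x = np.linalg.solve(A, b)

# Forward method: one extra solve per parameter,
#   A (dx/dp) = db/dp - (dA/dp) x
dx_dp = np.linalg.solve(A, db_dp - dA_dp @ x)
dR_forward = c @ dx_dp

# Adjoint method: one adjoint solve A^T lam = c serves all parameters,
#   dR/dp = lam^T (db/dp - (dA/dp) x)
lam = np.linalg.solve(A.T, c)
dR_adjoint = lam @ (db_dp - dA_dp @ x)
```

Both routes give the same derivative; the adjoint route is cheaper when there are many parameters and few responses, which is the motivation for constructing the adjoint system in the report.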

  13. High-throughput and sensitive analysis of 3-monochloropropane-1,2-diol fatty acid esters in edible oils by supercritical fluid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Hori, Katsuhito; Matsubara, Atsuki; Uchikata, Takato; Tsumura, Kazunobu; Fukusaki, Eiichiro; Bamba, Takeshi

    2012-08-10

    We have established a high-throughput and sensitive analytical method based on supercritical fluid chromatography (SFC) coupled with triple quadrupole mass spectrometry (QqQ MS) for 3-monochloropropane-1,2-diol (3-MCPD) fatty acid esters in edible oils. All analytes were successfully separated within 9 min without sample purification. The system was precise and sensitive, with a limit of detection less than 0.063 mg/kg. The recovery rate of 3-MCPD fatty acid esters spiked into oil samples was in the range of 62.68-115.23%. Furthermore, several edible oils were tested for analyzing 3-MCPD fatty acid ester profiles. This is the first report on the analysis of 3-MCPD fatty acid esters by SFC/QqQ MS. The developed method will be a powerful tool for investigating 3-MCPD fatty acid esters in edible oils. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Introducing AAA-MS, a rapid and sensitive method for amino acid analysis using isotope dilution and high-resolution mass spectrometry.

    Science.gov (United States)

    Louwagie, Mathilde; Kieffer-Jaquinod, Sylvie; Dupierris, Véronique; Couté, Yohann; Bruley, Christophe; Garin, Jérôme; Dupuis, Alain; Jaquinod, Michel; Brun, Virginie

    2012-07-06

    Accurate quantification of pure peptides and proteins is essential for biotechnology, clinical chemistry, proteomics, and systems biology. The reference method to quantify peptides and proteins is amino acid analysis (AAA). This consists of an acidic hydrolysis followed by chromatographic separation and spectrophotometric detection of amino acids. Although widely used, this method displays some limitations, in particular the need for large amounts of starting material. Driven by the need to quantify isotope-dilution standards used for absolute quantitative proteomics, particularly stable isotope-labeled (SIL) peptides and PSAQ proteins, we developed a new AAA assay (AAA-MS). This method requires neither derivatization nor chromatographic separation of amino acids. It is based on rapid microwave-assisted acidic hydrolysis followed by high-resolution mass spectrometry analysis of amino acids. Quantification is performed by comparing MS signals from labeled amino acids (SIL peptide- and PSAQ-derived) with those of unlabeled amino acids originating from co-hydrolyzed NIST standard reference materials. For both SIL peptides and PSAQ standards, AAA-MS quantification results were consistent with classical AAA measurements. Compared to AAA assay, AAA-MS was much faster and was 100-fold more sensitive for peptide and protein quantification. Finally, thanks to the development of a labeled protein standard, we also extended AAA-MS analysis to the quantification of unlabeled proteins.

  15. Integrated thermal and nonthermal treatment technology and subsystem cost sensitivity analysis

    International Nuclear Information System (INIS)

    Harvego, L.A.; Schafer, J.J.

    1997-02-01

    The U.S. Department of Energy's (DOE) Environmental Management Office of Science and Technology (EM-50) authorized studies on alternative systems for treating contact-handled DOE mixed low-level radioactive waste (MLLW). The on-going Integrated Thermal Treatment Systems' (ITTS) and the Integrated Nonthermal Treatment Systems' (INTS) studies satisfy this request. EM-50 further authorized supporting studies including this technology and subsystem cost sensitivity analysis. This analysis identifies areas where technology development could have the greatest impact on total life cycle system costs. These areas are determined by evaluating the sensitivity of system life cycle costs relative to changes in life cycle component or phase costs, subsystem costs, contingency allowance, facility capacity, operating life, and disposal costs. For all treatment systems, the most cost sensitive life cycle phase is the operations and maintenance phase and the most cost sensitive subsystem is the receiving and inspection/preparation subsystem. These conclusions were unchanged when the sensitivity analysis was repeated on a present value basis. Opportunity exists for technology development to reduce waste receiving and inspection/preparation costs by effectively minimizing labor costs, the major cost driver, within the maintenance and operations phase of the life cycle

  16. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
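The first (screening) step of the two-step approach, the Morris method of elementary effects, can be sketched as follows. The toy model, trajectory count, and step size are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model: x[2] is nearly insensitive by construction.
    return 2.0 * x[0] + x[1] ** 2 + 1e-3 * x[2]

def morris_mu_star(f, dim, n_traj=50, delta=0.1):
    """mu* screening measure from one-at-a-time elementary effects."""
    ee = np.zeros((n_traj, dim))
    for t in range(n_traj):
        base = rng.uniform(0.0, 1.0 - delta, size=dim)
        f0 = f(base)
        for i in range(dim):
            xp = base.copy()
            xp[i] += delta                      # perturb one factor at a time
            ee[t, i] = (f(xp) - f0) / delta     # elementary effect
    return np.abs(ee).mean(axis=0)

mu_star = morris_mu_star(model, dim=3)
# x0 and x1 are flagged as important; x2 would be screened out
# before the (expensive) variance-based gPCE step.
```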

  17. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    Science.gov (United States)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used interchangeably for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
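The impact-versus-contribution distinction is easy to reproduce numerically. The sketch below assumes a deliberately nonlinear (quadratic) emission-concentration relationship, chosen purely for illustration:

```python
def concentration(e1, e2):
    # Hypothetical nonlinear chemistry: concentration ~ (total emissions)^2
    return (e1 + e2) ** 2

e1, e2 = 3.0, 1.0
c_total = concentration(e1, e2)

# Impact (sensitivity / brute-force): change when a source is removed.
impact_1 = c_total - concentration(0.0, e2)
impact_2 = c_total - concentration(e1, 0.0)

# Contribution (source apportionment, e.g. tagged species):
# shares that by construction sum to the total concentration.
contrib_1 = c_total * e1 / (e1 + e2)
contrib_2 = c_total * e2 / (e1 + e2)

impacts_sum = impact_1 + impact_2        # does NOT add up to c_total
contribs_sum = contrib_1 + contrib_2     # adds up to c_total exactly
```

Under a linear relationship the two sums would coincide; the quadratic makes the impacts over-count, which is the paper's central caution.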

  18. Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool

    Science.gov (United States)

    Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.

    2018-06-01

    Air quality has significantly improved in Europe over the past few decades. Nonetheless we still find high concentrations in measurements, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model, and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
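A minimal sketch of the UA/SA workflow described above, with a stand-in response function in place of the SHERPA module: uncertainty is quantified by Monte Carlo sampling, and a first-cut sensitivity ranking is obtained from standardized regression coefficients (one of several possible SA measures; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def response(x):
    # Stand-in for the policy module: emission inputs -> concentration change.
    a, b, c = x
    return 3.0 * a + 0.5 * b ** 2 + 0.1 * c

# (1) Uncertainty analysis: propagate input uncertainty by sampling.
n = 2000
X = rng.normal(0.0, 1.0, size=(n, 3))
y = np.array([response(x) for x in X])
out_mean, out_std = y.mean(), y.std()

# (2) Sensitivity analysis: standardized regression coefficients (SRC),
# a common first-cut global SA measure for near-linear models.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - out_mean) / out_std
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
# src[0] dominates: the first input drives most of the output uncertainty.
```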

  19. Analysis of ultra-high sensitivity configuration in chip-integrated photonic crystal microcavity bio-sensors

    Energy Technology Data Exchange (ETDEWEB)

    Chakravarty, Swapnajit, E-mail: swapnajit.chakravarty@omegaoptics.com; Hosseini, Amir; Xu, Xiaochuan [Omega Optics, Inc., Austin, Texas 78757 (United States); Zhu, Liang; Zou, Yi [Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, Texas 78758 (United States); Chen, Ray T., E-mail: raychen@uts.cc.utexas.edu [Omega Optics, Inc., Austin, Texas 78757 (United States); Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, Texas 78758 (United States)

    2014-05-12

    We analyze the contributions of quality factor, fill fraction, and group index of chip-integrated resonance microcavity devices to the detection limit for bulk chemical sensing and the minimum detectable biomolecule concentration in biosensing. We analyze the contributions from analyte absorbance, as well as from temperature and spectral noise. Slow light in two-dimensional photonic crystals provides opportunities for significant reduction of the detection limit below 1 × 10^−7 RIU (refractive index units), which can enable highly sensitive sensors in diverse application areas. We demonstrate an experimentally detected concentration of 1 fM (67 fg/ml) for the binding between biotin and avidin, the lowest reported to date.

  20. EV range sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)

    2010-07-01

    This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance; aerodynamic drag; motor efficiency; and vehicle mass. Drive cycles that were presented included: New York City Cycle (NYCC); urban dynamometer drive cycle; and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. In terms of aerodynamic drag, there was a large effect on US06 range. A large effect was also found on NYCC range in terms of motor efficiency and vehicle mass. figs.
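The one-at-a-time range sensitivity described in the presentation can be reproduced with a textbook constant-speed energy model. All vehicle numbers below are illustrative assumptions, not Azure Dynamics data:

```python
RHO_AIR = 1.2   # air density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def ev_range_km(batt_kwh=24.0, mass_kg=1500.0, crr=0.010,
                cd_a=0.7, speed_mps=20.0, motor_eff=0.90):
    f_roll = crr * mass_kg * G                       # rolling resistance, N
    f_aero = 0.5 * RHO_AIR * cd_a * speed_mps ** 2   # aerodynamic drag, N
    j_per_m = (f_roll + f_aero) / motor_eff          # battery energy per metre
    return batt_kwh * 3.6e6 / j_per_m / 1000.0       # range in km

base = ev_range_km()

# One-at-a-time sensitivity: relative range change for a +10% perturbation
# of each contributor.
sens = {
    "mass":      ev_range_km(mass_kg=1650.0) / base - 1.0,
    "crr":       ev_range_km(crr=0.011) / base - 1.0,
    "cd_a":      ev_range_km(cd_a=0.77) / base - 1.0,
    "motor_eff": ev_range_km(motor_eff=0.99) / base - 1.0,
}
# At this constant speed, the drag perturbation costs slightly more range
# than the equal-percentage mass perturbation; efficiency scales range
# directly, so a +10% efficiency gain yields +10% range.
```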

  1. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for the AnnAGNPS model's parameter selection and calibration. The runoff simulation results for the study area also showed that the sensitivity analysis was practicable for parameter adjustment, demonstrated the model's adaptability to hydrology simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
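The perturbation method used in such studies reduces to a one-at-a-time relative sensitivity index, S = (ΔY/Y)/(ΔX/X). A minimal sketch, with a hypothetical runoff response standing in for AnnAGNPS (the parameter names echo the abstract but the functional form is made up):

```python
def relative_sensitivity(model, params, name, rel_step=0.10):
    """Perturbation index S = (dY/Y) / (dX/X), one parameter at a time."""
    y0 = model(params)
    bumped = dict(params, **{name: params[name] * (1.0 + rel_step)})
    return ((model(bumped) - y0) / y0) / rel_step

# Hypothetical runoff response: a curve-number-like parameter "cn"
# dominates, a roughness-like parameter "rmn" is weakly influential.
def runoff(p):
    return p["cn"] ** 2 * (1.0 + 0.05 * p["rmn"])

params = {"cn": 70.0, "rmn": 0.4}
s_cn = relative_sensitivity(runoff, params, "cn")    # large -> sensitive
s_rmn = relative_sensitivity(runoff, params, "rmn")  # small -> insensitive
```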

  2. An analysis of students’ cognitive structures in relation to their environmental sensitivity

    Directory of Open Access Journals (Sweden)

    Gercek Cem

    2017-01-01

    One of the basic aims of the environment-related subjects included in the biology curriculum is to raise environmental awareness. Yet the fact that the concepts taught, especially in the context of environmental issues, are abstract influences meaningful learning, which in turn influences the behaviours displayed. Exposing students' cognitive structures to ensure effective and meaningful concept teaching adds to the importance of such studies, and the absence of studies in the literature analysing students' cognitive structures in relation to their environmental sensitivity demonstrates the importance of the current study. This study aims to analyse students' cognitive structures in relation to their environmental sensitivity. The study employs the survey model, one of the qualitative research designs. The study group was composed of 56 high school students in the 2016-2017 academic year and was formed through purposeful sampling. A Word Association Test (WAT) was prepared in order to uncover students' cognitive structures in relation to their environmental sensitivity. The data obtained were transcribed, subjected to content analysis, and transferred to computer. The results showed that students' cognitive structures concerning their environmental sensitivity fell into distinct categories.

  3. Depleted Nanocrystal-Oxide Heterojunctions for High-Sensitivity Infrared Detection

    Science.gov (United States)

    2015-08-28

    Approved for Public Release; Distribution Unlimited. Final Report: 4.3 Electronic Sensing - Depleted Nanocrystal-Oxide Heterojunctions for High-Sensitivity Infrared Detection.

  4. Semianalytic Design Sensitivity Analysis of Nonlinear Structures With a Commercial Finite Element Package

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Yoo, Jung Hun; Choi, Hyeong Cheol

    2002-01-01

    A finite element package is often used as a daily design tool by engineering designers to analyze and improve a design. Finite element analysis can provide the responses of a system for given design variables, but it cannot provide the information needed to improve the design, such as design sensitivity coefficients. Design sensitivity analysis is an essential step in predicting the change in responses due to a change in design variables and in optimizing a system with the aid of gradient-based optimization techniques. To develop a numerical method for design sensitivity analysis, analytical derivatives based on differentiation of the continuous or discrete finite element equations are effective, but they are difficult to obtain because internal information of the commercial finite element package, such as the shape functions, is not accessible. Therefore, design sensitivity analysis outside of the finite element package is necessary for practical application in an industrial setting. In this paper, the semianalytic method for design sensitivity analysis is used to develop a design sensitivity module outside of the commercial finite element package ANSYS. The direct differentiation method is employed to compute the design derivatives of the response, and the pseudo-load for design sensitivity analysis is evaluated efficiently by using the design variation of the related internal nodal forces. In particular, an effective method for stress and nonlinear design sensitivity analyses that is independent of the commercial finite element package is suggested. Numerical examples illustrate the accuracy and efficiency of the developed method and provide insights for implementing the suggested method in other commercial finite element packages
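The semianalytic scheme (stiffness derivative by finite difference, then a pseudo-load and one back-substitution against the already-factored stiffness) can be sketched on a two-spring system. The model and numbers are illustrative, not from the paper:

```python
import numpy as np

E, L, K2, P = 200.0, 2.0, 50.0, 10.0   # modulus, length, 2nd spring, load

def stiffness(a):
    """Global stiffness of two springs in series; spring 1 depends on the
    design variable a (a cross-section-like quantity)."""
    k1 = E * a / L
    return np.array([[k1 + K2, -K2], [-K2, K2]])

F = np.array([0.0, P])
a0 = 1.0
K = stiffness(a0)
u = np.linalg.solve(K, F)              # displacement response

# Semianalytic step: dK/da by finite difference (as done when the FE code's
# internals are inaccessible), then pseudo-load and one back-substitution.
da = 1e-6 * a0
dK_da = (stiffness(a0 + da) - K) / da
pseudo_load = -dK_da @ u               # dF/da = 0 for this design variable
du_da = np.linalg.solve(K, pseudo_load)

# Reference: overall finite difference of the full solution.
du_fd = (np.linalg.solve(stiffness(a0 + da), F) - u) / da
```

The semianalytic derivative reuses the factorization of K, so only one cheap extra solve per design variable is needed, rather than a full re-analysis.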

  5. Analysis of DNA methylation in Arabidopsis thaliana based on methylation-sensitive AFLP markers.

    Science.gov (United States)

    Cervera, M T; Ruiz-García, L; Martínez-Zapater, J M

    2002-12-01

    AFLP analysis using restriction enzyme isoschizomers that differ in their sensitivity to methylation of their recognition sites has been used to analyse the methylation state of anonymous CCGG sequences in Arabidopsis thaliana. The technique was modified to improve the quality of fingerprints and to visualise larger numbers of scorable fragments. Sequencing of amplified fragments indicated that detection was generally associated with non-methylation of the cytosine to which the isoschizomer is sensitive. Comparison of EcoRI/ HpaII and EcoRI/ MspI patterns in different ecotypes revealed that 35-43% of CCGG sites were differentially digested by the isoschizomers. Interestingly, the pattern of digestion among different plants belonging to the same ecotype is highly conserved, with the rate of intra-ecotype methylation-sensitive polymorphisms being less than 1%. However, pairwise comparisons of methylation patterns between samples belonging to different ecotypes revealed differences in up to 34% of the methylation-sensitive polymorphisms. The lack of correlation between inter-ecotype similarity matrices based on methylation-insensitive or methylation-sensitive polymorphisms suggests that whatever the mechanisms regulating methylation may be, they are not related to nucleotide sequence variation.

  6. Comprehensive RNA Analysis of the NF1 Gene in Classically Affected NF1 Affected Individuals Meeting NIH Criteria has High Sensitivity and Mutation Negative Testing is Reassuring in Isolated Cases With Pigmentary Features Only

    Directory of Open Access Journals (Sweden)

    D.G. Evans

    2016-05-01

    Interpretation: RNA analysis in individuals with presumed NF1 has high sensitivity and includes a small subset with DNET without an NF1 variant. Furthermore, negative analysis for NF1/SPRED1 provides strong reassurance to children with ≥6 CAL that they are unlikely to have NF1.

  7. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360]

    Energy Technology Data Exchange (ETDEWEB)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
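The core variance calculation in codes of this kind is the first-order "sandwich rule" of generalized perturbation theory: the response variance is the sensitivity profile folded with the cross-section covariance matrix. A sketch with illustrative placeholder numbers:

```python
import numpy as np

# Sensitivity profile of an integral response R to three (multigroup)
# cross sections, S_g = dR/dsigma_g, and their covariance matrix C.
S = np.array([0.8, -0.3, 0.1])
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.01]])

# First-order uncertainty propagation: var(R) = S^T C S.
var_R = S @ C @ S
std_R = np.sqrt(var_R)   # estimated standard deviation of the response
```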

  8. Development of a highly sensitive and specific immunoassay for enrofloxacin based on heterologous coating haptens.

    Science.gov (United States)

    Wang, Zhanhui; Zhang, Huiyan; Ni, Hengjia; Zhang, Suxia; Shen, Jianzhong

    2014-04-11

    In this paper, an enzyme-linked immunosorbent assay (ELISA) for the detection of enrofloxacin is described that uses a new derivative of enrofloxacin as the coating hapten, resulting in surprisingly high sensitivity and specificity. Incorporation of aminobutyric acid (AA) into the new derivative of enrofloxacin decreased the IC50 of the ELISA for enrofloxacin from 1.3 μg L(-1) to as low as 0.07 μg L(-1). The assay showed negligible cross-reactivity with other fluoroquinolones except ofloxacin (8.23%), marbofloxacin (8.97%) and pefloxacin (7.29%). Analysis of enrofloxacin-fortified chicken muscle showed average recoveries from 81 to 115%. The high sensitivity and specificity of the assay make it a suitable screening method for the determination of low levels of enrofloxacin in chicken muscle without a clean-up step. Copyright © 2014 Elsevier B.V. All rights reserved.
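IC50 and cross-reactivity figures like those above are typically read off a four-parameter logistic (4PL) standard curve. A minimal sketch with simulated data; the curve parameters and the cross-reactant IC50 below are hypothetical, chosen only to land near the reported percentages:

```python
import numpy as np

def four_pl(x, top, slope, ic50, bottom):
    """Four-parameter logistic used to model competitive ELISA curves."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

# Simulated standard curve centred on the reported IC50 of 0.07 ug/L;
# top/bottom/slope values are made up for illustration.
conc = np.array([0.001, 0.01, 0.07, 0.5, 5.0])
signal = four_pl(conc, top=2.0, slope=1.0, ic50=0.07, bottom=0.1)

# Read the IC50 back off the curve: concentration at half-maximal signal.
half = (2.0 + 0.1) / 2.0
ic50_est = conc[np.argmin(np.abs(signal - half))]

# Cross-reactivity (%) = IC50(enrofloxacin) / IC50(cross-reactant) x 100;
# the cross-reactant IC50 of 0.85 ug/L is a hypothetical value.
cr_percent = 0.07 / 0.85 * 100.0
```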

  9. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
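The DUA idea, propagating first derivatives instead of sampling, can be sketched on a toy flow model (not the actual borehole equations) and checked against the plain Monte Carlo route it replaces:

```python
import numpy as np

rng = np.random.default_rng(2)

def flow(k, h):
    # Toy flow model standing in for the borehole problem.
    return k * h ** 1.5

k0, h0 = 2.0, 4.0          # nominal parameter values (illustrative)
sig_k, sig_h = 0.1, 0.2    # parameter standard deviations

# Deterministic route (DUA): analytic first derivatives at the nominal
# point, then first-order variance propagation -- the work of a couple of
# model executions rather than thousands.
grad = np.array([h0 ** 1.5, 1.5 * k0 * h0 ** 0.5])   # [df/dk, df/dh]
var_dua = float((grad ** 2 * np.array([sig_k ** 2, sig_h ** 2])).sum())

# Statistical route: plain Monte Carlo over many model executions.
ks = rng.normal(k0, sig_k, 50_000)
hs = rng.normal(h0, sig_h, 50_000)
var_mc = float(flow(ks, hs).var())
# For this mildly nonlinear model the two variance estimates agree closely.
```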

  10. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques in existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques in already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both the VMS and UNIX operating systems.
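The kind of exact first derivative that a source-transformation tool like GRESS produces can be mimicked with forward-mode automatic differentiation via dual numbers. This is a minimal illustrative sketch, not the GRESS implementation, and the toy response function is an assumption.

```python
class Dual:
    """Carries a value and its derivative through arithmetic (forward-mode AD)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule applied automatically
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # toy response: R(k) = k^2 + 3k, so dR/dk = 2k + 3
    return k * k + 3 * k

x = Dual(2.0, 1.0)           # seed dk/dk = 1
y = model(x)
print(y.val, y.der)          # value 10.0, exact derivative 7.0
```

A normalized sensitivity, as the abstract mentions, would then be (k/R)·dR/dk = (2/10)·7 = 1.4, computed from the same single forward pass.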

  11. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques in existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques in already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both the VMS and UNIX operating systems. (author). 9 refs, 1 tab

  12. Automatic and integrated micro-enzyme assay (AIμEA) platform for highly sensitive thrombin analysis via an engineered fluorescence protein-functionalized monolithic capillary column.

    Science.gov (United States)

    Lin, Lihua; Liu, Shengquan; Nie, Zhou; Chen, Yingzhuang; Lei, Chunyang; Wang, Zhen; Yin, Chao; Hu, Huiping; Huang, Yan; Yao, Shouzhuo

    2015-04-21

    Nowadays, large-scale screening for enzyme discovery, enzyme engineering, and drug discovery requires simple, fast, and sensitive enzyme activity assay platforms with high integration and potential for high-throughput detection. Herein, a novel automatic and integrated micro-enzyme assay (AIμEA) platform is proposed based on a unique microreaction system fabricated from an engineered green fluorescence protein (GFP)-functionalized monolithic capillary column, with thrombin as an example. The recombinant GFP probe was rationally engineered to carry a His-tag and a thrombin substrate sequence, which allow it to be immobilized on the monolith via metal affinity binding and released after thrombin digestion. Combined with capillary electrophoresis-laser-induced fluorescence (CE-LIF) detection, all the procedures, including thrombin injection, online enzymatic digestion in the microreaction system, and label-free detection of the released GFP, were integrated into a single electrophoretic process. By taking advantage of the ultrahigh loading capacity of the AIμEA platform and the automatic programming of the CE setup, one microreaction column was sufficient for repeated digestions without replacement. The novel microreaction system showed significantly enhanced catalytic efficiency, about 30-fold higher than that of the equivalent bulk reaction. Accordingly, the AIμEA platform was highly sensitive, with a limit of detection down to 1 pM of thrombin. Moreover, the platform was robust and reliable in detecting thrombin in human serum samples and its inhibition by hirudin. Hence, this AIμEA platform exhibits great potential for high-throughput analysis in future biological applications, disease diagnostics, and drug screening.

  13. Sensitivity analysis in Gaussian Bayesian networks using a symbolic-numerical technique

    International Nuclear Information System (INIS)

    Castillo, Enrique; Kjaerulff, Uffe

    2003-01-01

    The paper discusses the problem of sensitivity analysis in Gaussian Bayesian networks. The algebraic structure of the conditional means and variances, as rational functions involving linear and quadratic functions of the parameters, is used to simplify the sensitivity analysis. In particular, the probabilities of conditional variables exceeding given values, and related probabilities, are analyzed. Two examples of application are used to illustrate the concepts and methods.
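The quantities the paper analyzes can be illustrated on the smallest possible case: in a linear-Gaussian network Y = aX + ε, the distribution of Y given evidence X = x is Gaussian, so the probability of Y exceeding a threshold, and its sensitivity to a parameter such as the coefficient a, follows from the normal CDF. All numbers below are illustrative assumptions, not the paper's examples.

```python
import math

def prob_exceeds(mu, var, threshold):
    # P(Y > threshold) for Y ~ N(mu, var), via 1 - Phi(z)
    z = (threshold - mu) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))

a, var_noise, x_obs, thr = 2.0, 1.0, 1.5, 3.0

# conditional of Y given X = x_obs is N(a * x_obs, var_noise)
p_nominal = prob_exceeds(a * x_obs, var_noise, thr)            # exactly 0.5 here
# sensitivity probe: perturb the regression coefficient a
p_perturbed = prob_exceeds((a + 0.1) * x_obs, var_noise, thr)
print(round(p_nominal, 4), round(p_perturbed, 4))
```

Because the conditional mean is a rational (here linear) function of the parameters, such perturbations can be tracked symbolically rather than re-simulated, which is the efficiency the paper exploits.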

  14. Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2005-01-01

    This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning
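The adjoint mechanics that ASAP builds on can be sketched on a toy linear system (an illustrative assumption, not the paper's formulation): for A(p)u = b with response R = cᵀu, one adjoint solve Aᵀλ = c yields dR/dp for every parameter at the cost of a single extra system solve.

```python
import numpy as np

def A(p):
    # toy parameter-dependent 2x2 operator
    return np.array([[2.0 + p, 1.0],
                     [1.0, 3.0]])

b = np.array([1.0, 2.0])    # source
c = np.array([1.0, 0.0])    # response functional R = c^T u
p = 0.5

u = np.linalg.solve(A(p), b)            # forward solve
lam = np.linalg.solve(A(p).T, c)        # adjoint solve
dA_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])          # dA/dp for this toy A
dR_dp_adjoint = -lam @ dA_dp @ u        # dR/dp = -lam^T (dA/dp) u

# verify against a direct finite difference
h = 1e-6
R = lambda q: c @ np.linalg.solve(A(q), b)
dR_dp_fd = (R(p + h) - R(p - h)) / (2 * h)
print(dR_dp_adjoint, dR_dp_fd)
```

The modular idea in the abstract amounts to growing A, b, and c block by block as code modules are added, while the adjoint solve keeps its one-extra-solve cost.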

  15. First order sensitivity analysis of flexible multibody systems using absolute nodal coordinate formulation

    International Nuclear Information System (INIS)

    Pi Ting; Zhang Yunqing; Chen Liping

    2012-01-01

    Design sensitivity analysis of flexible multibody systems is important in optimizing the performance of mechanical systems. The choice of coordinates used to describe the motion of a multibody system has a great influence on the efficiency and accuracy of both the dynamic and the sensitivity analysis. In flexible multibody system dynamics, both the floating frame of reference formulation (FFRF) and the absolute nodal coordinate formulation (ANCF) are frequently used to describe flexibility; however, only the former has been used in design sensitivity analysis. In this article, ANCF, which has been developed recently and focuses on the modeling of beams and plates in large deformation problems, is extended to design sensitivity analysis of flexible multibody systems. The motion equations of a constrained flexible multibody system are expressed as a set of index-3 differential algebraic equations (DAEs), in which the element elastic forces are defined using nonlinear strain-displacement relations. Both the direct differentiation method and the adjoint variable method are used to perform the sensitivity analysis, and the related dynamic and sensitivity equations are integrated with the HHT-I3 algorithm. In this paper, a new method to deduce the system sensitivity equations is proposed. With this approach, the system sensitivity equations are constructed by assembling the element sensitivity equations with the help of invariant matrices, which has the advantage that the complex symbolic differentiation of the dynamic equations is avoided when the flexible multibody system model is changed. In addition, the dynamic and sensitivity equations formed with the proposed method can be integrated efficiently using the HHT-I3 method, which makes the efficiency of the direct differentiation method comparable to that of the adjoint variable method when the number of design variables is not extremely large. All these improvements greatly enhance the application value of the direct differentiation method.

  16. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in the outputs (pressure and flow) caused by the input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST, and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix the less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and temporally dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, the flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while the compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities for pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e., early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Nuclear data sensitivity/uncertainty analysis for XT-ADS

    International Nuclear Information System (INIS)

    Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert

    2011-01-01

    Highlights: → The sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → When the target accuracy of 0.3%Δk for the criticality is considered, these uncertainties do not satisfy it. → To achieve this accuracy, the uncertainties should be reduced through experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. The sensitivity analysis showed that the sensitivity coefficients differed significantly depending on the geometry models and calculation codes used. The uncertainty analysis confirmed that the uncertainties deduced from the covariance data varied significantly with the data set used: for the XT-ADS criticality they were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. When the target accuracy of 0.3%Δk for the criticality is considered, these uncertainties do not satisfy it. To achieve this accuracy, the uncertainties should be reduced through experiments under adequate conditions.
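The step from covariance data to a criticality uncertainty is the standard "sandwich rule": with a sensitivity profile S (relative change in k per relative change in each cross section) and a relative covariance matrix C, the relative uncertainty is u² = SᵀCS. The 3-group S and C below are made-up numbers for illustration, not the XT-ADS data.

```python
import numpy as np

S = np.array([0.4, 0.3, 0.1])                 # dk/k per dsigma/sigma, by group
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 2.0e-4],
              [0.0,    2.0e-4, 1.6e-3]])      # relative covariance matrix

u = float(np.sqrt(S @ C @ S))                 # sandwich rule: u^2 = S^T C S
print(f"relative uncertainty on k: {100 * u:.2f}%")
```

Swapping in a different evaluated covariance library changes only C, which is exactly why the abstract reports different uncertainties for the SCALE, TENDL-2009 and JENDL-3.3 data.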

  18. Portable evanescent wave fiber biosensor for highly sensitive detection of Shigella

    Science.gov (United States)

    Xiao, Rui; Rong, Zhen; Long, Feng; Liu, Qiqi

    2014-11-01

    A portable evanescent wave fiber biosensor was developed to achieve rapid and highly sensitive detection of Shigella. In this study, a DNA probe was covalently immobilized onto fiber-optic biosensors so that it can hybridize with a fluorescently labeled complementary DNA. The detection sensitivity for synthesized oligonucleotides can reach 10^-10 M. The surface of the sensor can be regenerated with 0.5% sodium dodecyl sulfate solution (pH 1.9) more than 30 times without significant deterioration of performance. The total analysis time for a single sample, including the time for measurement and surface regeneration, was less than 6 min. We performed real-time polymerase chain reaction (PCR) and compared the results of both methods to investigate the actual Shigella DNA detection capability of the fiber-optic biosensor. The fiber-optic biosensor could detect as few as 10^2 colony-forming units/mL of Shigella. This finding was comparable with that of real-time PCR, which suggests that this method is a potential alternative to existing detection methods.

  19. Antibody Desensitization Therapy in Highly Sensitized Lung Transplant Candidates

    Science.gov (United States)

    Snyder, L. D.; Gray, A. L.; Reynolds, J. M.; Arepally, G. M.; Bedoya, A.; Hartwig, M. G.; Davis, R. D.; Lopes, K. E.; Wegner, W. E.; Chen, D. F.; Palmer, S. M.

    2015-01-01

    As HLA antibody detection technology has evolved, detailed HLA antibody information is now available on prospective transplant recipients. Determining single-antigen antibody specificity allows for a calculated panel-reactive antibody (cPRA) value, providing an estimate of the effective donor pool. For broadly sensitized lung transplant candidates (cPRA ≥ 80%), our center adopted a pretransplant multimodal desensitization protocol in an effort to decrease the cPRA and expand the donor pool. This desensitization protocol included plasmapheresis, Solumedrol, bortezomib and rituximab given in combination over 19 days, followed by intravenous immunoglobulin. Eight of 18 candidates completed therapy, with the primary reasons for early discontinuation being transplant (by avoiding unacceptable antigens) or thrombocytopenia. In a mixed-model analysis, there were no significant changes in PRA or cPRA over time with the protocol. A sub-analysis of the change in median fluorescence intensity (MFI) indicated a small decline that was significant for antibodies with MFI 5000–10 000. Nine of 18 candidates subsequently had a transplant. Posttransplant survival in these nine recipients was comparable to that of other pretransplant-sensitized recipients who did not receive therapy. In summary, an aggressive multimodal desensitization protocol does not significantly reduce pretransplant HLA antibodies in a broadly sensitized lung transplant candidate cohort. PMID:24666831

  20. Design of highly sensitive multichannel bimetallic photonic crystal fiber biosensor

    Science.gov (United States)

    Hameed, Mohamed Farhat O.; Alrayk, Yassmin K. A.; Shaalan, Abdelhamid A.; El Deeb, Walid S.; Obayya, Salah S. A.

    2016-10-01

    A design of a highly sensitive multichannel biosensor based on photonic crystal fiber is proposed and analyzed. The suggested design has a silver layer as the plasmonic material, coated by a gold layer to protect the silver from oxidation. The reported sensor is based on detection using the quasi transverse electric (TE) and quasi transverse magnetic (TM) modes, which offers the possibility of multichannel/multianalyte sensing. The numerical results are obtained using a finite element method with perfectly matched layer boundary conditions. The sensor's geometrical parameters are optimized to achieve high sensitivity for the two polarized modes. High refractive-index sensitivities of about 4750 nm/RIU (refractive index unit) and 4300 nm/RIU, with corresponding resolutions of 2.1×10^-5 RIU and 2.33×10^-5 RIU, can be obtained for the quasi TM and quasi TE modes of the proposed sensor, respectively. Further, the reported design can be used as a self-calibrating biosensor for unknown analyte refractive indices ranging from 1.33 to 1.35, with high linearity and high accuracy. Moreover, the suggested biosensor has advantages in terms of compactness and better integration of the microfluidics setup, waveguide, and metallic layers into a single structure.
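The quoted resolutions follow directly from the spectral sensitivities under the usual assumption of a 0.1 nm minimum detectable wavelength shift (an assumption of this sketch, consistent with the numbers in the abstract): resolution = Δλ_min / sensitivity.

```python
def resolution(sensitivity_nm_per_riu, dlambda_min_nm=0.1):
    # smallest resolvable index change for a spectral-interrogation SPR sensor
    return dlambda_min_nm / sensitivity_nm_per_riu

r_tm = resolution(4750.0)   # quasi-TM channel -> ~2.1e-5 RIU
r_te = resolution(4300.0)   # quasi-TE channel -> ~2.33e-5 RIU
print(r_tm, r_te)
```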

  1. Sensitivity Analysis of FEAST-Metal Fuel Performance Code: Initial Results

    International Nuclear Information System (INIS)

    Edelmann, Paul Guy; Williams, Brian J.; Unal, Cetin; Yacout, Abdellatif

    2012-01-01

    This memo documents the completion of the LANL milestone, M3FT-12LA0202041, describing methodologies and initial results using FEAST-Metal. The FEAST-Metal code calculations for this work are being conducted at LANL in support of ongoing activities related to sensitivity analysis of fuel performance codes. The objective is to identify important macroscopic parameters of interest to the modeling and simulation of metallic fuel performance. This report summarizes our preliminary results for the sensitivity analysis using 6 calibration datasets for metallic fuel developed at ANL for EBR-II experiments. A sensitivity ranking methodology was deployed to narrow down the parameters selected for the current study. There are approximately 84 calibration parameters in the FEAST-Metal code, of which 32 were ultimately used in Phase II of this study. Preliminary results of this sensitivity analysis led to the following ranking of FEAST models for future calibration and improvement: fuel conductivity, fission gas transport/release, fuel creep, and precipitation kinetics. More data are needed to validate the calibrated parameter distributions for future uncertainty quantification studies with FEAST-Metal. Results of this study also served to point out some code deficiencies and possible errors, and these are being investigated in order to determine root causes and to improve the existing code models.

  2. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis was performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; they are, in order of influence: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
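The One-Factor-At-A-Time screening used in the study can be sketched as follows: each parameter is perturbed in turn around a nominal point and a normalized sensitivity index (relative output change over relative input change) is recorded. The toy gross-margin surrogate and its parameters are illustrative assumptions, not the paper's farm model.

```python
import numpy as np

def gross_margin(p):
    # made-up surrogate: margin scales with yield response, water supply, max yield
    return 100.0 * p[0] * np.sqrt(p[1]) * p[2]

nominal = np.array([0.8, 0.9, 5.0])

def oat_indices(f, p0, rel_step=0.1):
    """Normalized OAT indices: |relative output change| / relative input change."""
    y0 = f(p0)
    idx = []
    for i in range(len(p0)):
        p = p0.copy()
        p[i] *= 1.0 + rel_step          # perturb one factor, hold the rest
        idx.append(abs((f(p) - y0) / y0) / rel_step)
    return idx

s = oat_indices(gross_margin, nominal)
print([round(v, 2) for v in s])
```

Indices near 1 flag parameters with roughly proportional influence (as for the first and third factors here), which is the kind of ranking the abstract reports with indices between 0.22 and 1.28.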

  3. Sensitivity Analysis for the CLIC Damping Ring Inductive Adder

    CERN Document Server

    Holma, Janne

    2012-01-01

    The CLIC study is exploring the scheme for an electron-positron collider with high luminosity and a nominal centre-of-mass energy of 3 TeV. The CLIC pre-damping rings and damping rings will produce, through synchrotron radiation, ultra-low emittance beam with high bunch charge, necessary for the luminosity performance of the collider. To limit the beam emittance blow-up due to oscillations, the pulse generators for the damping ring kickers must provide extremely flat, high-voltage pulses. The specifications for the extraction kickers of the CLIC damping rings are particularly demanding: the flattop of the output pulse must be 160 ns duration, 12.5 kV and 250 A, with a combined ripple and droop of not more than ±0.02 %. An inductive adder allows the use of different modulation techniques and is therefore a very promising approach to meeting the specifications. PSpice has been utilised to carry out a sensitivity analysis of the predicted output pulse to the value of both individual and groups of circuit compon...

  4. Serine Protease Zymography: Low-Cost, Rapid, and Highly Sensitive RAMA Casein Zymography.

    Science.gov (United States)

    Yasumitsu, Hidetaro

    2017-01-01

    To detect serine protease activity by zymography, casein and CBB staining have been used as the substrate and the detection procedure, respectively. Casein zymography has typically used a substrate concentration of 1 mg/mL and conventional CBB staining. Although ordinary casein zymography provides reproducible results, it has several disadvantages, including long processing times and relatively low sensitivity. The improved casein zymography described here, RAMA casein zymography, is rapid and highly sensitive: it completes the detection process within 1 h after incubation and increases the sensitivity at least tenfold. In addition to serine proteases, the method also detects metalloprotease 7 (MMP7, matrilysin) with high sensitivity.

  5. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
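The framework's core question can be illustrated on the smallest possible tree: does the expected-value-maximizing strategy stay optimal when the uncertain probabilities are perturbed pessimistically? The two-strategy payoffs and probabilities below are made-up illustrative numbers, not the paper's examples.

```python
def expected_value(p_success, payoff_success, payoff_failure):
    # expected value of a single chance node
    return p_success * payoff_success + (1 - p_success) * payoff_failure

def best_strategy(p):
    risky = expected_value(p, 100.0, -40.0)
    safe = 30.0                                  # certain payoff
    return ("risky" if risky > safe else "safe"), risky

# with the nominal probability, "risky" maximizes expected value...
choice_nominal, _ = best_strategy(0.60)
# ...but a pessimistic perturbation of that probability flips the decision
choice_pessimistic, _ = best_strategy(0.45)
print(choice_nominal, choice_pessimistic)
```

A strategy whose optimality survives such perturbations would be called robust in the paper's sense; here the expected-value-maximizing strategy is not.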

  6. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three-axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and an output variable is defined, namely the magnitude of the position volumetric error. Next, a global sensitivity analysis is executed using the analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors.
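The analytical flavor of the approach can be sketched in miniature: express the volumetric error magnitude from individual error components and differentiate it in closed form to obtain exact sensitivities. The three-component error vector is an illustrative stand-in for the paper's full error synthesis model.

```python
import math

def volumetric_error(e):
    # magnitude of the position volumetric error from component errors
    return math.sqrt(sum(v * v for v in e))

def sensitivities(e):
    # exact analytic partials dE/de_i = e_i / E (no sampling needed)
    E = volumetric_error(e)
    return [v / E for v in e]

e = [3.0e-3, 4.0e-3, 12.0e-3]   # mm; toy x/y/z error components
print(volumetric_error(e), sensitivities(e))
```

The closed-form partials are what makes the analytical route cheap compared with sampling-based global methods.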

  7. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  8. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose cost can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which makes slightly more efficient use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
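The Monte Carlo machinery behind Sobol's method, in the Homma–Saltelli spirit, can be sketched as follows: two independent sample matrices A and B, plus matrices AB_i (A with column i taken from B), yield first-order and total-effect indices. The additive toy model replaces the GreenLab tree model, and the estimator forms are the common Saltelli/Jansen ones rather than the paper's exact variant.

```python
import numpy as np

def model(x):
    # toy model with known indices: S1 = 15/31, S2 = 16/31, no interactions
    return x[:, 0] + 2.0 * x[:, 1] ** 2

rng = np.random.default_rng(42)
n, d = 100_000, 2
A = rng.uniform(-1.0, 1.0, (n, d))
B = rng.uniform(-1.0, 1.0, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # resample only factor i
    fABi = model(ABi)
    S.append(np.mean(fB * (fABi - fA)) / var)     # first-order index
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total-effect index

print([round(s, 2) for s in S], [round(t, 2) for t in ST])
```

Each extra factor costs one more batch of model evaluations, which is exactly the expense the paper's more balanced re-sampling strategy and error estimator aim to control.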

  9. Aluminum nano-cantilevers for high sensitivity mass sensors

    DEFF Research Database (Denmark)

    Davis, Zachary James; Boisen, Anja

    2005-01-01

    We have fabricated Al nano-cantilevers using a very simple one-mask contact UV lithography technique, with lateral dimensions under 500 nm and vertical dimensions of approximately 100 nm. These devices are demonstrated as highly sensitive mass sensors by measuring their dynamic properties. Further...

  10. Wear-Out Sensitivity Analysis Project Abstract

    Science.gov (United States)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data on the operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from its intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
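The described workflow can be sketched as follows: assume a Weibull failure model for a unit population, Monte Carlo a fleet over a mission horizon, and watch expected failures (spares demand) shift as the wear-out (shape) parameter grows. All numbers are illustrative assumptions, not ISS ORU data.

```python
import numpy as np

rng = np.random.default_rng(1)
scale_hours = 50_000.0          # Weibull characteristic life (eta), assumed
mission_hours = 60_000.0        # horizon longer than eta, so wear-out bites
n_units, n_trials = 20, 10_000

def expected_failures(shape):
    # time to first failure per unit; count units failing within the mission
    ttf = scale_hours * rng.weibull(shape, size=(n_trials, n_units))
    return (ttf < mission_hours).sum(axis=1).mean()

demand_random = expected_failures(1.0)   # shape 1: exponential, no wear-out
demand_wearout = expected_failures(4.0)  # strong wear-out characteristic
print(demand_random, demand_wearout)
```

With the horizon beyond the characteristic life, raising the shape parameter concentrates failures before the horizon and pushes up spares demand, which is the worst-case shift the project was sizing.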

  11. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    Science.gov (United States)

    Esposito, Gaetano

    identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the reactions most strongly controlling combustion phenomena are first identified using the Morris method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected almost exclusively by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty in predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. This study therefore demonstrates that uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
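The Morris screening step mentioned above can be sketched as follows: random one-at-a-time trajectories yield "elementary effects" per parameter; the mean of their absolute values (μ*) ranks influence, while their spread flags nonlinearity and interactions. The toy model stands in for the combustion mechanism, and the trajectory settings are illustrative.

```python
import numpy as np

def model(x):
    # toy response: x0 dominant, x1 nonlinear, x2 nearly inert
    return 10.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

rng = np.random.default_rng(7)
d, r, delta = 3, 50, 0.25          # factors, trajectories, step size
effects = [[] for _ in range(d)]

for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, d)   # trajectory start in [0,1]^d
    y = model(x)
    for i in rng.permutation(d):           # move one factor at a time
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta)   # elementary effect
        x, y = x_new, y_new

mu_star = [np.mean(np.abs(e)) for e in effects]
print([round(m, 2) for m in mu_star])      # x0 dominates, x2 is negligible
```

Parameters with small μ* would be screened out before the much more expensive RS-HDMR stage, which is the two-step economy the abstract describes.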

  12. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for the drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], the latter extended by including the gas velocity as an extra input compared to our earlier work. beta(2) was found to be the most important factor for the single granule model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...

  13. Molecular structure and thermodynamic predictions to create highly sensitive microRNA biosensors

    International Nuclear Information System (INIS)

    Larkey, Nicholas E.; Brucks, Corinne N.; Lansing, Shan S.; Le, Sophia D.; Smith, Natasha M.; Tran, Victoria; Zhang, Lulu; Burrows, Sean M.

    2016-01-01

    Many studies have established microRNAs (miRNAs) as post-transcriptional regulators in a variety of intracellular molecular processes. Abnormal changes in miRNA have been associated with several diseases. However, these changes are sometimes subtle and occur at nanomolar levels or lower. Several biosensing hurdles for in situ cellular/tissue analysis of miRNA limit detection of small amounts of miRNA. Of these limitations the most challenging are selectivity and sensor degradation creating high background signals and false signals. Recently we developed a reporter+probe biosensor for let-7a that showed potential to mitigate false signal from sensor degradation. Here we designed reporter+probe biosensors for miR-26a-2-3p and miR-27a-5p to better understand the effect of thermodynamics and molecular structures of the biosensor constituents on the analytical performance. Signal changes from interactions between Cy3 and Cy5 on the reporters were used to understand structural aspects of the reporter designs. Theoretical thermodynamic values, single stranded conformations, hetero- and homodimerization structures, and equilibrium concentrations of the reporters and probes were used to interpret the experimental observations. Studies of the sensitivity and selectivity revealed 5–9 nM detection limits in the presence and absence of interfering off-analyte miRNAs. These studies will aid in determining how to rationally design reporter+probe biosensors to overcome hurdles associated with highly sensitive miRNA biosensing. - Highlights: • Challenges facing highly sensitive miRNA biosensor designs are addressed. • Thermodynamic and molecular structure design metrics for reporter+probe biosensors are proposed. • The influence of ideal and non-ideal reporter hairpin structures on reporter+probe formation and signal change are discussed. • 5–9 nM limits of detection were observed with no interference from off-analytes.

  14. Molecular structure and thermodynamic predictions to create highly sensitive microRNA biosensors

    Energy Technology Data Exchange (ETDEWEB)

    Larkey, Nicholas E.; Brucks, Corinne N.; Lansing, Shan S.; Le, Sophia D.; Smith, Natasha M.; Tran, Victoria; Zhang, Lulu; Burrows, Sean M., E-mail: sean.burrows@oregonstate.edu

    2016-02-25

    Many studies have established microRNAs (miRNAs) as post-transcriptional regulators in a variety of intracellular molecular processes. Abnormal changes in miRNA have been associated with several diseases. However, these changes are sometimes subtle and occur at nanomolar levels or lower. Several biosensing hurdles for in situ cellular/tissue analysis of miRNA limit detection of small amounts of miRNA. Of these limitations the most challenging are selectivity and sensor degradation creating high background signals and false signals. Recently we developed a reporter+probe biosensor for let-7a that showed potential to mitigate false signal from sensor degradation. Here we designed reporter+probe biosensors for miR-26a-2-3p and miR-27a-5p to better understand the effect of thermodynamics and molecular structures of the biosensor constituents on the analytical performance. Signal changes from interactions between Cy3 and Cy5 on the reporters were used to understand structural aspects of the reporter designs. Theoretical thermodynamic values, single stranded conformations, hetero- and homodimerization structures, and equilibrium concentrations of the reporters and probes were used to interpret the experimental observations. Studies of the sensitivity and selectivity revealed 5–9 nM detection limits in the presence and absence of interfering off-analyte miRNAs. These studies will aid in determining how to rationally design reporter+probe biosensors to overcome hurdles associated with highly sensitive miRNA biosensing. - Highlights: • Challenges facing highly sensitive miRNA biosensor designs are addressed. • Thermodynamic and molecular structure design metrics for reporter+probe biosensors are proposed. • The influence of ideal and non-ideal reporter hairpin structures on reporter+probe formation and signal change are discussed. • 5–9 nM limits of detection were observed with no interference from off-analytes.

  15. Uncertainty and sensitivity analysis in a Probabilistic Safety Analysis level-1

    International Nuclear Information System (INIS)

    Nunez Mc Leod, Jorge E.; Rivera, Selva S.

    1996-01-01

    A methodology for sensitivity and uncertainty analysis, applicable to a Probabilistic Safety Assessment Level 1, is presented. The work covers: correct association of distributions to parameters, importance and qualification of expert opinions, generation of samples according to sample size, and study of the relationships among system variables and system responses. A series of statistical-mathematical techniques is recommended along the development of the analysis methodology, as well as different graphical visualizations for the control of the study. (author)

  16. Comparison of global sensitivity analysis techniques and importance measures in PSA

    International Nuclear Information System (INIS)

    Borgonovo, E.; Apostolakis, G.E.; Tarantola, S.; Saltelli, A.

    2003-01-01

    This paper discusses application and results of global sensitivity analysis techniques to probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell-Vesely importance at the parameter level. Results are discussed for the large LLOCA sequence of the advanced test reactor PSA
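
As a rough sketch of the Fussell-Vesely importance measure discussed above (not taken from the paper), the rare-event approximation over the minimal cut sets of a hypothetical two-cut-set fault tree can be computed as:

```python
# Hypothetical system failing via minimal cut sets {A, B} or {C}
# (basic-event probabilities are illustrative, not from the ATR PSA).
p = {"A": 1e-2, "B": 2e-2, "C": 1e-4}
cut_sets = [("A", "B"), ("C",)]

def cut_prob(cs):
    out = 1.0
    for e in cs:
        out *= p[e]
    return out

# Rare-event approximation of the top-event probability.
top = sum(cut_prob(cs) for cs in cut_sets)

def fussell_vesely(event):
    """Fraction of top-event probability due to cut sets containing the event."""
    return sum(cut_prob(cs) for cs in cut_sets if event in cs) / top

fv = {e: fussell_vesely(e) for e in p}
```

Performing the same calculation at the parameter level, as the paper argues, would require mapping each parameter to all basic events whose probabilities depend on it before summing.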

  17. Probabilistic sensitivity analysis of optimised preventive maintenance strategies for deteriorating infrastructure assets

    International Nuclear Information System (INIS)

    Daneshkhah, A.; Stocks, N.G.; Jeffrey, P.

    2017-01-01

    Efficient life-cycle management of civil infrastructure systems under continuous deterioration can be improved by studying the sensitivity of optimised preventive maintenance decisions with respect to changes in model parameters. Sensitivity analysis in maintenance optimisation problems is important because, if the calculation of the cost of preventive maintenance strategies is not sufficiently robust, the use of the maintenance model can generate optimised maintenance strategies that are not cost-effective. Probabilistic sensitivity analysis methods (particularly variance-based ones) only partially respond to this issue, and their use is limited to evaluating the extent to which uncertainty in each input contributes to the overall output's variance. These methods do not take account of the decision-making problem in a straightforward manner. To address this issue, we use the concept of the Expected Value of Perfect Information (EVPI) to perform decision-informed sensitivity analysis: to identify the key parameters of the problem and quantify the value of learning about certain aspects of the life-cycle management of civil infrastructure systems. This approach allows us to quantify the benefits of the maintenance strategies in terms of expected costs and in the light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We use a Gamma process model to represent the uncertainty associated with asset deterioration, illustrating the use of EVPI to perform sensitivity analysis on the optimisation problem for age-based and condition-based preventive maintenance strategies. The evaluation of EVPI indices is computationally demanding, and Markov Chain Monte Carlo techniques would not be helpful. To overcome this computational difficulty, we approximate the EVPI indices using Gaussian process emulators.
The implications of the worked numerical examples discussed in the context of analytical efficiency and organisational
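
The EVPI concept used in the record above can be illustrated with a deliberately simple Monte Carlo sketch; the cost functions and the Gamma-distributed deterioration rate below are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Uncertain deterioration rate: a Gamma distribution as a stand-in for the
# Gamma-process parameter (hypothetical numbers).
theta = rng.gamma(shape=2.0, scale=0.5, size=100_000)

def cost(action, th):
    """Hypothetical expected life-cycle cost of a maintenance strategy."""
    if action == "frequent":       # insensitive to the deterioration rate
        return 10.0 + 0.0 * th
    return 4.0 + 8.0 * th          # "infrequent": cheap only if deterioration is slow

actions = ("frequent", "infrequent")
costs = np.stack([cost(a, theta) for a in actions])

best_expected = costs.mean(axis=1).min()  # best single action under uncertainty
expected_best = costs.min(axis=0).mean()  # average cost if theta were known
evpi = best_expected - expected_best      # value of resolving the uncertainty
```

A large EVPI flags a parameter worth learning about before committing to a strategy; the paper replaces the brute-force sampling above with Gaussian process emulators because realistic models are far more expensive to evaluate.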

  18. Prototype of high resolution PET using resistive electrode position sensitive CdTe detectors

    International Nuclear Information System (INIS)

    Kikuchi, Yohei; Ishii, Keizo; Matsuyama, Shigeo; Yamazaki, Hiromichi

    2008-01-01

    Downsizing detector elements makes it possible to greatly improve the spatial resolution of positron emission tomography (PET) cameras. From this point of view, semiconductor detectors are preferable. To obtain high resolution, pixel-type or multi-strip semiconductor detectors can be used. However, in this case, there is a low packing-ratio problem, because the dead area between detector arrays cannot be neglected. Here, we propose the use of position sensitive semiconductor detectors with resistive electrodes. The CdTe detector is promising as a detector for PET cameras because of its high sensitivity. In this paper, we report the development of a prototype high resolution PET scanner using resistive electrode position sensitive CdTe detectors. We made 1-dimensional position sensitive CdTe detectors experimentally by changing the electrode thickness. We obtained 750 Å as an appropriate electrode thickness for position sensitive detectors, and evaluated the performance of the detector using a collimated 241Am source. A good position resolution of 1.2 mm full width at half maximum (FWHM) was obtained. On the basis of this fundamental development of resistive electrode position sensitive detectors, we constructed a prototype high resolution PET scanner of a dual-head type, consisting of thirty-two 1-dimensional position sensitive detectors. In conclusion, we obtained high resolutions of 0.75 mm (FWHM) transaxially and 1.5 mm (FWHM) axially. (author)

  19. A hydrogel biosensor for high selective and sensitive detection of amyloid-beta oligomers.

    Science.gov (United States)

    Sun, Liping; Zhong, Yong; Gui, Jie; Wang, Xianwu; Zhuang, Xiaorong; Weng, Jian

    2018-01-01

    Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive and memory impairment. It is the most common neurological disease that causes dementia. Soluble amyloid-beta oligomers (AβO) in blood or cerebrospinal fluid (CSF) are the pathogenic biomarker correlated with AD. A simple electrochemical biosensor using graphene oxide/gold nanoparticles (GNPs) hydrogel electrode was developed in this study. Thiolated cellular prion protein (PrP C ) peptide probe was immobilized on GNPs of the hydrogel electrode to construct an AβO biosensor. Electrochemical impedance spectroscopy was utilized for AβO analysis. The specific binding between AβO and PrP C probes on the hydrogel electrode resulted in an increase in the electron-transfer resistance. The biosensor showed high specificity and sensitivity for AβO detection. It could selectively differentiate AβO from amyloid-beta (Aβ) monomers or fibrils. Meanwhile, it was highly sensitive to detect as low as 0.1 pM AβO in artificial CSF or blood plasma. The linear range for AβO detection is from 0.1 pM to 10 nM. This biosensor could be used as a cost-effective tool for early diagnosis of AD due to its high electrochemical performance and bionic structure.

  20. Low Cost, Low Power, High Sensitivity Magnetometer

    Science.gov (United States)

    2008-12-01

    ...which are used to measure the small magnetic signals from the brain. Other types of vector magnetometers are fluxgate, coil-based, and magnetoresistance... concentrator with the magnetometer currently used in Army multimodal sensor systems, the Brown fluxgate. One sees the MEMS fluxgate magnetometer is... Guedes, A.; et al., 2008: Hybrid - LOW COST, LOW POWER, HIGH SENSITIVITY MAGNETOMETER. A.S. Edelstein*, James E. Burnette, Greg A. Fischer, M.G

  1. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
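
The idea of expanding the dynamics in a functional basis before computing sensitivities can be sketched as follows. This toy example (not from the paper) projects simulated trajectories onto their first principal component and, because the toy model is linear in each parameter, uses squared correlations as first-order sensitivity indices; a real application would use proper Sobol' estimators on each component.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 5000, 50
times = np.linspace(0.0, 1.0, t)
a = rng.uniform(0.5, 1.5, n)   # growth-rate parameter (hypothetical)
b = rng.uniform(0.0, 0.2, n)   # offset parameter (hypothetical)

# Toy dynamic model: each row is one simulated trajectory.
Y = a[:, None] * np.exp(times[None, :]) + b[:, None]

# Expand the dynamics in a functional basis: here, principal components.
Yc = Y - Y.mean(axis=0)
Vt = np.linalg.svd(Yc, full_matrices=False)[2]
scores = Yc @ Vt[0]            # scores on the first principal component

# First-order sensitivity of the PC1 scores; squared correlations suffice
# only because this toy model is linear in each parameter.
S_a = np.corrcoef(a, scores)[0, 1] ** 2
S_b = np.corrcoef(b, scores)[0, 1] ** 2
```

The generalised indices proposed in the paper would then weight such per-component indices by each component's share of the total variance.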

  2. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  3. Highly sensitive assay for tyrosine hydroxylase activity by high-performance liquid chromatography.

    Science.gov (United States)

    Nagatsu, T; Oka, K; Kato, T

    1979-07-21

    A highly sensitive assay for tyrosine hydroxylase (TH) activity by high-performance liquid chromatography (HPLC) with amperometric detection was devised based on the rapid isolation of enzymatically formed DOPA by a double-column procedure, the columns fitted together sequentially (the top column of Amberlite CG-50 and the bottom column of aluminium oxide). DOPA was adsorbed on the second aluminium oxide column, then eluted with 0.5 M hydrochloric acid, and assayed by HPLC with amperometric detection. D-Tyrosine was used for the control. alpha-Methyldopa was added to the incubation mixture as an internal standard after incubation. This assay was more sensitive than radioassays and 5 pmol of DOPA formed enzymatically could be measured in the presence of saturating concentrations of tyrosine and 6-methyltetrahydropterin. The TH activity in 2 mg of human putamen could be easily measured, and this method was found to be particularly suitable for the assay of TH activity in a small number of nuclei from animal and human brain.

  4. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik

    2009-01-01

    satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify ... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for the design of an office building in Denmark.

  5. Sensitivity analysis of network DEA illustrated in branch banking

    OpenAIRE

    N. Avkiran

    2010-01-01

    Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in the literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...

  6. Sensitive high performance liquid chromatographic method for the ...

    African Journals Online (AJOL)

    A new simple, sensitive, cost-effective and reproducible high performance liquid chromatographic (HPLC) method for the determination of proguanil (PG) and its metabolites, cycloguanil (CG) and 4-chlorophenylbiguanide (4-CPB) in urine and plasma is described. The extraction procedure is a simple three-step process ...

  7. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Assembly precision optimization of complex products offers large benefits in improving product quality. Due to the coupling of a variety of deviation sources, the goal of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to assembly constraint relations, assembly sequences and locating, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, assembly deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
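
The first-order Taylor definition of deviation-source sensitivity (the ratio of assembly-dimension variation to source-dimension variation) can be sketched with central finite differences on a hypothetical one-dimensional stack-up; `assembly_gap` is an illustrative stand-in, not the wing-flap model from the paper.

```python
import numpy as np

def assembly_gap(d):
    """Hypothetical 1-D stack-up: gap = housing - (part 1 + part 2)."""
    return d[0] - d[1] - d[2]

def deviation_sensitivities(f, d0, h=1e-6):
    """Deviation-source sensitivities as the ratio of assembly-dimension
    variation to source variation (first-order Taylor, central differences)."""
    d0 = np.asarray(d0, dtype=float)
    sens = np.zeros_like(d0)
    for j in range(d0.size):
        dp, dm = d0.copy(), d0.copy()
        dp[j] += h
        dm[j] -= h
        sens[j] = (f(dp) - f(dm)) / (2.0 * h)
    return sens

s = deviation_sensitivities(assembly_gap, [50.0, 20.0, 29.5])
```

In the paper the equivalent derivatives come from the scalar equations of the 3-D vector loops rather than numerical differencing, but the interpretation of each sensitivity coefficient is the same.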

  8. Transcriptome analysis by cDNA-AFLP of Suillus luteus Cd-tolerant and Cd-sensitive isolates.

    Science.gov (United States)

    Ruytinx, Joske; Craciun, Adrian R; Verstraelen, Karen; Vangronsveld, Jaco; Colpaert, Jan V; Verbruggen, Nathalie

    2011-04-01

    The ectomycorrhizal basidiomycete Suillus luteus (L.:Fr.), a typical pioneer species which associates with young pine trees colonizing disturbed sites, is a common root symbiont found at heavy metal contaminated sites. Three Cd-sensitive and three Cd-tolerant isolates of S. luteus, isolated respectively from a non-polluted site and a heavy metal-polluted site in Limburg (Belgium), were used for a transcriptomic analysis. We identified differentially expressed genes by cDNA-AFLP analysis. The possible roles of some of the encoded proteins in heavy metal (Cd) accumulation and tolerance are discussed. Despite the high conservation of coding sequences in S. luteus, a large intraspecific variation in the transcript profiles was observed. This variation was as large in Cd-tolerant as in sensitive isolates and may help this pioneer species to adapt to novel environments.

  9. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo, Patrick N.; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost of one to two orders of magnitude less when compared to the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
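
A minimal version of the surrogate-based workflow can be sketched as follows. Assumed details, not from the paper: a toy additive model, an untuned kernel width, and a pick-freeze (Sobol'-Saltelli) estimator run on the cheap surrogate in place of the paper's analytical expression for the indices.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Toy additive test function: x0 matters far more than x1.
    return 3.0 * np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1]

# Fit a Gaussian RBF interpolant on a small design over [-1, 1]^2.
Xtr = rng.uniform(-1, 1, size=(80, 2))
ytr = model(Xtr)
eps = 2.0  # kernel shape parameter (assumed, not tuned)

def kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-eps * d2)

w = np.linalg.solve(kernel(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr)), ytr)

def surrogate(X):
    return kernel(X, Xtr) @ w

# First-order Sobol' index of x0, estimated on the cheap surrogate.
N = 20_000
A = rng.uniform(-1, 1, size=(N, 2))
B = rng.uniform(-1, 1, size=(N, 2))
AB = B.copy()
AB[:, 0] = A[:, 0]               # x0 frozen from A, x1 resampled from B
yA, yAB = surrogate(A), surrogate(AB)
S1_x0 = np.cov(yA, yAB)[0, 1] / yA.var()
```

The point of the paper is that the pick-freeze step above can be replaced by a closed-form expression in the RBF weights, removing the Monte Carlo cost entirely.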

  10. Sensitivity Analysis of Structures by Virtual Distortion Method

    DEFF Research Database (Denmark)

    Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard

    1991-01-01

    are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of sensitivity derivatives. This method has originally been applied to structural remodelling and collapse analysis, see...

  11. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
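
The key property of the adjoint approach, one extra solve regardless of the number of parameters, can be illustrated on a small linear steady-state system. This is a generic sketch with made-up matrices, not an FDTD or plasmonic model.

```python
import numpy as np

# Steady-state sketch: A(p) u = b with response J = g.T @ u. One adjoint solve
# A.T lam = g yields dJ/dp_k = -lam.T (dA/dp_k) u for *every* parameter p_k,
# i.e. a single extra solve however many parameters there are.
rng = np.random.default_rng(0)
n = 4
A = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
g = rng.standard_normal(n)

u = np.linalg.solve(A, b)        # forward solve
lam = np.linalg.solve(A.T, g)    # single adjoint solve

def dJ_dAkk(k):
    """Adjoint sensitivity of J with respect to the diagonal entry A[k, k]."""
    dA = np.zeros((n, n))
    dA[k, k] = 1.0
    return -lam @ dA @ u

# Finite-difference check for one parameter.
h = 1e-6
Ap = A.copy()
Ap[0, 0] += h
fd = (g @ np.linalg.solve(Ap, b) - g @ u) / h
```

In the FDTD setting of the paper, the forward and adjoint solves are time-domain simulations and the perturbed entries live only at the dispersive discontinuities, but the bookkeeping is the same.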

  12. Application of Monte Carlo filtering method in regional sensitivity analysis of AASHTOWare Pavement ME design

    Directory of Open Access Journals (Sweden)

    Zhong Wu

    2017-04-01

    Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG) for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by the sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In the literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF), is presented. The MCF method maintains many advantages of global sensitivity analysis, while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software is provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches that of a previous study under global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method are further discussed.
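
A minimal Monte Carlo filtering sketch, with a hypothetical two-parameter distress model standing in for the Pavement ME software, splits the sample at a design criterion and compares the per-parameter behavioral and non-behavioral distributions with a two-sample Kolmogorov-Smirnov statistic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x1 = rng.uniform(0, 1, n)      # influential input (hypothetical)
x2 = rng.uniform(0, 1, n)      # nearly inert input (hypothetical)
y = 4.0 * x1 + 0.1 * x2        # stand-in distress model

behavioral = y < 2.0           # runs meeting the design criterion

def ks_statistic(sa, sb):
    """Two-sample Kolmogorov-Smirnov statistic (no SciPy dependency)."""
    grid = np.sort(np.concatenate([sa, sb]))
    ca = np.searchsorted(np.sort(sa), grid, side="right") / len(sa)
    cb = np.searchsorted(np.sort(sb), grid, side="right") / len(sb)
    return np.abs(ca - cb).max()

d1 = ks_statistic(x1[behavioral], x1[~behavioral])
d2 = ks_statistic(x2[behavioral], x2[~behavioral])
```

A large KS distance (here for `x1`) marks a parameter whose value matters near the criterion, which is exactly the regional, design-focused ranking the MCF method provides.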

  13. Review of high-sensitivity Radon studies

    Science.gov (United States)

    Wojcik, M.; Zuzel, G.; Simgen, H.

    2017-10-01

    A challenge in many present cutting-edge particle physics experiments is the stringent requirements in terms of radioactive background, in particular the prevention of radon, a radioactive noble gas which occurs in ambient air and is also released by emanation from its omnipresent progenitor, radium. In this paper we review various high-sensitivity radon detection techniques and approaches, applied in experiments looking for rare nuclear processes happening at low energies. They make it possible to identify, quantitatively measure and finally suppress the numerous sources of radon in the detectors’ components and plants.

  14. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
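
The paper's central point, that OAT can completely miss interaction effects which variance-based measures capture, can be demonstrated on a pure-interaction toy model:

```python
import numpy as np

def f(x1, x2):
    return x1 * x2         # pure-interaction model on [-1, 1]^2

# One-at-a-time (OAT) around the nominal point (0, 0): each sweep sees nothing.
base = f(0.0, 0.0)
oat_x1 = max(abs(f(d, 0.0) - base) for d in (-1.0, 1.0))
oat_x2 = max(abs(f(0.0, d) - base) for d in (-1.0, 1.0))

# The variance-based total-effect index of x1 (Jansen estimator) sees the
# interaction that OAT misses entirely.
rng = np.random.default_rng(0)
N = 50_000
A = rng.uniform(-1, 1, (N, 2))
B = rng.uniform(-1, 1, (N, 2))
AB = A.copy()
AB[:, 0] = B[:, 0]         # resample x1 only
yA = f(A[:, 0], A[:, 1])
yAB = f(AB[:, 0], AB[:, 1])
ST_x1 = 0.5 * np.mean((yA - yAB) ** 2) / yA.var()
```

OAT reports zero sensitivity for both inputs, while the total-effect index is close to 1 for each: exactly the failure mode the review warns about for nonlinear models.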

  15. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  16. Simple Sensitivity Analysis for Orion GNC

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found input variables such as moments, mass, thrust dispersions, and date of launch to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.

  17. Recent trends in high spin sensitivity magnetic resonance

    Science.gov (United States)

    Blank, Aharon; Twig, Ygal; Ishay, Yakir

    2017-07-01

    new ideas, show how these limiting factors can be mitigated to significantly improve the sensitivity of induction detection. Finally, we outline some directions for the possible applications of high-sensitivity induction detection in the field of electron spin resonance.

  18. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
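
The post-processing step described above can be sketched in a few lines for a toy two-input model. The model, degree, and design size are invented for illustration (the toy model is chosen so that its degree-2 PCE is exact); the Legendre basis is the standard choice for independent uniform inputs, and coefficients are fitted by non-intrusive least-squares regression:

```python
import numpy as np
from numpy.polynomial import legendre
from itertools import product

rng = np.random.default_rng(1)

# Toy model with two independent U(-1, 1) inputs (hypothetical):
def model(x):
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 1]

# Multivariate Legendre basis, total degree <= 2.
degree = 2
multi_idx = [a for a in product(range(degree + 1), repeat=2)
             if sum(a) <= degree]

def leg(k, x):                       # univariate Legendre polynomial P_k(x)
    return legendre.legval(x, [0.0] * k + [1.0])

# Non-intrusive regression: least-squares fit of the PCE coefficients
# on a random experimental design.
x = rng.uniform(-1.0, 1.0, size=(200, 2))
A = np.column_stack([leg(a[0], x[:, 0]) * leg(a[1], x[:, 1])
                     for a in multi_idx])
coef, *_ = np.linalg.lstsq(A, model(x), rcond=None)

# Sobol' indices drop out analytically: each non-constant basis term
# contributes coef^2 * prod(1/(2k+1)) to the variance it belongs to.
norm = {a: np.prod([1.0 / (2 * k + 1) for k in a]) for a in multi_idx}
contrib = {a: coef[i] ** 2 * norm[a]
           for i, a in enumerate(multi_idx) if sum(a) > 0}
total_var = sum(contrib.values())
S1 = sum(v for a, v in contrib.items() if a[0] > 0 and a[1] == 0) / total_var
S2 = sum(v for a, v in contrib.items() if a[0] == 0 and a[1] > 0) / total_var
S12 = sum(v for a, v in contrib.items() if a[0] > 0 and a[1] > 0) / total_var
```

For this toy model the indices come out exactly (S1 = 5/7, S2 = 1/21, S12 = 5/21), with no Monte Carlo estimation of the indices themselves.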

  19. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.

  20. Sensitization trajectories in childhood revealed by using a cluster analysis

    DEFF Research Database (Denmark)

    Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik

    2017-01-01

    BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens, and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent…

  1. A wide-bandwidth and high-sensitivity robust microgyroscope

    International Nuclear Information System (INIS)

    Sahin, Korhan; Sahin, Emre; Akin, Tayfun; Alper, Said Emre

    2009-01-01

    This paper reports a microgyroscope design concept with the help of a 2 degrees of freedom (DoF) sense mode to achieve a wide bandwidth without sacrificing mechanical and electronic sensitivity and to obtain robust operation against variations under ambient conditions. The design concept is demonstrated with a tuning fork microgyroscope fabricated with an in-house silicon-on-glass micromachining process. When the fabricated gyroscope is operated with a relatively wide bandwidth of 1 kHz, measurements show a relatively high raw mechanical sensitivity of 131 µV/(°/s). The variation in the amplified mechanical sensitivity (scale factor) of the gyroscope is measured to be less than 0.38% for large ambient pressure variations such as from 40 to 500 mTorr. The bias instability and angle random walk of the gyroscope are measured to be 131 °/h and 1.15 °/√h, respectively.

  2. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, to evaluate how sensitive the WAVEWATCH III model is to each of them, to determine how many of these parameters should be considered for further discussion, and to establish a significance priority for each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.
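
The abstract does not spell out the ARS scheme, so the calibration idea can only be sketched generically: an adaptive random search that samples around the current best parameter value and shrinks its search radius stage by stage. The one-parameter model, bounds, shrink factor, and synthetic "buoy" observations below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-parameter stand-in for a tunable source-term constant:
# predicted wave height as a function of fetch, with growth coefficient `a`
# (true value 2.0 in the synthetic observations).
fetch = np.linspace(10.0, 200.0, 40)

def predict(a):
    return a * np.sqrt(fetch) / 10.0

obs = predict(2.0) + rng.normal(0.0, 0.02, fetch.size)  # synthetic buoy data

def rmse(a):
    return float(np.sqrt(np.mean((predict(a) - obs) ** 2)))

# Adaptive random search: sample candidates around the current best,
# recentre whenever a candidate improves, shrink the radius each stage.
lo, hi = 0.0, 5.0
best_a = 0.5 * (lo + hi)
best_err = rmse(best_a)
radius = 0.5 * (hi - lo)
for stage in range(12):
    cand = np.clip(best_a + rng.uniform(-radius, radius, 20), lo, hi)
    for a in cand:
        err = rmse(a)
        if err < best_err:
            best_a, best_err = a, err
    radius *= 0.7
```

After a dozen stages the search settles near the true coefficient, with the residual RMSE at the level of the observation noise.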

  3. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principles of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.

  4. Integration of a highly ordered gold nanowires array with glucose oxidase for ultra-sensitive glucose detection

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Jiewu [NanoScience and Sensor Technology Research Group, School of Applied Sciences and Engineering, Monash University, Gippsland Campus, Churchill 3842, VIC Australia (Australia); Laboratory of Functional Nanomaterials and Devices, School of Materials Science and Engineering, Hefei University of Technology, Hefei 230009, Anhui (China); Adeloju, Samuel B., E-mail: sam.adeloju@monash.edu [NanoScience and Sensor Technology Research Group, School of Applied Sciences and Engineering, Monash University, Gippsland Campus, Churchill 3842, VIC Australia (Australia); Wu, Yucheng, E-mail: ycwu@hfut.edu.cn [Laboratory of Functional Nanomaterials and Devices, School of Materials Science and Engineering, Hefei University of Technology, Hefei 230009, Anhui (China)

    2014-01-27

    Graphical abstract: -- Highlights: •Successfully synthesised a highly ordered gold nanowires array with an AAO template. •Fabricated an ultra-sensitive glucose nanobiosensor with the gold nanowires array. •Achieved sensitivity as high as 379.0 μA cm⁻² mM⁻¹ and a detection limit as low as 50 nM. •Achieved excellent anti-interference towards UA and AA with the aid of a Nafion membrane. •Enabled successful detection and quantification of glucose in human blood serum. -- Abstract: A highly sensitive amperometric nanobiosensor has been developed by integration of glucose oxidase (GOₓ) with a gold nanowires array (AuNWA) by cross-linking with a mixture of glutaraldehyde (GLA) and bovine serum albumin (BSA). An initial investigation of the morphology of the synthesized AuNWA by field emission scanning electron microscopy (FESEM) and field emission transmission electron microscopy (FETEM) revealed that the nanowires array was highly ordered with a rough surface; the electrochemical features of the AuNWA with and without modification were also investigated. The integrated AuNWA–BSA–GLA–GOₓ nanobiosensor with a Nafion membrane gave a very high sensitivity of 298.2 μA cm⁻² mM⁻¹ for amperometric detection of glucose, while also achieving a low detection limit of 0.1 μM and a wide linear range of 5–6000 μM. Furthermore, the nanobiosensor exhibited excellent anti-interference ability towards uric acid (UA) and ascorbic acid (AA) with the aid of the Nafion membrane, and the results obtained for the analysis of human blood serum indicated that the device is capable of glucose detection in real samples.

  5. Sensitivity Analysis to Control the Far-Wake Unsteadiness Behind Turbines

    Directory of Open Access Journals (Sweden)

    Esteban Ferrer

    2017-10-01

    Full Text Available We explore the stability of wakes arising from 2D flow actuators based on linear momentum actuator disc theory. We use stability and sensitivity analysis (using adjoints) to show that the wake stability is controlled by the Reynolds number and the thrust force (or flow resistance) applied through the turbine. First, we report that decreasing the thrust force has a stabilising effect comparable to a decrease in Reynolds number (based on the turbine diameter). Second, a discrete sensitivity analysis identifies two regions for suitable placement of flow control forcing, one close to the turbines and one far downstream. Third, we show that adding a localised control force, in the regions identified by the sensitivity analysis, stabilises the wake. Particularly, locating the control forcing close to the turbines results in an enhanced stabilisation such that the wake remains steady for significantly higher Reynolds numbers or turbine thrusts. The analysis of the controlled flow fields confirms that modifying the velocity gradient close to the turbine is more efficient for stabilising the wake than controlling the wake far downstream. The analysis is performed for the first flow bifurcation (at low Reynolds numbers), which serves as a foundation for the stabilisation technique, but the control strategy is tested at higher Reynolds numbers in the final section of the paper, showing enhanced stability for a turbulent flow case.

  6. Multiplex enrichment quantitative PCR (ME-qPCR): a high-throughput, highly sensitive detection method for GMO identification.

    Science.gov (United States)

    Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang

    2017-04-01

    Among all of the high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions. This detection throughput cannot meet the demands of high-throughput detection, such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, which showed that the sensitivity of ME-qPCR is higher than that of the original qPCR. The absolute limit of detection of ME-qPCR reached levels as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods. The amplification results, more stable than those of qPCR, showed that ME-qPCR is suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression of food or agricultural samples. Graphical abstract: For the first-step amplification, four primers (A, B, C, and D) have been added into the reaction volume. In this manner, four kinds of amplicons have been generated. All of these four amplicons could be regarded as the target of second-step PCR. 
For the second-step amplification, three parallels have been taken for

  7. BH3105 type neutron dose equivalent meter of high sensitivity

    International Nuclear Information System (INIS)

    Ji Changsong; Zhang Enshan; Yang Jianfeng; Zhang Hong; Huang Jiling

    1995-10-01

    It is noted that designing a neutron dose meter of high sensitivity is almost impossible within the frame of the traditional design principle, the 'absorption net principle'. Based on a newly proposed principle for obtaining neutron dose equi-biological effect adjustment, the 'absorption stick principle', a brand-new neutron dose-equivalent meter with high neutron sensitivity, BH3105, has been developed. Its sensitivity reaches 10 cps/(μSv·h⁻¹), which is 18–40 times higher than that of foreign products of the same kind and 10⁴ times higher than that of the domestic FJ342 neutron rem-meter. BH3105 has a measurement range from 0.1 μSv/h to 1 Sv/h, which is 1 or 2 orders of magnitude wider than that of comparable meters. It has advanced properties of gamma-resistance, energy response, orientation, etc. (6 tabs., 5 figs.)

  8. Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.

    Science.gov (United States)

    Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J

    2014-01-01

    A model to predict fatigue crack growth of API pipeline steels in high-pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work presents a sensitivity analysis of the model parameters in order to provide (a) insight into the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration for other API steels.
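
The paper's model and calibrated parameters are not reproduced in the abstract; as a stand-in, the same style of parameter sensitivity can be sketched on the classic Paris crack-growth law, da/dN = C·(ΔK)^m, using normalized (logarithmic) sensitivities estimated by central finite differences. All numeric values below are illustrative:

```python
import numpy as np

# Illustrative Paris-law parameters and stress-intensity range (not the
# calibrated values from the paper).
C, m, dK = 1e-11, 3.0, 20.0

def rate(params):
    C_, m_ = params
    return C_ * dK ** m_

def log_sensitivity(params, i, h=1e-6):
    # d ln(rate) / d ln(p_i), estimated by a central difference: a
    # dimensionless measure comparable across parameters of different units.
    up, dn = list(params), list(params)
    up[i] *= 1.0 + h
    dn[i] *= 1.0 - h
    return (np.log(rate(up)) - np.log(rate(dn))) / (np.log(up[i]) - np.log(dn[i]))

sens_C = log_sensitivity([C, m], 0)   # analytically 1: rate is linear in C
sens_m = log_sensitivity([C, m], 1)   # analytically m*ln(dK), about 9 here
```

The exponent m dominates: a 1% relative change in m moves the predicted growth rate roughly 9 times more than a 1% change in C, which is the kind of ranking a calibration effort would use to prioritize parameters.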

  9. Monte carlo calculation of energy-dependent response of high-sensitive neutron monitor, HISENS

    International Nuclear Information System (INIS)

    Imanaka, Tetsuji; Ebisawa, Tohru; Kobayashi, Keiji; Koide, Hiroaki; Seo, Takeshi; Kawano, Shinji

    1988-01-01

    A highly sensitive neutron monitor system, HISENS, has been developed to measure leakage neutrons from nuclear facilities. The counter system of HISENS contains a detector bank which consists of ten cylindrical proportional counters filled with 10 atm ³He gas and a paraffin moderator mounted in an aluminum case. The size of the detector bank is 56 cm high, 66 cm wide and 10 cm thick. A calibration experiment using a ²⁴¹Am-Be neutron source revealed that the sensitivity of HISENS is about 2000 times that of a typical commercial rem-counter. Since HISENS is designed to have a high sensitivity over a wide range of neutron energies, the shape of its energy-dependent response curve cannot be matched to that of the dose equivalent conversion factor. To estimate dose equivalent values from neutron counts by HISENS, it is necessary to know the energy and angular characteristics of both HISENS and the neutron field. The area of one side of the detector bank is 3700 cm² and the detection efficiency in the constant region of the response curve is about 30%. Thus, the sensitivity of HISENS in this energy range is 740 cps/(n/cm²/sec). This value indicates the extremely high sensitivity of HISENS compared with existing highly sensitive neutron monitors. (Nogami, K.)

  10. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    Science.gov (United States)

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, and hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. Comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
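
The weight-perturbation Monte Carlo idea can be sketched minimally for a weighted-sum (AHP-style) suitability model. The scores, base weights, noise level, and the rank-stability summary below are all invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical standardized criteria scores for 5 map units (rows) against
# 4 landslide factors (columns), plus AHP-style base weights (illustrative).
rng = np.random.default_rng(3)
scores = rng.uniform(0.0, 1.0, size=(5, 4))
w0 = np.array([0.40, 0.30, 0.20, 0.10])

def rank_stability(scores, w0, sigma, n=5000, seed=0):
    """Share of Monte Carlo draws in which each rank position is held by
    the same map unit as under the base weights."""
    rng = np.random.default_rng(seed)
    base = np.argsort(-(scores @ w0))
    hits = np.zeros(scores.shape[0])
    for _ in range(n):
        # Jitter the weights, keep them positive, renormalize to sum to 1.
        w = np.clip(w0 + rng.normal(0.0, sigma, w0.size), 1e-6, None)
        w /= w.sum()
        hits += (np.argsort(-(scores @ w)) == base)
    return hits / n

stable = rank_stability(scores, w0, sigma=0.05)
```

Rank positions whose stability stays near 1 are insensitive to the weight uncertainty; positions that flip often are where the susceptibility classification should be treated with caution.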

  11. CONSTRUCTION OF A DIFFERENTIAL ISOTHERMAL CALORIMETER OF HIGH SENSITIVITY AND LOW COST.

    OpenAIRE

    Trinca, RB; Perles, CE; Volpe, PLO

    2009-01-01

    The high cost of sensitive commercial calorimeters may represent an obstacle for many calorimetric research groups. This work describes the construction and calibration of a batch differential heat-conduction calorimeter with sample cell volumes of about 400 μL. The calorimeter was built using two small, high-sensitivity square Peltier thermoelectric sensors, and the total cost was estimated to be about...

  12. Quantum dot bio-conjugate: as a western blot probe for highly sensitive detection of cellular proteins

    Energy Technology Data Exchange (ETDEWEB)

    Kale, Sonia [Agharkar Research Institute (India); Kale, Anup [University of Alabama, Center for Materials for Information Technology (United States); Gholap, Haribhau; Rana, Abhimanyu [National Chemical Laboratory, Physical and Materials Chemistry Division (India); Desai, Rama [National Centre for Cell Science (India); Banpurkar, Arun [University of Pune, Department of Physics (India); Ogale, Satishchandra, E-mail: sb.ogale@ncl.res.in [National Chemical Laboratory, Physical and Materials Chemistry Division (India); Shastry, Padma, E-mail: padma@nccs.res.in [National Centre for Cell Science (India)

    2012-03-15

    In the present study, we report a quantum dot (QD)-tailored western blot analysis for sensitive, rapid and flexible detection of nuclear and cytoplasmic proteins. Highly luminescent CdTe and (CdTe)ZnS QDs are synthesized by an aqueous method. High-resolution transmission electron microscopy, Raman spectroscopy, Fourier transform infrared spectroscopy, fluorescence spectroscopy and X-ray diffraction are used to characterize the properties of the quantum dots. The QDs are functionalized with antibodies against prostate apoptosis response-4 (Par-4), poly(ADP-ribose) polymerases and β-actin to specifically bind the proteins localized in the nucleus and cytoplasm of the cells, respectively. The QD-conjugated antibodies are used to overcome the limitations of the conventional western blot technique. The sensitivity and rapidity of protein detection in the QD-based approach is very high, with detection limits down to 10 pg of protein. In addition, these labels provide the capability of enhanced identification and localization of marker proteins in intact cells by confocal laser scanning microscopy.

  13. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales, and allows one to derive variance estimates for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.
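
The general idea can be made concrete: compute a head-count poverty rate under several equivalence scales and attach a bootstrap standard error to each, so that scale variability and sampling variance are reported side by side. The synthetic survey data and the single-parameter scale family size**theta below are illustrative, not the paper's estimator or data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic household survey (illustrative): income and household size.
n = 2000
hh_size = rng.integers(1, 6, n)
income = rng.lognormal(10.0, 0.6, n)

def poverty_rate(income, hh_size, theta):
    """Head-count ratio at 60% of median equivalized income, where the
    equivalence scale is size**theta (theta = 0.5: square-root scale)."""
    eq = income / hh_size ** theta
    return np.mean(eq < 0.6 * np.median(eq))

# Sensitivity across equivalence scales, each with a bootstrap standard
# error so sampling variance is reported alongside scale variability.
results = {}
for theta in (0.5, 0.75, 1.0):
    reps = []
    for _ in range(200):
        idx = rng.integers(0, n, n)
        reps.append(poverty_rate(income[idx], hh_size[idx], theta))
    results[theta] = (poverty_rate(income, hh_size, theta), np.std(reps))
```

Comparing the point estimates across theta shows the scale sensitivity; comparing each gap to the bootstrap standard errors shows whether the scale choice matters beyond sampling noise.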

  14. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, using the replacement as well as the aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency scores of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.
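
The DEA building block behind such efficiency scores can be sketched as a linear program: the input-oriented CCR envelopment form, solved here with SciPy on invented single-input, single-output data (a real application would use the study's eight input-output sets for 30 universities):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for 4 hypothetical universities: one input (staff) and
# one output (publications). Input-oriented CCR DEA for unit o:
#   min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
X = np.array([[10.0, 20.0, 30.0, 50.0]])      # inputs, shape (m, n)
Y = np.array([[100.0, 150.0, 300.0, 320.0]])  # outputs, shape (s, n)

def ccr_efficiency(o):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])          # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y@lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun                             # optimal theta in (0, 1]

eff = [ccr_efficiency(o) for o in range(4)]
```

With one input and one output the scores reduce to each unit's output/input ratio divided by the best ratio, so units 1 and 3 (ratio 10) are efficient and the others score 0.75 and 0.64; a sensitivity analysis then repeats this under different variable sets.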

  15. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    International Nuclear Information System (INIS)

    Wan, C.; Cao, L.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    Sensitivity and uncertainty analysis are essential parts of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analysis for the responses of neutronics calculations have been implemented in the newly developed S&U analysis code named UNICORN. The UNICORN code can consider the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to a typical pin-cell case in this paper, and has been verified by comparing its results with those of the TSUNAMI-1D code. (author)

  16. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Wan, C.; Cao, L.; Wu, H.; Zu, T., E-mail: chenghuiwan@stu.xjtu.edu.cn, E-mail: caolz@mail.xjtu.edu.cn, E-mail: hongchun@mail.xjtu.edu.cn, E-mail: tiejun@mail.xjtu.edu.cn [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Shen, W., E-mail: Wei.Shen@cnsc-ccsn.gc.ca [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Canadian Nuclear Safety Commission, Ottawa, ON (Canada)

    2015-07-01

    Sensitivity and uncertainty analysis are essential parts of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analysis for the responses of neutronics calculations have been implemented in the newly developed S&U analysis code named UNICORN. The UNICORN code can consider the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to a typical pin-cell case in this paper, and has been verified by comparing its results with those of the TSUNAMI-1D code. (author)

  17. Effect of vitamin D supplementation on the level of circulating high-sensitivity C-reactive protein: a meta-analysis of randomized controlled trials.

    Science.gov (United States)

    Chen, Neng; Wan, Zhongxiao; Han, Shu-Fen; Li, Bing-Yan; Zhang, Zeng-Li; Qin, Li-Qiang

    2014-06-10

    Vitamin D might elicit protective effects against cardiovascular disease by decreasing the level of circulating high-sensitivity C-reactive protein (hs-CRP), an inflammatory marker. Thus, we conducted a meta-analysis of randomized controlled trials to evaluate the association of vitamin D supplementation with circulating hs-CRP level. A systematic literature search was conducted in September 2013 (updated in February 2014) via PubMed, Web of Science, and the Cochrane library to identify eligible studies. Either a fixed-effects or a random-effects model was used to calculate pooled effects. The results of the meta-analysis of 10 trials involving a total of 924 participants showed that vitamin D supplementation significantly decreased the circulating hs-CRP level by 1.08 mg/L (95% CI, -2.13, -0.03), with evidence of heterogeneity. Subgroup analysis suggested a higher reduction of 2.21 mg/L (95% CI, -3.50, -0.92) among participants with a baseline hs-CRP level ≥5 mg/L. Meta-regression analysis further revealed that baseline hs-CRP level, supplemental dose of vitamin D and intervention duration may together account for the heterogeneity across studies. In summary, vitamin D supplementation is beneficial for the reduction of circulating hs-CRP. However, the result should be interpreted with caution because of the evidence of heterogeneity.
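
The pooling machinery behind such a meta-analysis can be sketched with the DerSimonian-Laird random-effects estimator, the standard choice when heterogeneity is present. The per-trial effect sizes and standard errors below are invented for illustration, not the ten trials from the review:

```python
import numpy as np

# Illustrative per-trial mean differences in hs-CRP (mg/L) and their SEs.
effects = np.array([-2.5, -0.4, -1.8, 0.2, -1.2])
se = np.array([0.8, 0.5, 0.9, 0.6, 0.7])

# Fixed-effects (inverse-variance) pooled estimate.
w_fixed = 1.0 / se**2
fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
Q = np.sum(w_fixed * (effects - fixed) ** 2)
df = effects.size - 1
tau2 = max(0.0, (Q - df) /
           (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

# Random-effects pooling: tau^2 widens every trial's effective variance.
w_rand = 1.0 / (se**2 + tau2)
pooled = np.sum(w_rand * effects) / np.sum(w_rand)
se_pooled = np.sqrt(1.0 / np.sum(w_rand))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

When Q exceeds its degrees of freedom, tau² is positive and the random-effects weights flatten toward equality, which is why heterogeneous reviews like the one above report wider confidence intervals than a fixed-effects pooling would.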

  18. Sensitivity analysis of hybrid power systems using Power Pinch Analysis considering Feed-in Tariff

    International Nuclear Information System (INIS)

    Mohammad Rozali, Nor Erniza; Wan Alwi, Sharifah Rafidah; Manan, Zainuddin Abdul; Klemeš, Jiří Jaromír

    2016-01-01

    Feed-in Tariff (FiT) has been one of the most effective policies in accelerating the development of renewable energy (RE) projects. The amount of RE electricity in the FiT purchase agreement is an important decision that has to be made by RE project developers. They have to consider various crucial factors associated with RE system operation as well as its stochastic nature. The presented work aims to assess the sensitivity and profitability of a hybrid power system (HPS) in cases of RE system failure or shutdown. The amount of RE electricity for the FiT purchase agreement in various scenarios was determined using a novel tool called the On-Grid Problem Table, based on Power Pinch Analysis (PoPA). A sensitivity table has also been introduced to assist planners in evaluating the effects of an RE system failure on the profitability of the HPS. This table offers insight into the variance of the RE electricity. The sensitivity analysis of various possible scenarios shows that RE projects can still provide financial benefits via the FiT, despite the losses incurred from the penalty levied. - Highlights: • A Power Pinch Analysis (PoPA) tool to assess the economics of an HPS with FiT. • The new On-Grid Problem Table for targeting the available RE electricity for FiT sale. • A sensitivity table showing the effect of RE electricity changes on the HPS profitability.

  19. Reward and Punishment Sensitivity in Children with ADHD: Validating the Sensitivity to Punishment and Sensitivity to Reward Questionnaire for Children (SPSRQ-C)

    OpenAIRE

    Luman, Marjolein; van Meel, Catharina S.; Oosterlaan, Jaap; Geurts, Hilde M.

    2011-01-01

    This study validates the Sensitivity to Punishment and Sensitivity to Reward Questionnaire for children (SPSRQ-C), using a Dutch sample of 1234 children aged 6-13 years. Factor analysis determined that a 4-factor and a 5-factor solution were best fitting, explaining 41% and 50% of the variance respectively. The 4-factor model was highly similar to the original SPSRQ factors found in adults (Punishment Sensitivity, Reward Responsivity, Impulsivity/Fun-Seeking, and Drive). The 5-factor ...

  20. A highly sensitive and specific capacitive aptasensor for rapid and label-free trace analysis of Bisphenol A (BPA) in canned foods.

    Science.gov (United States)

    Mirzajani, Hadi; Cheng, Cheng; Wu, Jayne; Chen, Jiangang; Eda, Shigotoshi; Najafi Aghdam, Esmaeil; Badri Ghavifekr, Habib

    2017-03-15

    A rapid, highly sensitive, specific and low-cost capacitive affinity biosensor is presented here for label-free, single-step detection of Bisphenol A (BPA). The sensor design allows rapid prototyping at low cost from printed circuit board material using benchtop equipment. High-sensitivity detection is achieved through the use of a BPA-specific aptamer as the probe molecule and large electrodes that enhance the AC electrothermal effect for long-range transport of BPA molecules toward the electrode surface. A capacitive sensing technique is used to determine the bound BPA level by measuring the sample/electrode interfacial capacitance of the sensor. The developed biosensor can detect the BPA level in 20 s and exhibits a large linear range from 1 fM to 10 pM, with a limit of detection (LOD) of 152.93 aM. This biosensor was applied to test BPA in canned food samples and could successfully recover the levels of spiked BPA. This sensor technology is demonstrated to be highly promising and reliable for rapid, sensitive and on-site monitoring of BPA in food samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. B1-sensitivity analysis of quantitative magnetization transfer imaging.

    Science.gov (United States)

    Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce

    2018-01-01

    To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR; B1-independent) and variable flip angle (VFA; B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA, and -40% to >100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double-angle and nominal flip-angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that they depend substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
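
The mechanism by which a B1 error propagates into VFA T1 (and from there into B1-dependent qMT fits) can be sketched with the standard two-angle DESPOT1-style linear fit. The TR, T1, and flip angles below are illustrative, not the study's protocol, and M0 is set to 1:

```python
import numpy as np

TR, T1_true = 0.015, 1.0            # seconds (illustrative)
E1 = np.exp(-TR / T1_true)
nominal = np.deg2rad([3.0, 18.0])   # nominal flip angles

def vfa_signal(flip):
    # SPGR steady-state signal with M0 = 1.
    return (1.0 - E1) * np.sin(flip) / (1.0 - E1 * np.cos(flip))

def fit_t1(b1_scale):
    # Signals are produced by the *actual* flip angles (scaled by B1),
    # but the linear fit y = E1*x + M0*(1-E1) assumes the nominal ones.
    s = vfa_signal(b1_scale * nominal)
    y, x = s / np.sin(nominal), s / np.tan(nominal)
    slope = (y[1] - y[0]) / (x[1] - x[0])
    return -TR / np.log(slope)

t1_ok = fit_t1(1.0)     # recovers T1 exactly when B1 is correct
t1_bias = fit_t1(1.3)   # a +30% B1 error biases the fitted T1 substantially
```

In this sketch a +30% B1 error inflates the fitted T1 by roughly the square of the B1 scale factor, which is the kind of bias that then propagates into qMT parameters fitted with a B1-dependent T1 map.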

  2. A high sensitivity process variation sensor utilizing sub-threshold operation

    OpenAIRE

    Meterelliyoz, Mesut; Song, Peilin; Stellari, Franco; Kulkarni, Jaydeep P.; Roy, Kaushik

    2008-01-01

    In this paper, we propose a novel low-power, bias-free, high-sensitivity process variation sensor for monitoring random variations in the threshold voltage. The proposed sensor design utilizes the exponential current-voltage relationship of sub-threshold operation, thereby improving the sensitivity by 2.3X compared to above-threshold operation. A test-chip containing 128 PMOS and 128 NMOS devices has been fabri...

  3. Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone

    International Nuclear Information System (INIS)

    Hayes, F.; Jones, M.L.M.; Mills, G.; Ashmore, M.

    2007-01-01

    This study identified 83 species from existing publications suitable for inclusion in a database of the sensitivity of species to ozone (the OZOVEG database). An index, the relative sensitivity to ozone, was calculated for each species based on changes in biomass, in order to test for species traits associated with ozone sensitivity. Meta-analysis of the ozone sensitivity data showed a wide inter-specific range in response to ozone. Some relationships with plant physiological and ecological characteristics were identified. Plants of the therophyte lifeform were particularly sensitive to ozone. Species with higher mature leaf N concentration were more sensitive to ozone than those with lower leaf N concentration. Some relationships between relative sensitivity to ozone and Ellenberg habitat requirements were also identified. In contrast, no relationships between relative sensitivity to ozone and mature leaf P concentration, Grime's CSR strategy, leaf longevity, flowering season, stomatal density or maximum altitude were found. The relative sensitivity of species and the relationships with plant characteristics identified in this study could be used to predict the sensitivity to ozone of untested species and communities. - Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone showed some relationships with physiological and ecological characteristics

  4. A High Sensitivity IDC-Electronic Tongue Using Dielectric/Sensing Membranes with Solvatochromic Dyes

    Directory of Open Access Journals (Sweden)

    Md. Rajibur Rahaman Khan

    2016-05-01

    In this paper, an electronic tongue/taste sensor array containing different interdigitated capacitor (IDC) sensing elements to detect different types of tastes, such as sweetness (glucose), saltiness (NaCl), sourness (HCl), bitterness (quinine-HCl), and umami (monosodium glutamate), is proposed. We present for the first time an IDC electronic tongue using sensing membranes containing solvatochromic dyes. The proposed highly sensitive (30.64 mV/decade) IDC electronic tongue has fast response and recovery times of about 6 s and 5 s, respectively, with extremely stable responses, and is capable of linear sensing performance (correlation coefficient R² ≈ 0.985) over the wide dynamic range of 1 µM to 1 M. The designed IDC electronic tongue offers excellent reproducibility, with a relative standard deviation (RSD) of about 0.029. The proposed device was found to have better sensing performance than potentiometric-, cascoded compatible lateral bipolar transistor (C-CLBT)-, Electronic Tongue (SA402)-, and fiber-optic-based taste sensing systems in terms of dynamic range width, response time, sensitivity, and linearity. Finally, we applied principal component analysis (PCA) to distinguish between various kinds of taste in mixed taste compounds.
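The final PCA step described above can be sketched with plain NumPy. The sensor-array readings below are hypothetical (five IDC channels, two tastants, invented response levels and noise), not measurements from the paper; PCA is done via SVD of the mean-centered response matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sensor-array responses (mV): 10 repeated tastings each of two
# tastants, read on 5 IDC channels. Values are illustrative only.
sweet = rng.normal([30, 5, 8, 4, 6], 1.5, size=(10, 5))
salty = rng.normal([6, 28, 7, 5, 9], 1.5, size=(10, 5))
X = np.vstack([sweet, salty])

# PCA via SVD of the mean-centered response matrix
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # projection onto the first two components

# The two tastes separate cleanly along the first principal component
print(scores[:10, 0].mean(), scores[10:, 0].mean())
```

Plotting the two score columns against each other is the usual way such taste clusters are visualized.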

  5. Highly sensitive and selective cholesterol biosensor based on direct electron transfer of hemoglobin.

    Science.gov (United States)

    Zhao, Changzhi; Wan, Li; Jiang, Li; Wang, Qin; Jiao, Kui

    2008-12-01

    A cholesterol biosensor based on direct electron transfer of a hemoglobin-encapsulated chitosan-modified glassy carbon electrode has been developed for highly sensitive and selective analysis of serum samples. Modified with films containing hemoglobin and cholesterol oxidase, the electrode was prepared by encapsulating the enzyme in a chitosan matrix. The hydrogen peroxide produced by the catalytic oxidation of cholesterol by cholesterol oxidase was reduced electrocatalytically by the immobilized hemoglobin and used to obtain a sensitive amperometric response to cholesterol. The linear response to cholesterol concentrations ranged from 1.00 × 10⁻⁵ to 6.00 × 10⁻⁴ mol/L, with a correlation coefficient of 0.9969 and an estimated detection limit for cholesterol of 9.5 µmol/L at a signal/noise ratio of 3. The cholesterol biosensor can efficiently exclude interference by the commonly coexisting ascorbic acid, uric acid, dopamine, and epinephrine. The sensitivity, taken as the slope of the calibration curve, was 0.596 A/M. The relative standard deviation was under 4.0% (n=5) for the determination of real samples. The biosensor is satisfactory in the determination of human serum samples.
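As a worked illustration of how a detection limit at a signal/noise ratio of 3 is typically derived (LOD = 3σ_blank / slope), the sketch below fits a calibration line to hypothetical current-concentration data. The calibration points and the blank noise level are invented for illustration and are not figures from the paper; only the ~0.596 A/M slope is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration points (mol/L vs. A); slope chosen near the
# reported 0.596 A/M, noise level invented for illustration.
conc = np.array([1e-5, 5e-5, 1e-4, 2e-4, 4e-4, 6e-4])
current = 0.596 * conc + rng.normal(scale=1e-7, size=conc.size)

# Sensitivity = slope of the calibration curve (A/M)
slope, intercept = np.polyfit(conc, current, 1)

# Detection limit at signal/noise = 3: LOD = 3 * sigma_blank / slope
sigma_blank = 1.9e-6            # assumed std. dev. of the blank signal (A)
lod = 3 * sigma_blank / slope
print(f"sensitivity ≈ {slope:.3f} A/M, LOD ≈ {lod * 1e6:.1f} µmol/L")
```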

  6. Code development for eigenvalue total sensitivity analysis and total uncertainty analysis

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Zu, Tiejun; Shen, Wei

    2015-01-01

    Highlights: • We develop a new code for total sensitivity and uncertainty analysis. • The implicit effects of cross sections can be considered. • The results of our code agree well with TSUNAMI-1D. • Detailed analysis of the origins of implicit effects is performed. - Abstract: The uncertainties of multigroup cross sections notably impact the eigenvalue of the neutron-transport equation. We report on a total sensitivity analysis and total uncertainty analysis code named UNICORN that has been developed by applying the direct numerical perturbation method and the statistical sampling method. In order to consider the contributions of various basic cross sections and the implicit effects, which are indirect results of multigroup cross sections through the resonance self-shielding calculation, an improved multigroup cross-section perturbation model is developed. The DRAGON 4.0 code, with application of the WIMSD-4 format library, is used by UNICORN to carry out the resonance self-shielding and neutron-transport calculations. In addition, the bootstrap technique has been applied to the statistical sampling method in UNICORN to obtain steadier and more reliable uncertainty results. The UNICORN code has been verified against TSUNAMI-1D by analyzing the TMI-1 pin-cell case. The numerical results show that the total uncertainty of the eigenvalue caused by cross sections can reach about 0.72%. Therefore the contributions of the basic cross sections and their implicit effects are not negligible
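The bootstrap step mentioned above (resampling the statistically sampled results to judge how stable the uncertainty estimate itself is) can be sketched as follows. The eigenvalue samples are synthetic stand-ins drawn to mimic a ~0.72% relative spread, not UNICORN output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the statistical-sampling step: eigenvalues obtained from
# randomly perturbed cross sections (synthetic numbers, not UNICORN output).
k_eff = rng.normal(loc=1.000, scale=0.0072, size=200)

# Plain estimate of the relative uncertainty of the eigenvalue
rel_unc = k_eff.std(ddof=1) / k_eff.mean()

# Bootstrap: resample the sampled eigenvalues with replacement to obtain
# a confidence interval on the uncertainty estimate itself.
boot = np.array([
    s.std(ddof=1) / s.mean()
    for s in (rng.choice(k_eff, size=k_eff.size) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"relative uncertainty ≈ {rel_unc:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```

A wide bootstrap interval signals that more statistical samples are needed before the quoted uncertainty can be trusted.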

  7. Sensitivity to apomorphine-induced yawning and hypothermia in rats eating standard or high-fat chow.

    Science.gov (United States)

    Baladi, Michelle G; Thomas, Yvonne M; France, Charles P

    2012-07-01

    Feeding conditions modify sensitivity to indirect- and direct-acting dopamine receptor agonists as well as the development of sensitization to these drugs. This study examined whether feeding condition affects acute sensitivity to apomorphine-induced yawning or changes in sensitivity that occur over repeated drug administration. Quinpirole-induced yawning was also evaluated to see whether sensitization to apomorphine confers cross-sensitization to quinpirole. Drug-induced yawning was measured in different groups of male Sprague Dawley rats (n = 6/group) eating high (34.3%) fat or standard (5.7% fat) chow. Five weeks of eating high-fat chow rendered otherwise drug-naïve rats more sensitive to apomorphine- (0.01-1.0 mg/kg, i.p.) and quinpirole- (0.0032-0.32 mg/kg, i.p.) induced yawning, compared with rats eating standard chow. In other rats, tested weekly with apomorphine, sensitivity to apomorphine-induced yawning increased (sensitization) similarly in rats with free access to standard or high-fat chow; conditioning to the testing environment appeared to contribute to increased yawning in both groups of rats. Food restriction decreased sensitivity to apomorphine-induced yawning across five weekly tests. Rats with free access to standard or high-fat chow and sensitized to apomorphine were cross-sensitized to quinpirole-induced yawning. The hypothermic effects of apomorphine and quinpirole were not different regardless of drug history or feeding condition. Eating high-fat chow or restricting access to food alters sensitivity to direct-acting dopamine receptor agonists (apomorphine, quinpirole), although the relative contribution of drug history and dietary conditions to sensitivity changes appears to vary among agonists.

  8. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM

  9. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.

  10. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations to variations in the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's LPτ sequences, an adaptive Monte Carlo algorithm, and the plain Monte Carlo algorithm, as well as the eFAST and Sobol sensitivity approaches, both implemented in SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
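For readers unfamiliar with Sobol indices, a minimal Monte Carlo estimator can be sketched on a toy model with a known answer. This uses Saltelli's pick-freeze scheme with plain pseudo-random sampling, not the paper's MCA-MSS algorithm, and the three-parameter linear "model" is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in: output driven mostly by the first "reaction rate".
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2]

n, d = 200_000, 3
A = rng.uniform(size=(n, d))      # base Monte Carlo sample
B = rng.uniform(size=(n, d))      # independent second sample
fA, fB = model(A), model(B)
var = fA.var()

# First-order Sobol indices via the pick-freeze estimator:
# S_i ≈ mean(f(B) * (f(A with column i from B) - f(A))) / Var(f)
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (model(ABi) - fA)) / var)

# Exact values for this linear model: S_i = a_i^2 / sum(a_k^2)
print([round(s, 3) for s in S])   # close to [0.928, 0.058, 0.014]
```

The indices sum to one here because the model has no interaction terms; the gap between first-order and total indices is what reveals higher-order interactions.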

  11. Nordic reference study on uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Hirschberg, S.; Jacobsson, P.; Pulkkinen, U.; Porn, K.

    1989-01-01

    This paper provides a review of the first phase of the Nordic reference study on uncertainty and sensitivity analysis. The main objective of this study is to use experience from previous Nordic Benchmark Exercises and reference studies concerning critical modeling issues, such as common cause failures and human interactions, and to demonstrate the impact of the associated uncertainties on the uncertainty of the investigated accident sequence. This has been done independently by three working groups which used different approaches to modeling and to uncertainty analysis. The estimated uncertainty interval for the analyzed accident sequence is large. The discrepancies between the groups are also substantial, but can be explained. Sensitivity analyses which have been carried out concern, e.g., the use of different CCF-quantification models, alternative handling of CCF data, time windows for operator actions, time dependences in phased mission operation, the impact of state-of-knowledge dependences, and the ranking of dominating uncertainty contributors. Specific findings with respect to these issues are summarized in the paper.

  12. Guided-Mode-Leaky-Mode-Guided-Mode Fiber Interferometer and Its High Sensitivity Refractive Index Sensing Technology

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2016-06-01

    A cascaded symmetrical dual-taper Mach-Zehnder interferometer structure based on guided-mode and leaky-mode interference is proposed in this paper. Firstly, the interference spectrum characteristics of the interferometer have been analyzed by the Finite-Difference Beam Propagation Method (FD-BPM). When the taper-waist diameter is 20 μm–30 μm, the dual-taper length is 1 mm and the taper distance is 4 cm–6 cm, the spectral contrast is higher, which is suitable for sensing. Secondly, experimental research on refractive index sensitivity was carried out. A refractive index sensitivity of 62.78 nm/RIU (refractive index unit) can be achieved in the RI range of 1.3333–1.3792 (0%–25% NaCl solution) when the sensor structure parameters meet the following conditions: the taper-waist diameter is 24 μm, the dual-taper length is 837 μm and the taper distance is 5.5 cm. The spectral contrast is 0.8 and the measurement resolution is 1.6 × 10−5 RIU. The simulation analysis is highly consistent with the experimental results. Research shows that the sensor has promising application in low-RI fields where high-precision measurement is required, due to its high sensitivity and stability.

  13. Eating high-fat chow enhances sensitization to the effects of methamphetamine on locomotion in rats.

    Science.gov (United States)

    McGuire, Blaine A; Baladi, Michelle G; France, Charles P

    2011-05-11

    Eating high-fat chow can modify the effects of drugs acting directly or indirectly on dopamine systems, and repeated intermittent drug administration can markedly increase sensitivity (i.e., sensitization) to the behavioral effects of indirect-acting dopamine receptor agonists (e.g., methamphetamine). This study examined whether eating high-fat chow alters the sensitivity of male Sprague Dawley rats to the locomotor-stimulating effects of acute or repeated administration of methamphetamine. The acute effects of methamphetamine on locomotion were not different between rats (n=6/group) eating high-fat or standard chow for 1 or 4 weeks. Sensitivity to the effects of methamphetamine (0.1-10 mg/kg, i.p.) increased progressively across 4 once-per-week tests; this sensitization developed more rapidly and to a greater extent in rats eating high-fat chow as compared with rats eating standard chow. Thus, while eating high-fat chow does not appear to alter sensitivity of rats to acutely administered methamphetamine, it significantly increases the sensitization that develops to repeated intermittent administration of methamphetamine. These data suggest that eating certain foods influences the development of sensitization to drugs acting on dopamine systems. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...
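A minimal numerical illustration of this kind of analysis: perturb one coefficient of the constraint matrix of a tiny LP and re-solve, watching how the optimal value responds. The LP below is invented for illustration and is solved by brute-force vertex enumeration rather than a production solver; within the critical range the optimal basis stays fixed, so the optimal value varies smoothly with the coefficient.

```python
import itertools
import numpy as np

def solve_lp(a):
    """Maximize 3x + 2y s.t. x + y <= 4, a*x + y <= 6, x >= 0, y >= 0,
    by brute-force vertex enumeration (fine for a 2-variable sketch)."""
    A = np.array([[1.0, 1.0], [a, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([4.0, 6.0, 0.0, 0.0])
    best = -np.inf
    for i, j in itertools.combinations(range(4), 2):
        try:
            v = np.linalg.solve(A[[i, j]], b[[i, j]])   # candidate vertex
        except np.linalg.LinAlgError:
            continue                                    # parallel constraints
        if np.all(A @ v <= b + 1e-9):                   # keep feasible vertices
            best = max(best, 3.0 * v[0] + 2.0 * v[1])
    return best

# One-at-a-time perturbation of the constraint-matrix coefficient `a`
for a in (1.8, 1.9, 2.0, 2.1, 2.2):
    print(f"a = {a:.1f}: optimal value = {solve_lp(a):.3f}")
```

Tightening the coefficient (larger `a`) shrinks the feasible region, so the optimal value decreases monotonically over this range.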

  15. Global sensitivity analysis of Alkali-Surfactant-Polymer enhanced oil recovery processes

    Energy Technology Data Exchange (ETDEWEB)

    Carrero, Enrique; Queipo, Nestor V.; Pintos, Salvador; Zerpa, Luis E. [Applied Computing Institute, Faculty of Engineering, University of Zulia, Zulia (Venezuela)

    2007-08-15

    After conventional waterflooding processes, the residual oil in the reservoir remains as a discontinuous phase in the form of oil drops trapped by capillary forces and is likely to be around 70% of the original oil in place (OOIP). The EOR method known as Alkaline-Surfactant-Polymer (ASP) flooding has been proved effective in reducing the residual oil saturation in laboratory experiments and field projects through reduction of the interfacial tension and the mobility ratio between oil and water phases. A critical step for the optimal design and control of ASP recovery processes is to find the relative contributions of design variables, such as slug size and chemical concentrations, to the variability of given performance measures (e.g., net present value, cumulative oil recovery), considering a heterogeneous and multiphase petroleum reservoir (sensitivity analysis). Previously reported works using reservoir numerical simulation have been limited to local sensitivity analyses, because a global sensitivity analysis may require hundreds or even thousands of computationally expensive evaluations (field-scale numerical simulations). To overcome this issue, a surrogate-based approach is suggested. Surrogate-based analysis/optimization refers to the idea of constructing an alternative fast model (surrogate) from numerical simulation data and using it for analysis/optimization purposes. This paper presents an efficient global sensitivity approach based on Sobol's method and multiple surrogates (i.e., Polynomial Regression, Kriging, Radial Basis Functions and a Weighted Adaptive Model), with the multiple surrogates used to address the uncertainty in the analysis derived from plausible alternative surrogate-modeling schemes. The proposed approach was evaluated in the context of the global sensitivity analysis of a field-scale Alkali-Surfactant-Polymer flooding process. The design variables and the performance measure in the ASP process were selected as slug size

  16. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    Science.gov (United States)

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
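A minimal sketch of the bootstrap-within-Monte-Carlo idea, using invented numbers rather than the H. pylori model's actual costs or trial data: each simulation draw takes the eradication rate from a bootstrap resample of hypothetical patient-level outcomes instead of from a fitted theoretical distribution, which is the difficulty the bootstrap avoids.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical patient-level trial data: 1 = H. pylori eradicated, 0 = not.
# (Invented for illustration; not data from the cited study.)
outcomes = rng.binomial(1, 0.85, size=120)

cost_treatment = 200.0      # assumed cost of the eradication regimen
cost_failure = 900.0        # assumed downstream cost when eradication fails

# Probabilistic sensitivity analysis: each Monte Carlo iteration draws the
# eradication rate from a bootstrap resample of the trial data, avoiding
# the choice of a theoretical distribution for this parameter.
costs = np.array([
    cost_treatment
    + (1.0 - rng.choice(outcomes, size=outcomes.size).mean()) * cost_failure
    for _ in range(5000)
])
print(f"expected cost ≈ {costs.mean():.0f}, "
      f"95% interval [{np.percentile(costs, 2.5):.0f}, "
      f"{np.percentile(costs, 97.5):.0f}]")
```

The resulting interval on expected cost is the probabilistic analogue of the single best/worst-case numbers a deterministic sensitivity analysis would report.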

  17. Sensitivity Analysis of features in tolerancing based on constraint function level sets

    International Nuclear Information System (INIS)

    Ziegler, Philipp; Wartzack, Sandro

    2015-01-01

    Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on the functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis methods are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In the adoption of Sensitivity Analysis methods, there are several challenges which arise from the specific framework in tolerancing. This paper presents an approach to adopt Sensitivity Analysis methods in current tolerance simulations with an interface module, which is based on level sets of constraint functions for parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use

  18. Global sensitivity analysis of water age and temperature for informing salmonid disease management

    Science.gov (United States)

    Javaheri, Amir; Babbar-Sebens, Meghna; Alexander, Julie; Bartholomew, Jerri; Hallett, Sascha

    2018-06-01

    Many rivers in the Pacific Northwest region of North America are anthropogenically manipulated via dam operations, leading to system-wide impacts on hydrodynamic conditions and aquatic communities. Understanding how dam operations alter abiotic and biotic variables is important for designing management actions. For example, in the Klamath River, dam outflows could be manipulated to alter water age and temperature to reduce risk of parasite infections in salmon by diluting or altering viability of parasite spores. However, sensitivity of water age and temperature to the riverine conditions such as bathymetry can affect outcomes from dam operations. To examine this issue in detail, we conducted a global sensitivity analysis of water age and temperature to a comprehensive set of hydraulics and meteorological parameters in the Klamath River, California, where management of salmonid disease is a high priority. We applied an analysis technique, which combined Latin-hypercube and one-at-a-time sampling methods, and included simulation runs with the hydrodynamic numerical model of the Lower Klamath. We found that flow rate and bottom roughness were the two most important parameters that influence water age. Water temperature was more sensitive to inflow temperature, air temperature, solar radiation, wind speed, flow rate, and wet bulb temperature respectively. Our results are relevant for managers because they provide a framework for predicting how water within 'high infection risk' sections of the river will respond to dam water (low infection risk) input. Moreover, these data will be useful for prioritizing the use of water age (dilution) versus temperature (spore viability) under certain contexts when considering flow manipulation as a method to reduce risk of infection and disease in Klamath River salmon.
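The combined sampling scheme described above (Latin-hypercube base points plus one-at-a-time perturbations) can be sketched as follows. The parameter names and ranges are invented placeholders, not the study's actual Lower Klamath model inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, d):
    """n stratified samples in [0, 1]^d: each dimension gets exactly one
    point per stratum, jittered within the stratum and then shuffled."""
    strata = (np.arange(n) + rng.uniform(size=(d, n))) / n
    return np.array([rng.permutation(row) for row in strata]).T

# Invented parameter ranges (flow rate, bottom roughness, air temperature);
# placeholders only, not the study's calibrated values.
lows = np.array([100.0, 0.01, 5.0])
highs = np.array([400.0, 0.05, 25.0])
base_points = lows + latin_hypercube(20, 3) * (highs - lows)

# One-at-a-time step: around each base point, perturb one parameter alone
# so its individual effect on the model output can be isolated.
delta = 0.05 * (highs - lows)
oat_points = [p + delta[i] * np.eye(3)[i] for p in base_points for i in range(3)]
print(len(base_points), "base points,", len(oat_points), "OAT runs")
```

Each base point plus its three perturbed neighbors would then be fed to the hydrodynamic model, and the output differences attributed to the single parameter changed.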

  19. High sensitivity amplifier/discriminator for PWC's

    International Nuclear Information System (INIS)

    Hansen, S.

    1983-01-01

    The facility support group at Fermilab is designing and building a general-purpose beam chamber for use in several locations at the laboratory. This PWC has 128 wires per plane, spaced 1 mm apart. An initial production of 25 signal planes is anticipated. In proportional chambers, the size of the signal depends exponentially on the charge stored per unit length along the anode wire. As the wire spacing decreases, the capacitance per unit length decreases, thereby requiring increased applied voltage to restore the necessary charge per unit length. In practical terms, this phenomenon is responsible for the difficulties in constructing chambers with less than 2 mm wire spacing. Chambers with 1 mm spacing, therefore, are frequently operated very near their breakdown point and/or with a high-gain gas containing organic compounds, such as magic gas. This argon/iso-butane mixture has three drawbacks: it is explosive when exposed to air, it leaves a residue on the wires after extended use, and it is costly. An amplifier with higher sensitivity would reduce the problems associated with operating chambers with small wire spacings and allow them to be run a safe margin below their breakdown voltage even with an inorganic gas mixture such as argon/CO2, thus eliminating the need to use magic gas. Described here is a low-cost amplifier with a usable threshold of less than 0.5 μA. Data on the performance of this amplifier/discriminator in operation on a prototype beam chamber are given. These data show the advantages of the high sensitivity of this design

  20. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with the selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include the wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  1. A highly sensitive and specific assay for vertebrate collagenase

    International Nuclear Information System (INIS)

    Sodek, J.; Hurum, S.; Feng, J.

    1981-01-01

    A highly sensitive and specific assay for vertebrate collagenase has been developed using a [14C]-labeled collagen substrate and a combination of SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) and fluorography to identify and quantitate the digestion products. The assay was sufficiently sensitive to permit the detection and quantitation of collagenase activity in 0.1 μl of gingival sulcal fluid, and in samples of cell culture medium without prior concentration. The assay has also been used to detect the presence of inhibitors of collagenolytic enzymes in various cell culture fluids. (author)

  2. High Sensitivity, Wearable, Piezoresistive Pressure Sensors Based on Irregular Microhump Structures and Its Applications in Body Motion Sensing.

    Science.gov (United States)

    Wang, Zongrong; Wang, Shan; Zeng, Jifang; Ren, Xiaochen; Chee, Adrian J Y; Yiu, Billy Y S; Chung, Wai Choi; Yang, Yong; Yu, Alfred C H; Roberts, Robert C; Tsang, Anderson C O; Chow, Kwok Wing; Chan, Paddy K L

    2016-07-01

    A pressure sensor based on irregular microhump patterns has been proposed and developed. The devices show high sensitivity and a broad operating pressure regime compared with regular micropattern devices. Finite element analysis (FEA) is utilized to confirm the sensing mechanism and predict the performance of the pressure sensor based on the microhump structures. Silicon carbide sandpaper is employed as the mold to develop polydimethylsiloxane (PDMS) microhump patterns of various sizes. The active layer of the piezoresistive pressure sensor is developed by spin coating PSS on top of the patterned PDMS. The devices show an average sensitivity as high as 851 kPa(-1), a broad operating pressure range (20 kPa), low operating power (100 nW), and fast response speed (6.7 kHz). Owing to their flexible properties, the devices are applied to human body motion sensing and radial artery pulse measurement. These flexible high-sensitivity devices show great potential in the next generation of smart sensors for robotics, real-time health monitoring, and biomedical applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Sensitivity Study on Analysis of Reactor Containment Response to LOCA

    International Nuclear Information System (INIS)

    Chung, Ku Young; Sung, Key Yong

    2010-01-01

    As the reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high-pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to obtain the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to provide regulatory insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, an LBLOCA in the Kori 3 and 4 nuclear power plant (NPP) is analyzed with the CONTEMPT-LT computer code.

  4. Sensitivity Study on Analysis of Reactor Containment Response to LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ku Young; Sung, Key Yong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2010-10-15

    As the reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high-pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to obtain the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to provide regulatory insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, an LBLOCA in the Kori 3 and 4 nuclear power plant (NPP) is analyzed with the CONTEMPT-LT computer code.

  5. A performance test of a new high-surface-quality and high-sensitivity CR-39 plastic nuclear track detector – TechnoTrak

    Energy Technology Data Exchange (ETDEWEB)

    Kodaira, S., E-mail: kodaira.satoshi@qst.go.jp [Radiation Measurement Research Team, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba (Japan); Morishige, K. [Research Institute for Science and Engineering, Waseda University, Tokyo (Japan); Kawashima, H.; Kitamura, H.; Kurano, M. [Radiation Measurement Research Team, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba (Japan); Hasebe, N. [Research Institute for Science and Engineering, Waseda University, Tokyo (Japan); Koguchi, Y.; Shinozaki, W. [Oarai Research Center, Chiyoda Technol Corporation, Ibaraki (Japan); Ogura, K. [College of Industrial Technology, Nihon University, Chiba (Japan)

    2016-09-15

    We have studied the performance of a newly-commercialized CR-39 plastic nuclear track detector (PNTD), “TechnoTrak”, in energetic heavy ion measurements. The advantages of TechnoTrak are derived from its use of a purified CR-39 monomer to improve surface quality combined with an antioxidant to improve sensitivity to low-linear-energy-transfer (LET) particles. We irradiated these detectors with various heavy ions (from protons to krypton) with various energies (30–500 MeV/u) at the heavy ion accelerator facilities in the National Institute of Radiological Sciences (NIRS). The surface roughness after chemical etching was improved to be 59% of that of the conventional high-sensitivity CR-39 detector (HARZLAS/TD-1). The detectable dynamic range of LET was found to be 3.5–600 keV/μm. The LET and charge resolutions for three ions tested ranged from 5.1% to 1.5% and 0.14 to 0.22 c.u. (charge unit), respectively, in the LET range of 17–230 keV/μm, which represents an improvement over conventional products (HARZLAS/TD-1 and BARYOTRAK). A correction factor for the angular dependence was determined for correcting the LET spectrum in an isotropic radiation field. We have demonstrated the potential of TechnoTrak, with its two key features of high surface quality and high sensitivity to low-LET particles, to improve automatic analysis protocols in radiation dosimetry and various other radiological applications.

  6. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and handling incomplete data in effectiveness studies is now widely emphasized. However, most publications focus on randomized clinical trials (RCTs). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful for improving the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
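
    The abstract gives no implementation details, but the general idea of probing robustness to MNAR departures can be sketched with a simple delta-adjustment multiple imputation, a more basic device than the posterior predictive checking the authors use. All data, model choices, and shift values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative longitudinal data: baseline score x, follow-up score y.
n = 500
x = rng.normal(60, 10, n)
y = 0.8 * x + rng.normal(0, 5, n)

# MAR dropout: the probability of a missing follow-up depends only on
# the observed baseline score.
p_miss = 0.5 / (1 + np.exp(-(x - 60) / 10))
miss = rng.random(n) < p_miss

def pooled_mean(x, y, miss, delta, m=20):
    """Multiple imputation of y with a delta adjustment: delta = 0
    corresponds to MAR; a nonzero delta shifts every imputed value to
    model an MNAR departure (a 'worst reasonable case' scenario)."""
    obs = ~miss
    b1, b0 = np.polyfit(x[obs], y[obs], 1)          # imputation model
    resid_sd = np.std(y[obs] - (b0 + b1 * x[obs]))
    estimates = []
    for _ in range(m):
        y_imp = y.copy()
        draws = b0 + b1 * x[miss] + rng.normal(0, resid_sd, miss.sum())
        y_imp[miss] = draws + delta                 # MNAR shift
        estimates.append(y_imp.mean())
    return float(np.mean(estimates))                # pooled point estimate

# The further delta departs from 0, the more the pooled estimate moves,
# quantifying the bias a given MNAR scenario would introduce:
for delta in (0.0, -2.0, -5.0):
    print(delta, round(pooled_mean(x, y, miss, delta), 2))
```

    Comparing the delta = 0 estimate with progressively more pessimistic shifts mirrors purpose (1) above: bounding the bias under a worst reasonable case.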

  7. [Tourism function zoning of Jinyintan Grassland Scenic Area in Qinghai Province based on ecological sensitivity analysis].

    Science.gov (United States)

    Zhong, Lin-sheng; Tang, Cheng-cai; Guo, Hua

    2010-07-01

    Based on statistical data on the natural ecology and social economy of the Jinyintan Grassland Scenic Area in Qinghai Province in 2008, an evaluation index system for the ecological sensitivity of this area was established from the aspects of protected area rank, vegetation type, slope, and land use type. The ecological sensitivity of the sub-areas with higher tourism value and ecological function was evaluated, and tourism function zoning of these sub-areas was carried out with GIS technology, according to the analysis of the eco-environmental characteristics and ecological sensitivity of each sensitive sub-area. It was suggested that the Jinyintan Grassland Scenic Area could be divided into three ecological sensitivity sub-areas (high, moderate, and low), three tourism function zones (restricted-development ecotourism, moderate-development ecotourism, and mass tourism), and six tourism function sub-zones (wetland protection, primitive ecological sightseeing, agriculture and pasture tourism, grassland tourism, town tourism, and rural tourism).
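
    The abstract names the four evaluation aspects but not the index values or weights actually used; a weighted-sum sensitivity score of the kind such zoning studies typically compute can be sketched as follows, with purely hypothetical weights, ratings, and class thresholds:

```python
# Hypothetical factor weights; the paper's actual values are not given
# in the abstract.
WEIGHTS = {"protected_area_rank": 0.35, "vegetation_type": 0.25,
           "slope": 0.20, "land_use_type": 0.20}

def sensitivity_score(ratings: dict) -> float:
    """Weighted sum of factor ratings (each rated 1 = low .. 3 = high)."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

def classify(score: float) -> str:
    """Map a score onto the three sensitivity classes (thresholds assumed)."""
    if score >= 2.4:
        return "high"      # e.g. candidate for restricted-development ecotourism
    if score >= 1.7:
        return "moderate"  # moderate-development ecotourism
    return "low"           # mass tourism

wetland = {"protected_area_rank": 3, "vegetation_type": 3,
           "slope": 2, "land_use_type": 3}
print(classify(sensitivity_score(wetland)))  # -> high
```

    In practice each raster cell of the GIS layer would receive such a score, and contiguous cells of one class would form the zoning sub-areas.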

  8. Resting serum concentration of high-sensitivity C-reactive protein ...

    African Journals Online (AJOL)

    Resting serum concentration of high-sensitivity C-reactive protein (hs-CRP) in sportsmen and untrained male adults. F.A. Niyi-Odumosu, O. A. Bello, S.A. Biliaminu, B.V. Owoyele, T.O. Abu, O.L. Dominic ...

  9. ZnO nanorod biosensor for highly sensitive detection of specific protein binding

    International Nuclear Information System (INIS)

    Kim, Jin Suk; Park, Won Il; Lee, Chul Ho; Yi, Gyu Chul

    2006-01-01

    We report on the fabrication of electrical biosensors based on functionalized ZnO nanorod surfaces with biotin for highly sensitive detection of biological molecules. Due to the clean interface and easy surface modification, the ZnO nanorod sensors can easily detect streptavidin binding down to a concentration of 25 nM, which is more sensitive than previously reported one-dimensional (1D) nanostructure electrical biosensors. In addition, the unique device structure with a micrometer-scale hole at the center of the ZnO nanorod's conducting channel reduces the leakage current from the aqueous solution, hence enhancing device sensitivity. Moreover, ZnO nanorod field-effect-transistor (FET) sensors may open up opportunities to create many other oxide nanorod electrical sensors for highly sensitive and selective real-time detection of a wide variety of biomolecules.

  10. High Throughput Measurement of Locomotor Sensitization to Volatilized Cocaine in Drosophila melanogaster.

    Science.gov (United States)

    Filošević, Ana; Al-Samarai, Sabina; Andretić Waldowski, Rozi

    2018-01-01

    Drosophila melanogaster can be used to identify genes with novel functional roles in neuronal plasticity induced by repeated consumption of addictive drugs. Behavioral sensitization is a relatively simple behavioral output of plastic changes that occur in the brain after repeated exposures to drugs of abuse. The development of screening procedures for genes that control behavioral sensitization has stalled due to a lack of high-throughput behavioral tests that can be used in genetically tractable organisms, such as Drosophila. We have developed a new behavioral test, FlyBong, which combines delivery of volatilized cocaine (vCOC) to individually housed flies with objective quantification of their locomotor activity. There are two main advantages of FlyBong: it is high-throughput and it allows for comparisons of locomotor activity of individual flies before and after single or multiple exposures. At the population level, exposure to vCOC leads to a transient and concentration-dependent increase in locomotor activity, representing sensitivity to an acute dose. A second exposure leads to a further increase in locomotion, representing locomotor sensitization. We validate FlyBong by showing that locomotor sensitization at either the population or individual level is absent in the mutants for circadian genes period (per), Clock (Clk), and cycle (cyc). The locomotor sensitization that is present in timeless (tim) and pigment dispersing factor (pdf) mutant flies is in large part not cocaine specific, but derived from increased sensitivity to warm air. Circadian genes are not only an integral part of the neural mechanism that is required for development of locomotor sensitization, but in addition, they modulate the intensity of locomotor sensitization as a function of the time of day. Motor-activating effects of cocaine are sexually dimorphic and require a functional dopaminergic transporter. FlyBong is a new and improved method for inducing and measuring locomotor sensitization.

  11. High Throughput Measurement of Locomotor Sensitization to Volatilized Cocaine in Drosophila melanogaster

    Directory of Open Access Journals (Sweden)

    Ana Filošević

    2018-02-01

    Full Text Available Drosophila melanogaster can be used to identify genes with novel functional roles in neuronal plasticity induced by repeated consumption of addictive drugs. Behavioral sensitization is a relatively simple behavioral output of plastic changes that occur in the brain after repeated exposures to drugs of abuse. The development of screening procedures for genes that control behavioral sensitization has stalled due to a lack of high-throughput behavioral tests that can be used in genetically tractable organisms, such as Drosophila. We have developed a new behavioral test, FlyBong, which combines delivery of volatilized cocaine (vCOC) to individually housed flies with objective quantification of their locomotor activity. There are two main advantages of FlyBong: it is high-throughput and it allows for comparisons of locomotor activity of individual flies before and after single or multiple exposures. At the population level, exposure to vCOC leads to a transient and concentration-dependent increase in locomotor activity, representing sensitivity to an acute dose. A second exposure leads to a further increase in locomotion, representing locomotor sensitization. We validate FlyBong by showing that locomotor sensitization at either the population or individual level is absent in the mutants for circadian genes period (per), Clock (Clk), and cycle (cyc). The locomotor sensitization that is present in timeless (tim) and pigment dispersing factor (pdf) mutant flies is in large part not cocaine specific, but derived from increased sensitivity to warm air. Circadian genes are not only an integral part of the neural mechanism that is required for development of locomotor sensitization, but in addition, they modulate the intensity of locomotor sensitization as a function of the time of day. Motor-activating effects of cocaine are sexually dimorphic and require a functional dopaminergic transporter. FlyBong is a new and improved method for inducing and measuring locomotor sensitization.

  12. Sensitivity analysis overlaps of friction elements in cartridge seals

    Directory of Open Access Journals (Sweden)

    Žmindák Milan

    2018-01-01

    Full Text Available Cartridge seals are self-contained units consisting of a shaft sleeve, seals, and a gland plate. Mechanical seals have numerous applications; the most common example is in bearing production for the automobile industry. This paper deals with the sensitivity analysis of overlaps of friction elements in a cartridge seal and their influence on the sealing friction torque and compressive force. Furthermore, it describes materials for the manufacture of sealings, approaches usually used for the solution of hyperelastic materials by FEM, and gives a short introduction to the topic of wheel bearings. The practical part presents one approach for measuring friction torque, whose results were used to specify the methodology and precision of the FEM calculation realized in ANSYS WORKBENCH. This part also contains the sensitivity analysis of the overlaps of friction elements.

  13. High-resolution high-sensitivity elemental imaging by secondary ion mass spectrometry: from traditional 2D and 3D imaging to correlative microscopy

    International Nuclear Information System (INIS)

    Wirtz, T; Philipp, P; Audinot, J-N; Dowsett, D; Eswara, S

    2015-01-01

    Secondary ion mass spectrometry (SIMS) constitutes an extremely sensitive technique for imaging surfaces in 2D and 3D. Apart from its excellent sensitivity and high lateral resolution (50 nm on state-of-the-art SIMS instruments), advantages of SIMS include high dynamic range and the ability to differentiate between isotopes. This paper first reviews the underlying principles of SIMS as well as the performance and applications of 2D and 3D SIMS elemental imaging. The prospects for further improving the capabilities of SIMS imaging are discussed. The lateral resolution in SIMS imaging when using the microprobe mode is limited by (i) the ion probe size, which is dependent on the brightness of the primary ion source, the quality of the optics of the primary ion column and the electric fields in the near sample region used to extract secondary ions; (ii) the sensitivity of the analysis as a reasonable secondary ion signal, which must be detected from very tiny voxel sizes and thus from a very limited number of sputtered atoms; and (iii) the physical dimensions of the collision cascade determining the origin of the sputtered ions with respect to the impact site of the incident primary ion probe. One interesting prospect is the use of SIMS-based correlative microscopy. In this approach SIMS is combined with various high-resolution microscopy techniques, so that elemental/chemical information at the highest sensitivity can be obtained with SIMS, while excellent spatial resolution is provided by overlaying the SIMS images with high-resolution images obtained by these microscopy techniques. Examples of this approach are given by presenting in situ combinations of SIMS with transmission electron microscopy (TEM), helium ion microscopy (HIM) and scanning probe microscopy (SPM). (paper)

  14. The Effect of a Diet Moderately High in Protein and Fiber on Insulin Sensitivity Measured Using the Dynamic Insulin Sensitivity and Secretion Test (DISST)

    Directory of Open Access Journals (Sweden)

    Lisa Te Morenga

    2017-11-01

    Full Text Available Evidence shows that weight loss improves insulin sensitivity but few studies have examined the effect of macronutrient composition independently of weight loss on direct measures of insulin sensitivity. We randomised 89 overweight or obese women to either a standard diet (StdD), that was intended to be low in fat and relatively high in carbohydrate (n = 42), or to a relatively high protein (up to 30% of energy), relatively high fibre (>30 g/day) diet (HPHFib) (n = 47) for 10 weeks. Advice regarding strict adherence to energy intake goals was not given. Insulin sensitivity and secretion was assessed by a novel method, the Dynamic Insulin Sensitivity and Secretion Test (DISST). Although there were significant improvements in body composition and most cardiometabolic risk factors on HPHFib, insulin sensitivity was reduced by 19.3% (95% CI: 31.8%, 4.5%; p = 0.013) in comparison with StdD. We conclude that the reduction in insulin sensitivity after a diet relatively high in both protein and fibre, despite cardiometabolic improvements, suggests insulin sensitivity may reflect metabolic adaptations to dietary composition for maintenance of glucose homeostasis, rather than impaired metabolism.

  15. Detection of somatic mutations by high-resolution DNA melting (HRM) analysis in multiple cancers.

    Science.gov (United States)

    Gonzalez-Bosquet, Jesus; Calcei, Jacob; Wei, Jun S; Garcia-Closas, Montserrat; Sherman, Mark E; Hewitt, Stephen; Vockley, Joseph; Lissowska, Jolanta; Yang, Hannah P; Khan, Javed; Chanock, Stephen

    2011-01-17

    Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples.
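
    The sensitivity and specificity figures reported above are standard confusion-matrix ratios against the bi-directional sequencing reference. A minimal sketch follows; the counts are assumptions chosen only to reproduce the reported frozen-sample percentages, since the abstract gives the 39-mutation numerator but not the wild-type denominator:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of sequencing-confirmed mutations flagged by HRM."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of sequencing-confirmed wild-type samples left unflagged by HRM."""
    return tn / (tn + fp)

# Assumed counts: 38 of the 39 sequencing-confirmed mutations detected
# gives the reported ~97% sensitivity; the wild-type counts below are
# illustrative values consistent with the reported ~87% specificity.
print(round(sensitivity(38, 1), 2))    # -> 0.97
print(round(specificity(154, 23), 2))  # -> 0.87
```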

  16. Detection of somatic mutations by high-resolution DNA melting (HRM) analysis in multiple cancers.

    Directory of Open Access Journals (Sweden)

    Jesus Gonzalez-Bosquet

    Full Text Available Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples.

  17. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf; Kodzius, Rimantas; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Bajic, Vladimir B.

    2013-01-01

    from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly

  18. Restructuring of burnup sensitivity analysis code system by using an object-oriented design approach

    International Nuclear Information System (INIS)

    Kenji, Yokoyama; Makoto, Ishikawa; Masahiro, Tatsumi; Hideaki, Hyoudou

    2005-01-01

    A new burnup sensitivity analysis code system was developed using object-oriented techniques and written in the Python language. It was confirmed that these techniques are powerful in supporting complex numerical calculation procedures such as reactor burnup sensitivity analysis. The new burnup sensitivity analysis code system PSAGEP was restructured from a complicated old code system and reborn as a user-friendly code system which can calculate the sensitivity coefficients of nuclear characteristics, considering multicycle burnup effects, based on the generalized perturbation theory (GPT). A new encapsulation framework for conventional codes written in Fortran was developed. This framework supported restructuring of the software architecture of the old code system by hiding implementation details, and allowed users of the new code system to easily calculate burnup sensitivity coefficients. The framework can be applied to other development projects since it is carefully designed to be independent of PSAGEP. Numerical results for the burnup sensitivity coefficients of a typical fast breeder reactor were given with components based on GPT, and the multicycle burnup effects on the sensitivity coefficients were discussed. (authors)
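
    The sensitivity coefficients mentioned above are relative quantities of the form S = (p/R)(dR/dp), relating a fractional change in a nuclear parameter p to the fractional change in a response R. As a hedged illustration of what such a coefficient means (not the GPT machinery of PSAGEP itself), one can evaluate it by finite differences on a toy one-step depletion model with made-up parameter values:

```python
import numpy as np

def pu239_buildup(sigma_c_barn, phi=3e14, t=3.15e7, n_u238=2.2e22):
    """Toy one-step depletion model: Pu-239 atoms bred from U-238
    capture under constant flux, ignoring removal terms. All parameter
    values are illustrative, not evaluated nuclear data."""
    return n_u238 * (1.0 - np.exp(-sigma_c_barn * 1e-24 * phi * t))

def rel_sensitivity(response, p, dp_frac=1e-3):
    """Relative sensitivity coefficient S = (p/R) dR/dp by central
    finite difference; GPT obtains the same quantity from adjoint
    solutions, without repeated forward calculations."""
    dp = p * dp_frac
    dR = response(p + dp) - response(p - dp)
    return (p / response(p)) * dR / (2.0 * dp)

S = rel_sensitivity(pu239_buildup, 2.7)  # capture cross section in barns (assumed)
print(round(S, 3))
```

    For short irradiations the buildup is nearly linear in the cross section, so S is close to 1; the advantage of the GPT formulation is that one adjoint calculation yields S for every input parameter at once.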

  19. Sensitivity analysis of longitudinal cracking on asphalt pavement using MEPDG in permafrost region

    Directory of Open Access Journals (Sweden)

    Chen Zhang

    2015-02-01

    Full Text Available Longitudinal cracking is one of the most important distresses of asphalt pavement in permafrost regions. Sensitivity analysis of design parameters for asphalt pavement can be used to study the influence of each parameter on longitudinal cracking, which can help optimize the design of the pavement structure. In this study, 20 test sections of the Qinghai–Tibet Highway were selected for sensitivity analysis of longitudinal cracking with respect to material parameters, based on the Mechanistic-Empirical Pavement Design Guide (MEPDG) and a single-factor sensitivity analysis method. Computer aided engineering (CAE) simulation techniques, such as the Latin hypercube sampling (LHS) technique and multiple regression analysis, were used as auxiliary means. Finally, the sensitivity spectrum of material parameters on longitudinal cracking was established. The results show that multiple regression analysis can be used to identify the most influential factors efficiently and to perform qualitative analysis when applying the MEPDG software to sensitivity analysis of longitudinal cracking in permafrost regions. The effect weights of the three parameters on longitudinal cracking, in descending order, are air void, effective binder content, and PG grade. The influence of air void on the top layer is bigger than that on the middle and bottom layers. The influence of effective asphalt content on the top layer is bigger than that on the middle and bottom layers, and the influence on the bottom layer is slightly bigger than on the middle layer. The accumulated value of longitudinal cracking on the middle and bottom layers over the design life would begin to increase when the design temperature of the PG grade increases.
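
    The abstract names Latin hypercube sampling and multiple regression as the auxiliary tools. A minimal sketch of that workflow follows, with a made-up surrogate standing in for MEPDG's cracking prediction; its coefficients are assumptions chosen only so that the illustrative ranking matches the reported order:

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n, d, rng):
    """Basic LHS on [0, 1): one sample per stratum, columns permuted
    independently so each parameter is stratified on its own."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)).T, axis=0)
    return (strata + rng.random((n, d))) / n

# Made-up surrogate for predicted longitudinal cracking over normalized
# inputs (air void, effective binder content, PG-grade temperature).
def cracking(x):
    air_void, binder, pg_temp = x.T
    return 5.0 * air_void + 2.0 * binder + 0.8 * pg_temp + rng.normal(0, 0.1, len(x))

X = latin_hypercube(200, 3, rng)
y = cracking(X)

# Standardized regression coefficients rank parameter influence.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
order = np.argsort(-np.abs(coef))
print([["air_void", "binder", "pg_temp"][i] for i in order])
```

    The magnitudes of the standardized coefficients play the role of the "effect weights" in the sensitivity spectrum; with real MEPDG runs, each LHS row would be one software evaluation instead of the surrogate call.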

  20. Highly sensitive dendrimer-based nanoplasmonic biosensor for drug allergy diagnosis.

    Science.gov (United States)

    Soler, Maria; Mesa-Antunez, Pablo; Estevez, M-Carmen; Ruiz-Sanchez, Antonio Jesus; Otte, Marinus A; Sepulveda, Borja; Collado, Daniel; Mayorga, Cristobalina; Torres, Maria Jose; Perez-Inestrosa, Ezequiel; Lechuga, Laura M

    2015-04-15

    A label-free biosensing strategy for amoxicillin (AX) allergy diagnosis based on the combination of novel dendrimer-based conjugates and a recently developed nanoplasmonic sensor technology is reported. Gold nanodisks were functionalized with a custom-designed thiol-ending-polyamido-based dendron (d-BAPAD) peripherally decorated with amoxicilloyl (AXO) groups (d-BAPAD-AXO) in order to detect specific IgE generated in patient's serum against this antibiotic during an allergy outbreak. This innovative strategy, which follows a simple one-step immobilization procedure, shows exceptional results in terms of sensitivity and robustness, leading to a highly reproducible and long-term stable surface which allows achieving extremely low limits of detection. Moreover, the viability of this biosensor approach to analyze human biological samples has been demonstrated by directly analyzing and quantifying specific anti-AX antibodies in patient's serum without any sample pretreatment. An excellent limit of detection (LoD) of 0.6 ng/mL (i.e. 0.25 kU/L) has been achieved in the evaluation of clinical samples, evidencing the potential of our nanoplasmonic biosensor as an advanced diagnostic tool to quickly identify allergic patients. The results have been compared and validated with a conventional clinical immunofluorescence assay (ImmunoCAP test), confirming an excellent correlation between both techniques. The combination of a novel compact nanoplasmonic platform and a dendrimer-based strategy provides a highly sensitive label-free biosensor approach with over two times better detectability than conventional SPR. Both the biosensor device and the carrier structure hold great potential in clinical diagnosis for biomarker analysis in whole serum samples and other human biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.