WorldWideScience

Sample records for high sensitive analysis

  1. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which perturbations of all orders were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results show that, for the EK-10 fuel (low burn-up), the first order sensitivity was sufficient to achieve an accuracy of 1%, while for the MTR-20 fuel (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.
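    The need for higher perturbation orders at high burn-up can be illustrated with a toy, dimensionless depletion model (a sketch with hypothetical numbers, not the SENS code): the fraction of a nuclide remaining is exp(-x), where x lumps together cross-section, flux, and time, and the Taylor series about the nominal x is compared against direct substitution for a 25% perturbation.

```python
import math

# Toy depletion model, dimensionless: fraction of a nuclide remaining after
# irradiation, with x = sigma * flux * time (hypothetical values).
def remaining(x):
    return math.exp(-x)

x0 = 2.0          # nominal fluence-normalized cross-section (high burn-up regime)
dx = 0.25 * x0    # a 25% cross-section perturbation

exact = remaining(x0 + dx)

# The n-th derivative of exp(-x) is (-1)^n exp(-x), so every Taylor term is known.
errors = {}
for order in (1, 2, 5):
    approx = sum(remaining(x0) * (-dx) ** n / math.factorial(n)
                 for n in range(order + 1))
    errors[order] = abs(approx - exact) / exact
    print(f"order {order}: relative error {errors[order]:.3%}")
```

    At this (hypothetical) burn-up level the first-order estimate is off by double-digit percent, while the fifth order is essentially exact, mirroring the EK-10 vs. MTR-20 contrast reported above.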

  2. High sensitivity analysis of atmospheric gas elements

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10^17 atoms/cm^3 for H, 3 × 10^16 atoms/cm^3 for C and 2 × 10^16 atoms/cm^3 for O could be achieved in silicon at a sputtering rate of 2 nm/s after a primary ion bombardment of 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.

  3. High sensitivity analysis of atmospheric gas elements

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10^17 atoms/cm^3 for H, 3 × 10^16 atoms/cm^3 for C and 2 × 10^16 atoms/cm^3 for O could be achieved in silicon at a sputtering rate of 2 nm/s after a primary ion bombardment of 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.

  4. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  5. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages of and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. In the first step of the proposed strategy, sensitivities are ranked using the bound and the insensitive parameters are screened out in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the …

  6. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Full Text Available Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages of and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. In the first step of the proposed strategy, sensitivities are ranked using the bound and the insensitive parameters are screened out in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of …
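    The screen-then-estimate structure of this strategy can be sketched deterministically (a toy "sloppy" model; a coarse forward difference stands in for the paper's Fisher-information bound, and the model, dimensions, and threshold are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20  # number of parameters

def model(theta):
    # toy "sloppy" observable: strong dependence on theta[0], weak on the rest
    return np.exp(theta[0]) + 1e-4 * np.sum(theta[1:] ** 2)

theta0 = rng.uniform(0.5, 1.5, size=d)
f0 = model(theta0)
eye = np.eye(d)

# Step 1: cheap screening pass over all parameters
h_coarse = 0.5
bound = np.array([abs(model(theta0 + h_coarse * eye[i]) - f0) / h_coarse
                  for i in range(d)])
candidates = np.flatnonzero(bound > 1e-2)   # screen out insensitive parameters

# Step 2: accurate central differences only for the surviving parameters
h = 1e-6
sens = {int(i): (model(theta0 + h * eye[i]) - model(theta0 - h * eye[i])) / (2 * h)
        for i in candidates}
print(candidates, sens)
```

    Only one of the twenty parameters survives the screen, so the expensive second step touches a single parameter instead of all twenty, which is the source of the speed-up the abstract describes.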

  7. High order effects in cross section sensitivity analysis

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single-resonance model. Results obtained for each of the resolved resonances and for representative unresolved resonances of 238U in a ZPR-6/7-like environment indicate that SFSE can contribute significantly to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of first order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation addresses the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method.

  8. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  9. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool for informing decision makers about the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
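    The variogram idea behind VARS can be sketched with a toy directional-variogram estimate (this is an illustration of the concept on a hypothetical three-factor response surface, not the actual VARS algorithm, which integrates variograms across many scales):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # toy response surface: factor 0 dominates, factor 1 is minor, factor 2 near-inert
    return 5 * np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]

d, n, h = 3, 2000, 0.1          # factors, base points, perturbation scale (lag)
X = rng.uniform(0, 1, size=(n, d))

# Directional variogram of the response surface at lag h for each factor:
#   gamma_i(h) = 0.5 * E[(f(x + h * e_i) - f(x))^2]
gamma = np.empty(d)
for i in range(d):
    Xh = X.copy()
    Xh[:, i] = np.clip(Xh[:, i] + h, 0.0, 1.0)   # stay inside the unit cube
    gamma[i] = 0.5 * np.mean((model(Xh) - model(X)) ** 2)

ranking = np.argsort(gamma)[::-1]
print(gamma, ranking)   # factor 0 should come out on top
```

    A factor along which the response surface varies strongly has a large directional variogram, so sorting the gamma values yields the important-vs-unimportant screening the abstract refers to.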

  10. Sensitivity analysis of recovery efficiency in high-temperature aquifer thermal energy storage with single well

    International Nuclear Information System (INIS)

    Jeon, Jun-Seo; Lee, Seung-Rae; Pasquinelli, Lisa; Fabricius, Ida Lykke

    2015-01-01

    High-temperature aquifer thermal energy storage (HT-ATES) systems usually show higher performance than other borehole thermal energy storage systems. Although several technical problems such as clogging and corrosion have limited the widespread use of the HT-ATES system, it is getting more attention as these issues are gradually alleviated. In this study, a sensitivity analysis of recovery efficiency in two cases of an HT-ATES system with a single well is conducted to select key parameters. For a fractional factorial design used to choose input parameters with uniformity, optimal Latin hypercube sampling with an enhanced stochastic evolutionary algorithm is considered. Then, the recovery efficiency is obtained using a computer model developed in COMSOL Multiphysics. With the input and output variables, a surrogate modeling technique, namely the Gaussian-Kriging method with the Smoothly Clipped Absolute Deviation penalty, is utilized. Finally, the sensitivity analysis is performed based on variance decomposition. According to the results of the sensitivity analysis, the most important input variables are selected and confirmed to consider the interaction effects for each case, and it is confirmed that key parameters vary with the experimental domain of hydraulic and thermal properties as well as the number of input variables. - Highlights: • Main and interaction effects on recovery efficiency in HT-ATES were investigated. • Reliability depended on the fractional factorial design and interaction effects. • Hydraulic permeability of the aquifer had an important impact on recovery efficiency. • Site-specific sensitivity analysis of HT-ATES is recommended.
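    The study uses an optimal Latin hypercube design; as a baseline, a plain (non-optimized) Latin hypercube sampler, which guarantees one sample per equal-probability stratum in every dimension, can be sketched as:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in d dimensions: one sample per row, one stratum per dimension."""
    # jitter within each of n equal strata, then shuffle the strata per dimension
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(42)
X = latin_hypercube(10, 4, rng)

# every 1/10-wide stratum of every dimension contains exactly one sample
counts = np.array([np.bincount((X[:, j] * 10).astype(int), minlength=10)
                   for j in range(4)])
print(counts.min(), counts.max())
```

    The "optimal" variants in the paper additionally rearrange the strata assignments (e.g. with an enhanced stochastic evolutionary search) to improve space-filling criteria, but the stratification property shown here is the core of the method.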

  11. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
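    The bounding factor described here has a closed form, B = RR_EU × RR_UD / (RR_EU + RR_UD − 1), and setting both confounding relative risks equal and solving B = RR gives the threshold both must exceed to fully explain away an observed risk ratio RR, namely RR + sqrt(RR(RR − 1)). A short sketch with an illustrative observed risk ratio:

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Maximum factor by which a confounder with relative risks rr_eu
    (exposure-confounder) and rr_ud (confounder-outcome) can shift an
    observed risk ratio."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1)

def explain_away_threshold(rr):
    """Minimum strength BOTH relative risks must exceed to reduce an
    observed risk ratio rr (> 1) all the way to the null."""
    return rr + math.sqrt(rr * (rr - 1))

rr_obs = 3.9                      # illustrative observed risk ratio
ev = explain_away_threshold(rr_obs)
print(ev)                         # both confounding RRs must exceed this
# sanity check: at the threshold, the bounding factor equals the observed RR
print(bounding_factor(ev, ev))
```

    The sanity check follows algebraically: with rr_eu = rr_ud = ev, the bounding factor simplifies exactly back to the observed risk ratio, so any weaker confounder cannot explain the estimate away.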

  12. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    Science.gov (United States)

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  13. Analysis of Cyberbullying Sensitivity Levels of High School Students and Their Perceived Social Support Levels

    Science.gov (United States)

    Akturk, Ahmet Oguz

    2015-01-01

    Purpose: The purpose of this paper is to determine the cyberbullying sensitivity levels of high school students and their perceived social support levels, and to analyze the variables that predict cyberbullying sensitivity. In addition, whether cyberbullying sensitivity levels and social support levels differed according to gender was also…

  14. Global Sensitivity Analysis of High Speed Shaft Subsystem of a Wind Turbine Drive Train

    Directory of Open Access Journals (Sweden)

    Saeed Asadi

    2018-01-01

    Full Text Available Wind turbine dynamics are a complex and critical area of study for the wind industry. Quantifying the factors that affect wind turbine performance is valuable for improving both power performance and turbine health. In this paper, a global sensitivity analysis of a validated mathematical model of a high speed shaft drive train test rig is developed in order to evaluate the contribution of the system's input parameters to specified objective functions. The drive train in this study consists of a 3-phase induction motor, flexible shafts, a shaft coupling, a bearing housing, and a disk with an eccentric mass. The governing equations were derived using the Lagrangian formalism and were solved numerically by the Newmark method. Variance-based global sensitivity indices are introduced to evaluate the contribution of the input structural parameters to the objective functions. The conclusions from the current research provide useful data for the design and optimization of a drive train setup and can also give a better understanding of wind turbine drive train dynamics with respect to different structural parameters, ultimately leading to more efficient drive train designs. Finally, the proposed global sensitivity analysis (GSA) methodology demonstrates the detectability of faults in different components.
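    Variance-based first-order indices of the kind used here can be estimated with a pick-freeze (Saltelli-type) scheme; a sketch on a hypothetical linear "drive-train" response (not the paper's Lagrangian model), where the analytic answer S_i = a_i^2 / Σ a_k^2 is known for checking:

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    # hypothetical linear response: stiffness-like factor x0 dominates
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

# pick-freeze estimator of the first-order Sobol index:
#   S_i = E[f(B) * (f(A with column i from B) - f(A))] / Var(Y)
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
print(np.round(S, 3))
```

    For these coefficients the indices should come out near [0.94, 0.06, 0.00], i.e. the first factor explains almost all of the output variance.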

  15. Project Title: Radiochemical Analysis by High Sensitivity Dual-Optic Micro X-ray Fluorescence

    International Nuclear Information System (INIS)

    Havrilla, George J.; Gao, Ning

    2002-01-01

    A novel dual-optic micro X-ray fluorescence instrument will be developed to perform radiochemical analysis of high-level radioactive wastes at DOE sites such as the Savannah River Site and Hanford. This concept incorporates new X-ray optical elements, such as monolithic polycapillaries and doubly bent crystals, which focus X-rays. The polycapillary optic can be used to focus X-rays emitted by the X-ray tube, thereby increasing the X-ray flux on the sample over 1000 times. Polycapillaries will also be used to collect the X-rays from the excitation site and screen out the radiation background from the radioactive species in the specimen. This dual-optic approach significantly reduces the background and increases the analyte signal, thereby increasing the sensitivity of the analysis. A doubly bent crystal used as the focusing optic produces focused monochromatic X-ray excitation, which eliminates the bremsstrahlung background from the X-ray source. The coupling of the doubly bent crystal for monochromatic excitation with a polycapillary for signal collection can effectively eliminate both the noise background and the radiation background from the specimen. The integration of these X-ray optics increases the signal-to-noise ratio and thereby the sensitivity of the analysis for low-level analytes. This work will address a key need for radiochemical analysis of high-level waste using a non-destructive, multi-element, and rapid method in a radiation environment. There is significant potential that this instrumentation could be capable of on-line analysis for process waste stream characterization at DOE sites.

  16. Resonance analysis of a high temperature piezoelectric disc for sensitivity characterization.

    Science.gov (United States)

    Bilgunde, Prathamesh N; Bond, Leonard J

    2018-07-01

    Ultrasonic transducers for high temperature (200 °C+) applications are a key enabling technology for advanced nuclear power systems and for a range of chemical and petrochemical industries. The design, fabrication and optimization of such transducers using piezoelectric materials remain a challenge. In this work, an analysis based on experimental data is performed to investigate the fundamental causal factors for the resonance characteristics of a piezoelectric disc at elevated temperatures. The effect of all ten temperature-dependent piezoelectric constants (ε33, ε11, d33, d31, d15, s11, s12, s13, s33, s44) is studied numerically on both the radial and thickness mode resonances of a piezoelectric disc. A sensitivity index is defined to quantify the effect of each of the temperature-dependent coefficients on the resonance modes of the modified lead zirconate titanate disc. The temperature dependence of s33 showed the highest sensitivity towards the thickness resonance mode, followed by ε33, s11, s13, s12, d31, d33, s44, ε11, and d15 in decreasing order of the sensitivity index. For the radial resonance modes, the temperature dependence of ε33 showed the highest sensitivity index, followed by the s11, s12 and d31 coefficients. This numerical study demonstrates that the magnitude of d33 is not the sole factor that affects the resonance characteristics of the piezoelectric disc at high temperatures. There appears to be a complex interplay between the various temperature-dependent piezoelectric coefficients that causes the reduction in the thickness mode resonance frequencies, which is found to be in agreement with experimental data at elevated temperature. Copyright © 2018 Elsevier B.V. All rights reserved.
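    A normalized sensitivity index of this general kind can be sketched on a simplified thickness-mode resonance formula, f = (1/2t)·sqrt(c33/ρ) (hypothetical PZT-like values; the paper's coupled ten-coefficient model is far richer, and this index definition is our stand-in, not necessarily the paper's):

```python
import math

def f_thickness(params):
    """Simplified thickness-mode resonance of a disc: f = (1 / 2t) * sqrt(c33 / rho)."""
    return 1.0 / (2.0 * params["t"]) * math.sqrt(params["c33"] / params["rho"])

# hypothetical PZT-like values: thickness (m), stiffness (Pa), density (kg/m^3)
nominal = {"t": 2e-3, "c33": 1.2e11, "rho": 7600.0}

def sensitivity_index(name, rel_step=1e-4):
    """Normalized index |(df/dp) * (p/f)| estimated by central differences."""
    p0, f0 = nominal[name], f_thickness(nominal)
    hi = dict(nominal, **{name: p0 * (1 + rel_step)})
    lo = dict(nominal, **{name: p0 * (1 - rel_step)})
    dfdp = (f_thickness(hi) - f_thickness(lo)) / (2 * p0 * rel_step)
    return abs(dfdp * p0 / f0)

for name in nominal:
    print(name, sensitivity_index(name))
```

    Because f scales as t^-1 and as the square root of c33 and 1/ρ, the indices come out as 1.0 for thickness and 0.5 for the stiffness and density, illustrating how such an index ranks the influence of each coefficient on a resonance mode.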

  17. Sensitivity analysis of recovery efficiency in high-temperature aquifer thermal energy storage with single well

    DEFF Research Database (Denmark)

    Jeon, Jun-Seo; Lee, Seung-Rae; Pasquinelli, Lisa

    2015-01-01

    ., it is getting more attention as these issues are gradually alleviated. In this study, a sensitivity analysis of recovery efficiency in two cases of HT-ATES system with a single well is conducted to select key parameters. For a fractional factorial design used to choose input parameters with uniformity...... with Smoothly Clipped Absolute Deviation Penalty, is utilized. Finally, the sensitivity analysis is performed based on variance decomposition. According to the result of sensitivity analysis, the most important input variables are selected and confirmed to consider the interaction effects for each case...

  18. Application of graphene for preconcentration and highly sensitive stripping voltammetric analysis of organophosphate pesticide

    Energy Technology Data Exchange (ETDEWEB)

    Wu Shuo, E-mail: wushuo@dlut.edu.cn [School of Chemistry, Dalian University of Technology, Dalian 116023 (China); Lan Xiaoqin; Cui Lijun; Zhang Lihui; Tao Shengyang; Wang Hainan; Han Mei; Liu Zhiguang; Meng Changgong [School of Chemistry, Dalian University of Technology, Dalian 116023 (China)

    2011-08-12

    Highlights: → An electrochemical sensor is fabricated based on β-CD-dispersed graphene. → The sensor can selectively detect organophosphate pesticides with high sensitivity. → The β-CD-dispersed graphene has a large adsorption capacity for MP and superior conductivity. → The β-CD-dispersed graphene is superior to most known porous sorbents. - Abstract: Electrochemically reduced β-cyclodextrin-dispersed graphene (β-CD-graphene) was developed as a sorbent for the preconcentration and electrochemical sensing of methyl parathion (MP), a representative nitroaromatic organophosphate pesticide with good redox activity. Benefiting from the ultra-large surface area, large delocalized π-electron system and superior conductivity of β-CD-graphene, a large amount of MP could be extracted onto the β-CD-graphene-modified electrode via strong π-π interaction, exhibiting fast accumulation and a fast electron-transfer rate. Combined with differential pulse voltammetric analysis, the sensor shows ultra-high sensitivity, good selectivity and fast response. The limit of detection of 0.05 ppb is more than 10 times lower than those obtained with other sorbent-based sensors. The method may open up new possibilities for the widespread use of electrochemical sensors for monitoring ultra-trace OPs.

  19. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis, Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c…
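    The adjoint idea at the core of these procedures can be sketched for a small linear steady-state model, where a single adjoint solve yields the response sensitivity that would otherwise need one forward solve per parameter (a minimal illustration, not the book's large-scale applications):

```python
import numpy as np

# Model: A(p) x = b, response J = c^T x. The adjoint lambda solves
# A^T lambda = c, and then dJ/dp = -lambda^T (dA/dp) x.
n = 4
rng = np.random.default_rng(3)
A0 = rng.random((n, n)) + n * np.eye(n)     # well-conditioned base matrix
b = rng.random(n)
c = rng.random(n)

def A(p):
    return A0 + p * np.eye(n)               # the parameter perturbs the diagonal

p0 = 0.5
x = np.linalg.solve(A(p0), b)               # forward solve
lam = np.linalg.solve(A(p0).T, c)           # adjoint solve

dA_dp = np.eye(n)
dJ_dp_adjoint = -lam @ dA_dp @ x

# verify against a central finite difference of the forward problem
h = 1e-6
J = lambda p: c @ np.linalg.solve(A(p), b)
dJ_dp_fd = (J(p0 + h) - J(p0 - h)) / (2 * h)
print(dJ_dp_adjoint, dJ_dp_fd)
```

    With many parameters p_k, the same lambda is reused for every dJ/dp_k, which is why adjoint methods scale so well for large systems.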

  20. Methodology of safety assessment and sensitivity analysis for geologic disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Kimura, Hideo; Takahashi, Tomoyuki; Shima, Shigeki; Matsuzuru, Hideo

    1995-01-01

    A deterministic safety assessment methodology has been developed to evaluate long-term radiological consequences associated with geologic disposal of high-level radioactive waste, and to demonstrate a generic feasibility of geologic disposal. An exposure scenario considered here is based on a normal evolution scenario which excludes events attributable to probabilistic alterations in the environment. A computer code system GSRW thus developed is based on a non site-specific model, and consists of a set of sub-modules for calculating the release of radionuclides from engineered barriers, the transport of radionuclides in and through the geosphere, the behavior of radionuclides in the biosphere, and radiation exposures of the public. In order to identify the important parameters of the assessment models, an automated procedure for sensitivity analysis based on the Differential Algebra method has been developed to apply to the GSRW. (author)

  1. High-Sensitivity Spectrophotometry.

    Science.gov (United States)

    Harris, T. D.

    1982-01-01

    Selected high-sensitivity spectrophotometric methods are examined, and comparisons are made of their relative strengths and weaknesses and the circumstances for which each can best be applied. Methods include long path cells, noise reduction, laser intracavity absorption, thermocouple calorimetry, photoacoustic methods, and thermo-optical methods.…

  2. Highly Sensitive Optical Receivers

    CERN Document Server

    Schneider, Kerstin

    2006-01-01

    Highly Sensitive Optical Receivers primarily treats the circuit design of optical receivers with external photodiodes. Continuous-mode and burst-mode receivers are compared. The monograph first summarizes the basics of III/V photodetectors, transistor and noise models, bit-error rate, sensitivity and analog circuit design, thus enabling readers to understand the circuits described in the main part of the book. In order to cover the topic comprehensively, detailed descriptions of receivers for optical data communication in general and, in particular, optical burst-mode receivers in deep-sub-µm CMOS are presented. Numerous detailed and elaborate illustrations facilitate better understanding.

  3. A highly stable and sensitive chemically modified screen-printed electrode for sulfide analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, D.-M. [Department of Chemistry, National Chung Hsing University, 250 Kuo-Kuang Road, Taichung 40217, Taiwan (China); Kumar, Annamalai Senthil [Department of Chemistry, National Chung Hsing University, 250 Kuo-Kuang Road, Taichung 40217, Taiwan (China); Zen, J.-M. [Department of Chemistry, National Chung Hsing University, 250 Kuo-Kuang Road, Taichung 40217, Taiwan (China)]. E-mail: jmzen@dragon.nchu.edu.tw

    2006-01-18

    We report here a highly stable and sensitive chemically modified screen-printed carbon electrode (CMSPE) for sulfide analysis. The CMSPE was prepared by first ion-exchanging ferricyanide into a Tosflex anion-exchange polymer and then sealing it with a tetraethyl orthosilicate sol-gel layer. The sol-gel overlayer was crucial for stabilizing the electron mediator (i.e., Fe(CN)6^3−) against leaching. The strong interaction between the oxy-hydroxy functional groups of the sol-gel and the hydrophilic sites of the Tosflex makes the composite highly rigid, trapping the ferricyanide mediator. A clear electrocatalytic sulfide oxidation current signal at ~0.20 V versus Ag/AgCl in pH 7 phosphate buffer solution was observed at the CMSPE. A linear calibration plot over a wide range of 0.1 µM to 1 mM with a slope of 5.6 nA/µM was obtained by flow injection analysis. The detection limit (S/N = 3) was 8.9 nM (i.e., 25.6 ppt). The practical utility of the system was demonstrated by determining sulfide trapped from cigarette smoke and the sulfide content of hot spring water.
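    The reported detection limit follows directly from the calibration slope via the S/N = 3 convention cited in the abstract; a one-line check (the baseline-noise value here is hypothetical, back-calculated only to reproduce the reported figure):

```python
# Detection limit from a calibration slope with the S/N = 3 convention:
#   LOD = 3 * sigma_baseline / slope
slope_nA_per_uM = 5.6       # calibration slope reported in the abstract
noise_nA = 0.0166           # hypothetical baseline standard deviation (nA)

lod_uM = 3 * noise_nA / slope_nA_per_uM
print(f"LOD = {lod_uM * 1000:.1f} nM")
```

    With a baseline noise of roughly 0.017 nA, the formula yields the ~8.9 nM limit quoted above; a steeper calibration slope or quieter baseline lowers the LOD proportionally.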

  4. High sensitivity and high resolution element 3D analysis by a combined SIMS–SPM instrument

    Directory of Open Access Journals (Sweden)

    Yves Fleming

    2015-04-01

    Full Text Available Using the recently developed SIMS–SPM prototype, secondary ion mass spectrometry (SIMS) data were combined with topographical data from the scanning probe microscopy (SPM) module for five test structures in order to obtain accurate chemical 3D maps: a polystyrene/polyvinylpyrrolidone (PS/PVP) polymer blend, a nickel-based super-alloy, a titanium carbonitride-based cermet, a reticle test structure and Mg(OH)2 nanoclusters incorporated inside a polymer matrix. The examples illustrate the potential of this combined approach to track and eliminate artefacts related to inhomogeneities of the sputter rates (caused by samples containing various materials, different phases or having a non-flat surface) and inhomogeneities of the secondary ion extraction efficiencies due to local field distortions (caused by topography with high aspect ratios). In this respect, this paper presents the measured relative sputter rates between PVP and PS as well as between the different phases of the TiCN cermet.

  5. High sensitivity neutron activation analysis of environmental and biological standard reference materials

    International Nuclear Information System (INIS)

    Greenberg, R.R.; Fleming, R.F.; Zeisler, R.

    1984-01-01

    Neutron activation analysis is a sensitive method with unique capabilities for the analysis of environmental and biological samples. Since it is based upon the nuclear properties of the elements, it does not suffer from many of the chemical effects that plague other methods of analysis. Analyses can be performed either with no chemical treatment of the sample (instrumentally), or with separations of the elements of interest after neutron irradiation (radiochemically). Typical examples of both types of analysis are discussed, and data obtained for a number of environmental and biological SRMs are presented. (author)
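    The nuclear basis the abstract alludes to is the standard activation equation, A = Nσφ(1 − e^(−λt)): the induced activity depends only on the number of target atoms, the capture cross-section, the flux and the irradiation time. A sketch with illustrative values (not taken from the abstract):

```python
import math

N_A = 6.02214076e23   # Avogadro's number

def activity_bq(mass_g, molar_mass, abundance, sigma_b, flux, t_irr, half_life):
    """Induced activity A = N * sigma * phi * (1 - exp(-lambda * t_irr)).
    sigma_b in barns, flux in n/cm^2/s, times in seconds."""
    n_atoms = mass_g / molar_mass * N_A * abundance
    lam = math.log(2) / half_life
    sigma_cm2 = sigma_b * 1e-24
    return n_atoms * sigma_cm2 * flux * (1.0 - math.exp(-lam * t_irr))

# e.g. 1 mg of cobalt-59 (thermal capture ~37 b) irradiated for 1 hour
# in a 1e13 n/cm^2/s reactor flux, producing Co-60 (half-life 5.27 y)
a = activity_bq(1e-3, 58.93, 1.0, 37.0, 1e13, 3600.0, 5.27 * 365.25 * 86400)
print(f"{a:.3e} Bq")
```

    Because the activity is strictly proportional to the number of atoms of the target element, counting the induced gamma rays gives the element concentration without any chemical treatment, which is the instrumental mode described above.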

  6. Sensitive rapid analysis of iodine-labelled protein mixture on flat substrates with high spatial resolution

    International Nuclear Information System (INIS)

    Zanevskij, Yu.V.; Ivanov, A.B.; Movchan, S.A.; Peshekhonov, V.D.; Chan Dyk Tkhan'; Chernenko, S.P.; Kaminir, L.B.; Krejndlin, Eh.Ya.; Chernyj, A.A.

    1983-01-01

    The feasibility of rapid electrophoretic analysis of mixtures of I-125-labelled proteins on flat samples by means of the URAN-type installation, built around a multiwire proportional chamber, is studied. The sensitivity of the method is better than 200 cpm/cm², the spatial resolution is approximately 1 mm, and the rapid analysis procedure takes no longer than several tens of minutes

  7. Measurement system for high-sensitivity LIBS analysis using ICCD camera in LabVIEW environment

    International Nuclear Information System (INIS)

    Zaytsev, S M; Popov, A M; Zorov, N B; Labutin, T A

    2014-01-01

    A measurement system based on an ultrafast (up to 10 ns time resolution) intensified CCD detector "Nanogate-2V" (Nanoscan, Russia) was developed for high-sensitivity analysis by Laser-Induced Breakdown Spectrometry (LIBS). The LabVIEW environment provided a high level of compatibility with a variety of electronic instruments and easy development of the user interface, while the Visual Studio environment was used to create a LabVIEW-compatible dll library on top of the "Nanogate-2V" SDK. The program for camera management and registration of laser-induced plasma spectra was created using the Call Library Node in LabVIEW. An algorithm for integrating a second device, the ADC "PCI-9812" (ADLINK), into the measurement system was proposed and successfully implemented. This allowed simultaneous registration of emission and acoustic signals under laser ablation. The measured resolving power of the spectrometer-ICCD system was equal to 12000 at 632 nm. The electron density of the laser plasma was estimated with the use of the H-α Balmer line. Steel spectra obtained at different delays were used to select the optimal conditions for registration of the manganese analytical signal. Accumulation of spectra from several laser pulses was also demonstrated; the accumulation allowed reliable observation of the silver signal at 328.07 nm in the LIBS spectra of soil (C_Ag = 4.5 ppm). Finally, a correlation between the acoustic and emission signals of the plasma was found. Thus, the technical possibilities of the developed LIBS system were demonstrated both for plasma diagnostics and for analytical measurements.

  8. Uncertainty and sensitivity analysis in performance assessment for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Helton, Jon C.; Hansen, Clifford W.; Sallaberry, Cédric J.

    2012-01-01

    Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, a detailed performance assessment (PA) for the YM repository was completed in 2008 and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository. The following aspects of the 2008 YM PA are described in this presentation: (i) conceptual structure and computational organization, (ii) uncertainty and sensitivity analysis techniques in use, (iii) uncertainty and sensitivity analysis for physical processes, and (iv) uncertainty and sensitivity analysis for expected dose to the reasonably maximally exposed individual (RMEI) specified in the NRC's regulations for the YM repository. - Highlights: ► An overview of performance assessment for the proposed Yucca Mountain radioactive waste repository is presented. ► Conceptual structure and computational organization are described. ► Uncertainty and sensitivity analysis techniques are described. ► Uncertainty and sensitivity analysis results for physical processes are presented. ► Uncertainty and sensitivity analysis results for expected dose are presented.

  9. WHAT IF (Sensitivity Analysis)

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Full Text Available Sensitivity analysis is such a well-known and deeply analyzed subject that anyone entering the field may feel unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of whether it is possible to generate the future and analyze it before it unfolds so that, when it happens, it brings less uncertainty.
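    The "what if" idea described above can be made concrete with a one-at-a-time sweep: vary each input of a model by ±10% and record the output swing. The net-present-value model and every number below are illustrative assumptions for the resource-allocation setting, not values taken from the paper.

    ```python
    # One-at-a-time ("what if") sensitivity sketch on a toy NPV model.
    # All parameters are hypothetical; only continuous inputs are swept.

    def npv(cash_flow, years, rate):
        """NPV of a constant annual cash flow discounted at `rate`."""
        return sum(cash_flow / (1.0 + rate) ** t for t in range(1, years + 1))

    base = {"cash_flow": 100.0, "years": 10, "rate": 0.08}
    baseline = npv(**base)

    def what_if(params, name, factor):
        """Re-evaluate the model with one input scaled by `factor`."""
        varied = dict(params)
        varied[name] *= factor
        return npv(**varied)

    for name in ("cash_flow", "rate"):
        lo = what_if(base, name, 0.9)
        hi = what_if(base, name, 1.1)
        swing = (hi - lo) / baseline * 100.0
        print(f"{name:10s} +/-10% -> output swing {swing:+.1f}% of baseline")
    ```

    The swing per input gives a first ranking of which uncertainties matter most, which is exactly the question the risk-analysis framing above asks before resources are committed.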

  10. Highly Sensitive and High-Throughput Method for the Analysis of Bisphenol Analogues and Their Halogenated Derivatives in Breast Milk.

    Science.gov (United States)

    Niu, Yumin; Wang, Bin; Zhao, Yunfeng; Zhang, Jing; Shao, Bing

    2017-12-06

    The structural analogs of bisphenol A (BPA) and their halogenated derivatives (together termed BPs) have been found in the environment, food, and even the human body. Limited research has shown that some of them exhibit toxicities similar to or even greater than that of BPA. Therefore, adverse health effects of BPs are expected for humans with low-dose exposure in early life. Breast milk is an excellent matrix and can reflect fetuses' and babies' exposure to contaminants. Some of the emerging BPs may be present at trace or ultratrace levels in humans. However, existing analytical methods for breast milk cannot quantify these BPs simultaneously with high sensitivity using a small sampling weight, which is important for human biomonitoring studies. In this paper, a method based on Bond Elut Enhanced Matrix Removal-Lipid purification, pyridine-3-sulfonyl chloride derivatization, and liquid chromatography electrospray tandem mass spectrometry was developed. The method requires only a small quantity of sample (200 μL) and allowed for the simultaneous determination of 24 BPs in breast milk with ultrahigh sensitivity. The limits of quantitation of the proposed method were 0.001-0.200 μg L⁻¹, which were 1-6.7 times lower than the only study for the simultaneous analysis of bisphenol analogs in breast milk, based on a 3 g sample weight. The mean recoveries ranged from 86.11% to 119.05% with relative standard deviation (RSD) ≤ 19.5% (n = 6). Matrix effects were within 20%. In the breast milk samples analyzed, besides BPA, bisphenol F (BPF), bisphenol S (BPS), and bisphenol AF (BPAF) were detected. BPA was still the dominant BP, followed by BPF. This is the first report describing the occurrence of BPF and BPAF in breast milk.

  11. Estimate of the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model: a sensitivity analysis

    International Nuclear Information System (INIS)

    Guerrieri, A.

    2009-01-01

    In this report the largest Lyapunov characteristic exponent of a high-dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the spatial resolution and the values of some parameters employed by the model. Both chaotic and non-chaotic regimes of circulation have been found.
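    As a minimal sketch of what "estimating the largest Lyapunov characteristic exponent numerically" means, the toy example below applies the standard average-log-stretching estimate to the one-dimensional logistic map, whose exact exponent at r = 4 is ln 2. The report's object of study is a full atmospheric GCM; this map is only an illustration of the numerical idea, and all values here are assumptions.

    ```python
    import math

    # Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
    # average the log of the local stretching factor |f'(x)| = |r*(1-2x)|
    # along a long orbit, after discarding a transient.

    def largest_lyapunov(r=4.0, n_transient=1000, n_iter=200_000, x0=0.123456):
        x = x0
        for _ in range(n_transient):          # discard the transient
            x = r * x * (1.0 - x)
        acc = 0.0
        for _ in range(n_iter):               # Birkhoff average of log|f'(x)|
            acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
            x = r * x * (1.0 - x)
        return acc / n_iter

    lle = largest_lyapunov()
    print(f"estimated LLE = {lle:.4f}  (exact ln 2 = {math.log(2):.4f})")
    ```

    A positive estimate signals a chaotic regime, a non-positive one a regular regime, which is the distinction the sensitivity analysis above draws as the model parameters are varied.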

  12. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  13. An UPLC-MS/MS method for highly sensitive high-throughput analysis of phytohormones in plant tissues

    Directory of Open Access Journals (Sweden)

    Balcke Gerd Ulrich

    2012-11-01

    Full Text Available Abstract Background: Phytohormones are the key metabolites participating in the regulation of multiple functions of the plant organism. Among them, jasmonates, as well as abscisic and salicylic acids, are responsible for triggering and modulating plant reactions targeted against pathogens and herbivores, as well as resistance to abiotic stress (drought, UV irradiation and mechanical wounding). These factors induce dramatic changes in phytohormone biosynthesis and transport, leading to rapid local and systemic stress responses. Understanding the underlying mechanisms is of principal interest for scientists working in various areas of plant biology. However, highly sensitive, precise and high-throughput methods for quantification of these phytohormones in small samples of plant tissues are still missing. Results: Here we present an LC-MS/MS method for fast and highly sensitive determination of jasmonates and abscisic and salicylic acids. A single-step sample preparation procedure based on mixed-mode solid phase extraction was efficiently combined with essential improvements in mobile phase composition, yielding higher efficiency of chromatographic separation and MS sensitivity. This strategy resulted in a dramatic increase in overall sensitivity, allowing successful determination of phytohormones in small (less than 50 mg of fresh weight) tissue samples. The method was completely validated in terms of analyte recovery, sensitivity, linearity and precision. Additionally, it was cross-validated with a well-established GC-MS-based procedure and its applicability to a variety of plant species and organs was verified. Conclusion: The method can be applied for the analysis of target phytohormones in small tissue samples obtained from any plant species and/or plant part relying on any commercially available (even less sensitive) tandem mass spectrometry instrumentation.

  14. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect, which, in a vaccine trial, captures the effect of the vaccine given to one person in protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding, which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  15. Sensitivity Analysis of Expected Wind Extremes over the Northwestern Sahara and High Atlas Region.

    Science.gov (United States)

    Garcia-Bustamante, E.; González-Rouco, F. J.; Navarro, J.

    2017-12-01

    A robust statistical framework in the scientific literature allows for the estimation of probabilities of occurrence of severe wind speeds and wind gusts, but it does not prevent large uncertainties associated with the particular numerical estimates. An analysis of such uncertainties is thus required. A large portion of this uncertainty arises from the fact that historical observations are inherently shorter than the timescales of interest for the analysis of return periods. Additional uncertainties stem from the different choices of probability distributions and other aspects related to methodological issues or the physical processes involved. The present study is focused on historical observations over the Ouarzazate Valley (Morocco) and on a high-resolution regional simulation of the wind in the area of interest. The aim is to provide extreme wind speed and wind gust return values and confidence ranges based on a systematic sampling of the uncertainty space for return periods of up to 120 years.
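    A common core of such extreme-value analyses is fitting a distribution to annual maxima and reading off return levels. The sketch below fits a Gumbel distribution by the method of moments to synthetic "annual maxima"; the actual study samples the uncertainty space far more systematically, and the parameter values here are assumptions, not Ouarzazate data.

    ```python
    import math, random

    # Gumbel fit to synthetic annual wind-speed maxima, then T-year return
    # levels z_T = mu - beta*ln(-ln(1 - 1/T)).

    random.seed(42)
    MU_TRUE, BETA_TRUE = 22.0, 3.0           # hypothetical Gumbel parameters (m/s)
    annual_maxima = [MU_TRUE - BETA_TRUE * math.log(-math.log(random.random()))
                     for _ in range(60)]     # 60 synthetic "years"

    def gumbel_fit(xs):
        """Method of moments: beta = s*sqrt(6)/pi, mu = mean - gamma*beta."""
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / (n - 1)
        beta = math.sqrt(6.0 * var) / math.pi
        mu = mean - 0.5772156649 * beta      # Euler-Mascheroni constant
        return mu, beta

    def return_level(mu, beta, T):
        """Level exceeded on average once every T years."""
        return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

    mu, beta = gumbel_fit(annual_maxima)
    for T in (10, 50, 120):
        print(f"{T:4d}-year return level: {return_level(mu, beta, T):.1f} m/s")
    ```

    Repeating the fit over resampled records, alternative distributions and methodological choices is what generates the confidence ranges the study reports.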

  16. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological...... point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only...... are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...

  17. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research.
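    The elementary sensitivity δu_i/δα_j can be checked on the simplest kinetic system, first-order decay, where it has a closed form. The central-difference check below stands in for the Green's function machinery of the paper, which yields the same quantities for general space-time kinetics; the specific rate constant and initial condition are arbitrary assumptions.

    ```python
    import math

    # For first-order decay u(t) = u0*exp(-k*t), the elementary sensitivity
    # with respect to the rate constant is du/dk = -t*u0*exp(-k*t).
    # A central finite difference over k reproduces the closed form.

    def u(t, u0, k):
        return u0 * math.exp(-k * t)

    u0, k, t = 2.0, 1.3, 0.8
    h = 1e-6                                 # perturbation of the rate constant
    fd = (u(t, u0, k + h) - u(t, u0, k - h)) / (2.0 * h)
    exact = -t * u0 * math.exp(-k * t)
    print(f"finite-difference du/dk = {fd:.8f}, exact = {exact:.8f}")
    ```

    The Green's function approach delivers such sensitivities for all parameters at once rather than one perturbation at a time, which is its advantage for parameters that vary in space and time.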

  18. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs

  19. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    Science.gov (United States)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of the sensitivity of the MM5 mesoscale model to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (south of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely in the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were carried out applying the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for the 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data; however, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially conflictive subregions where precipitation is scarcely captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very

  20. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  1. EV range sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)

    2010-07-01

    This presentation included a sensitivity analysis of the effect of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range: rolling resistance, aerodynamic drag, motor efficiency and vehicle mass. Drive cycles that were presented included the New York City Cycle (NYCC), the urban dynamometer drive cycle and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. Aerodynamic drag had a large effect on US06 range, while motor efficiency and vehicle mass had a large effect on NYCC range. figs.
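    A steady-speed road-load model shows why the contributors rank differently per drive cycle: rolling resistance is speed-independent, while aerodynamic drag grows with v², so aero dominates on a high-speed cycle like US06 and mass-linked rolling losses dominate in city driving. The vehicle parameters below are generic assumptions, not Azure Dynamics figures.

    ```python
    # Road-load split between rolling resistance and aerodynamic drag:
    # tractive force at steady speed v is  F = Crr*m*g + 0.5*rho*Cd*A*v^2.
    # All parameter values are generic illustrative assumptions.

    G, RHO = 9.81, 1.2                       # gravity (m/s^2), air density (kg/m^3)
    M, CRR = 1500.0, 0.010                   # vehicle mass (kg), rolling coefficient
    CD, A = 0.30, 2.2                        # drag coefficient, frontal area (m^2)

    def road_load(v):
        """Return (rolling, aero) force components in N at speed v (m/s)."""
        rolling = CRR * M * G
        aero = 0.5 * RHO * CD * A * v * v
        return rolling, aero

    for label, kmh in (("city (NYCC-like)", 30.0), ("highway (US06-like)", 110.0)):
        v = kmh / 3.6
        rolling, aero = road_load(v)
        share = aero / (rolling + aero) * 100.0
        print(f"{label:20s} aero share of road load: {share:4.1f}%")
    ```

    Because rolling force scales with mass and is constant in speed, the same model also explains why mass matters most on the low-speed NYCC cycle.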

  2. Aroma analysis and quality control of food using highly sensitive analytical methods

    International Nuclear Information System (INIS)

    Mayr, D.

    2003-02-01

    This thesis deals with the development of quality control methods for food based on headspace measurements by Proton-Transfer-Reaction Mass-Spectrometry (PTR-MS) and with aroma analysis of food using PTR-MS and Gas Chromatography-Olfactometry (GC-O). An objective method was developed for determining the quality of a herb extract; until now this quality had been checked by sensory analysis. The concentrations of the volatile organic compounds (VOCs) in the headspace of 81 different batches were measured by PTR-MS. Based on the sensory judgment of the customer, characteristic differences in the emissions of 'good' and 'bad' quality samples were identified and a method for the quality control of this herb extract was developed. This novel method enables the producing company to check and ensure that it sells only high-quality products and thereby avoid customer complaints. Furthermore, the method can be used for controlling, optimizing and automating the production process. VOCs emitted by meat were investigated using PTR-MS to develop a rapid, non-destructive and quantitative technique for determining the microbial contamination of meat. Meat samples (beef, pork and poultry) wrapped in different kinds of packaging (air and vacuum) were stored at 4 °C for up to 13 days. The emitted VOCs were measured as a function of storage time and partly identified. The concentrations of many of the measured VOCs, e.g. sulfur compounds like methanethiol, dimethylsulfide and dimethyldisulfide, increased strongly over the storage time. There were large differences between the emissions of air- and vacuum-packed meat. VOCs typically emitted by air-packaged meat were methanethiol, dimethylsulfide and dimethyldisulfide, while ethanol and methanol were found in vacuum-packaged meat. A comparison of the PTR-MS results with those obtained by a bacteriological examination performed at the same time showed strong correlations (up to 99%) between the

  3. Sensitivity Analysis of the Influence of Structural Parameters on Dynamic Behaviour of Highly Redundant Cable-Stayed Bridges

    Directory of Open Access Journals (Sweden)

    B. Asgari

    2013-01-01

    Full Text Available The model tuning through sensitivity analysis is a prominent procedure to assess the structural behavior and dynamic characteristics of cable-stayed bridges. Most of the previous sensitivity-based model tuning methods are automatic iterative processes; however, the results of recent studies show that the most reasonable results are achievable by applying the manual methods to update the analytical model of cable-stayed bridges. This paper presents a model updating algorithm for highly redundant cable-stayed bridges that can be used as an iterative manual procedure. The updating parameters are selected through the sensitivity analysis which helps to better understand the structural behavior of the bridge. The finite element model of Tatara Bridge is considered for the numerical studies. The results of the simulations indicate the efficiency and applicability of the presented manual tuning method for updating the finite element model of cable-stayed bridges. The new aspects regarding effective material and structural parameters and model tuning procedure presented in this paper will be useful for analyzing and model updating of cable-stayed bridges.

  4. Radioimmunoassay (RIA), a highly specific, extremely sensitive quantitative method of analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strecker, H; Hachmann, H; Seidel, L [Farbwerke Hoechst A.G., Frankfurt am Main (Germany, F.R.). Radiochemisches Lab.

    1979-02-01

    Radioimmunoassay is an analytical method combining the sensitivity of radioactivity measurements with the specificity of the antigen-antibody reaction. Thus, substances can be measured in concentrations as low as picograms per milliliter of serum in the presence of a millionfold excess of otherwise disturbing material (for example, in serum). The method is simple to perform and is at present mainly used in the field of endocrinology. Further areas of possible application are the diagnosis of infectious diseases, drug research, environmental protection, forensic medicine and general analytics. The quantities of radioactivity, used exclusively in vitro, are in the nanocurie range; the radiation dose is therefore negligible.

  5. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    Science.gov (United States)

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
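    The counts reported above imply a 2×2 confusion matrix: FLOCK found discrete mast cell populations in 56 of 75 SM cases and in 17 of 124 non-SM cases. The four quoted performance figures follow directly, to within rounding:

    ```python
    # Confusion matrix implied by the reported FLOCK case counts.
    tp, fn = 56, 75 - 56        # SM cases: flagged / missed
    fp, tn = 17, 124 - 17       # non-SM cases: flagged / correctly negative

    sensitivity = tp / (tp + fn)            # 56/75
    specificity = tn / (tn + fp)            # 107/124
    ppv = tp / (tp + fp)                    # 56/73
    npv = tn / (tn + fn)                    # 107/126

    for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                        ("PPV", ppv), ("NPV", npv)]:
        print(f"{name}: {value:.1%}")
    ```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on the SM prevalence in this particular cohort (75 of 199 cases) and would shift in a population with a different case mix.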

  6. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Full Text Available Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs), but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantifying the involvement of eosinophils in inflammation have been based only on cell counting, we developed a new method for the cell-independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over a 16-week observation period. The method may be valuable for the cell-independent segmentation of immunostaining in other applications as well.

  7. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    Science.gov (United States)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

    Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions was included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to measurement time. Results: The area under the receiver operating characteristic (ROC) curve improved from 0.861-0.891 to 0.891-0.911, and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.

  8. Methods of high-sensitive analysis of actinides in liquid radioactive waste

    International Nuclear Information System (INIS)

    Diakov, Alexandre A.; Perekhozheva, Tatiana N.; Zlokazova, Elena I.

    2002-01-01

    A complex of methods has been developed to determine actinides in liquid radioactive wastes for solving problems of the radiation, nuclear and ecological safety of nuclear reactors. The main method is based on the radiochemical separation of U, Np-Pu and Am-Cm on ion-exchange and extraction columns. Identification of the radionuclides and determination of their content are performed using alpha spectrometry. The microconcentrations of the sum of the main fissile materials, U-235 and Pu-239, are determined using plastic track detectors. An independent method for determining the U-238 content is neutron activation analysis. The Am-241 content can be determined by gamma spectrometry. (author)

  9. Sample performance assessment of a high-level radioactive waste repository: sensitivity analysis

    International Nuclear Information System (INIS)

    Tkaczyk, A.

    2001-01-01

    The Yucca Mountain Project (YMP) is the USA's first attempt at long-term storage of High-Level Radioactive Waste (HLW). In theory, the reasoning for such a repository seems sound. In practice, there are many scenarios and cases to be considered while putting such a project into effect. Since a goal of YMP is to minimize dangers associated with long-term storage of HLW, it is important to estimate the dose rate to which current and future generations will be subjected. The lifetime of the repository is simulated to indicate the radiation dose rate to the maximally exposed individual; it is assumed that if the maximally exposed individual would not be harmed by the annual dose, the remaining population will be at even smaller risk. The determination of what levels of exposure can be deemed harmless is a concern, and the results from the simulations as compared against various regulations are discussed. (author)

  10. Sample performance assessment of a high-level radioactive waste repository: sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tkaczyk, A. [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Mechanical Engineering

    2001-07-01

    The Yucca Mountain Project (YMP) is the USA's first attempt at long-term storage of High-Level Radioactive Waste (HLW). In theory, the reasoning for such a repository seems sound. In practice, there are many scenarios and cases to be considered while putting such a project into effect. Since a goal of YMP is to minimize dangers associated with long-term storage of HLW, it is important to estimate the dose rate to which current and future generations will be subjected. The lifetime of the repository is simulated to indicate the radiation dose rate to the maximally exposed individual; it is assumed that if the maximally exposed individual would not be harmed by the annual dose, the remaining population will be at even smaller risk. The determination of what levels of exposure can be deemed harmless is a concern, and the results from the simulations as compared against various regulations are discussed. (author)

  11. A Highly Sensitive Multicommuted Flow Analysis Procedure for Photometric Determination of Molybdenum in Plant Materials without a Solvent Extraction Step

    Directory of Open Access Journals (Sweden)

    Felisberto G. Santos

    2017-01-01

    Full Text Available A highly sensitive analytical procedure for photometric determination of molybdenum in plant materials was developed and validated. This procedure is based on the reaction of Mo(V) with thiocyanate ions (SCN⁻) in acidic medium to form a compound that can be monitored at 474 nm and was implemented employing a multicommuted flow analysis setup. Photometric detection was performed using an LED-based photometer coupled to a flow cell with a long optical path length (200 mm) to achieve high sensitivity, allowing Mo(V) determination at the μg L⁻¹ level without the use of an organic solvent extraction step. After optimization of operational conditions, samples of digested plant materials were analyzed employing the proposed procedure. The accuracy was assessed by comparing the obtained results with those of a reference method, with agreement observed at the 95% confidence level. In addition, a detection limit of 9.1 μg L⁻¹, a linear response (r = 0.9969) over the concentration range of 50-500 μg L⁻¹, generation of only 3.75 mL of waste per determination, and a sampling rate of 51 determinations per hour were achieved.
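    Figures of merit like the detection limit and linear range above follow from a standard calibration-curve treatment. The sketch below shows the usual computation (least-squares slope and correlation, then LOD = k·σ_blank/slope with k = 3); all numbers are invented for illustration and are not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b, r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((p - mx) * (q - my) for p, q in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

def detection_limit(blank_signals, slope, k=3.0):
    """LOD = k * (sample std dev of blank signal) / calibration slope."""
    n = len(blank_signals)
    m = sum(blank_signals) / n
    sd = (sum((s - m) ** 2 for s in blank_signals) / (n - 1)) ** 0.5
    return k * sd / slope

# invented calibration: concentrations in ug/L vs absorbance
conc = [50.0, 100.0, 200.0, 400.0, 500.0]
signal = [0.1, 0.2, 0.4, 0.8, 1.0]
a, b, r = linear_fit(conc, signal)
blanks = [0.001, 0.003, 0.002, 0.002, 0.002]
lod = detection_limit(blanks, b)
```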

  12. Analysis of 10 metabolites of polymethoxyflavones with high sensitivity by electrochemical detection in high-performance liquid chromatography.

    Science.gov (United States)

    Zheng, Jinkai; Bi, Jinfeng; Johnson, David; Sun, Yue; Song, Mingyue; Qiu, Peiju; Dong, Ping; Decker, Eric; Xiao, Hang

    2015-01-21

    Polymethoxyflavones (PMFs) are known as a type of bioactive flavones that possess various beneficial biological functions. Accumulating evidence has demonstrated that the metabolites of PMFs, that is, hydroxyl PMFs (OH-PMFs), have more potent beneficial biological effects than their corresponding parent PMFs. To facilitate the further identification and quantification of OH-PMFs in biological samples, the aim of this study was to develop a methodology for the simultaneous determination of 10 OH-PMFs using high-performance liquid chromatography (HPLC) coupled with electrochemical detection. The HPLC profiles of these 10 OH-PMFs as affected by different chromatographic parameters (organic composition of the mobile phases, concentration of trifluoroacetic acid, and concentration of ammonium acetate) are fully discussed in this study. The optimal condition was selected for the subsequent validation studies. The linearity of calibration curves, accuracy, and precision (intra- and interday) at three concentration levels (low, middle, and high) were verified. The regression equations were linear (r > 0.9992) over the range of 0.005-10 μM. The limit of detection for the 10 OH-PMFs was in the range of 0.8-3.7 ng/mL (S/N = 3, 10 μL injection). The recovery rates ranged from 86.6 to 108.7%. The intraday and interday relative standard deviations were less than 7.37 and 8.63%, respectively. This validated method was applied to the analysis of a variety of samples containing OH-PMFs. The paper also gives an example of analyzing the metabolites of nobiletin in mouse urine using the developed method: the transformation of nobiletin to traces of 5-hydroxyl metabolites was detected, and this is the first report of such a transformation.

  13. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
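    A variance-based (Sobol-type) first-order sensitivity index of the kind cited from Saltelli et al. can be estimated with plain Monte Carlo "pick-freeze" sampling: two independent input samples are drawn, and for each input the index is read off the covariance between outputs that share only that coordinate. The sketch below uses a toy linear model with known analytic indices; it illustrates the variance-based idea only and is not the paper's biochemical model.

```python
import random

def sobol_first_order(model, dim, n=50000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model with independent Uniform(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    f0 = sum(yA) / n
    var = sum((y - f0) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # ABi shares only coordinate i with A, so Cov(yA, yABi) = V_i
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yABi = [model(x) for x in ABi]
        cov = sum(p * q for p, q in zip(yA, yABi)) / n - f0 ** 2
        indices.append(cov / var)
    return indices

# toy model Y = 4*X1 + 2*X2: analytic indices are S1 = 0.8, S2 = 0.2
s = sobol_first_order(lambda x: 4 * x[0] + 2 * x[1], 2)
```

    For an additive model the indices sum to one; interaction effects in nonlinear models show up as a shortfall, which is what higher-order variance-based analysis quantifies.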

  14. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
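    The complex-variable approach used to verify the DYMORE sensitivities rests on the complex-step derivative: evaluating f(x + ih) and taking the imaginary part avoids the subtractive cancellation of finite differences, so the step h can be made tiny. A minimal sketch on a scalar function (the function itself is invented for illustration):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: Im(f(x + i*h)) / h. No subtraction of
    nearly equal quantities occurs, so h can be far below machine epsilon."""
    return f(complex(x, h)).imag / h

def f(x):
    # any analytic function works; cmath handles complex arguments
    return x * cmath.sin(x)

x0 = 1.5
exact = math.sin(x0) + x0 * math.cos(x0)           # analytic derivative
cs = complex_step_derivative(f, x0)                 # complex-step estimate
fd = (f(x0 + 1e-8).real - f(x0 - 1e-8).real) / 2e-8  # central difference
```

    The complex-step result agrees with the analytic derivative to machine precision, while the central difference is limited to roughly single precision; this is why the approach is well suited for verifying adjoint-based sensitivities.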

  15. Development of the high-order decoupled direct method in three dimensions for particulate matter: enabling advanced sensitivity analysis in air quality models

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2012-03-01

    Full Text Available The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity analysis of ISORROPIA, the inorganic aerosol module of CMAQ. A case-specific approach has been applied, and the sensitivities of activity coefficients and water content are explicitly computed. Stand-alone tests are performed for ISORROPIA by comparing the sensitivities (first- and second-order) computed by HDDM with brute force (BF) approximations. A similar comparison has also been carried out for CMAQ sensitivities simulated using a week-long winter episode for a continental US domain. Second-order sensitivities of aerosol species (e.g., sulfate, nitrate, and ammonium) with respect to domain-wide SO2, NOx, and NH3 emissions show agreement with BF results, yet exhibit less noise in locations where BF results are demonstrably inaccurate. Second-order sensitivity analysis elucidates poorly understood nonlinear responses of secondary inorganic aerosols to their precursors and competing species. Adding second-order sensitivity terms to the Taylor series projection of the nitrate concentrations under a 50% reduction in domain-wide NOx or SO2 emission rates improves the prediction with statistical significance.
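    The Taylor-series projection mentioned in the conclusion can be illustrated with a toy quadratic response, where first- and second-order sensitivity coefficients reproduce the perturbed response exactly while the first-order projection alone misses the curvature. The response function and coefficients below are invented for illustration, not CMAQ output.

```python
def taylor_projection(c0, s1, s2, delta):
    """Second-order Taylor projection of a concentration under an
    emission perturbation: C(e + d) ~ C(e) + S1*d + 0.5*S2*d^2."""
    return c0 + s1 * delta + 0.5 * s2 * delta ** 2

# toy quadratic response C(e) = 10 + 4e + 3e^2, evaluated around e = 1
c = lambda e: 10 + 4 * e + 3 * e ** 2
c0 = c(1.0)          # baseline concentration, 17.0
s1 = 4 + 6 * 1.0     # dC/de at e = 1
s2 = 6.0             # d2C/de2 (constant for a quadratic)
delta = -0.5         # a 50% cut of the baseline emission rate

second_order = taylor_projection(c0, s1, s2, delta)  # exact: 12.75
first_order = c0 + s1 * delta                        # misses by 0.75
```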

  16. Evaluation of treatment effects for high-performance dye-sensitized solar cells using equivalent circuit analysis

    International Nuclear Information System (INIS)

    Murayama, Masaki; Mori, Tatsuo

    2006-01-01

    Equivalent circuit analysis using a one-diode model was carried out as a simpler, more convenient method to evaluate the electrical mechanism and the effectiveness of treatments of a dye-sensitized solar cell (DSC). Cells treated using acetic acid or 4-t-butylpyridine were measured under irradiation (0.1 W/m², AM 1.5) to obtain current-voltage (I-V) curves. Cell performance and equivalent circuit parameters were calculated from the I-V curves. Evaluation based on residual factors was useful for better fitting of the equivalent circuit to the I-V curve. The diode factor value was often over two for high-performance DSCs. Acetic acid treatment was effective in increasing the short-circuit current by decreasing the series resistance of cells. In contrast, 4-t-butylpyridine was effective in increasing the open-circuit voltage by increasing the cell shunt resistance. Previous explanations considered that acetic acid worked to decrease the internal resistance of the TiO₂ layer and butylpyridine worked to lower the back-electron-transfer from the TiO₂ to the electrolyte
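    The one-diode model treats the cell as a photocurrent source in parallel with a diode and a shunt resistance, plus a series resistance, which makes the current at a given voltage implicit. A minimal sketch of solving that implicit equation by damped fixed-point iteration; the parameter values are illustrative only, not the cells measured in the paper.

```python
import math

def diode_current(v, iph, i0, n, rs, rsh, t=298.15, iters=200):
    """Solve the implicit one-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for I at voltage V, by damped fixed-point iteration."""
    vt = 1.380649e-23 * t / 1.602176634e-19  # thermal voltage kT/q
    i = iph  # start from the photocurrent as an initial guess
    for _ in range(iters):
        i_new = (iph
                 - i0 * (math.exp((v + i * rs) / (n * vt)) - 1)
                 - (v + i * rs) / rsh)
        i = 0.5 * i + 0.5 * i_new  # damping for stability
    return i

# illustrative parameters: Iph = 20 mA, I0 = 1 nA, diode factor n = 2
isc = diode_current(0.0, 0.02, 1e-9, 2.0, 0.0, 1000.0)   # ~ Iph at V = 0
i_mid = diode_current(0.5, 0.02, 1e-9, 2.0, 0.01, 1000.0)  # lower at 0.5 V
```

    Fitting Iph, I0, n, Rs and Rsh to a measured I-V curve (e.g. by least squares on the residuals) is the "equivalent circuit analysis" step; a diode factor n over two, as reported above, falls out of that fit.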

  17. Analysis of leachability for a sandstone uranium deposite with high acid consumption and sensitivities in Inner Mongolia

    International Nuclear Information System (INIS)

    Cheng Wei; Miao Aisheng; Li Jianhua; Zhou Lei; Chang Jingtao

    2014-01-01

    The in-situ leaching adaptability of a groundwater oxidation zone type sandstone uranium deposit in Inner Mongolia is studied. The ore of the deposit shows high acid consumption and high sensitivities in in-situ leaching. A leaching process using CO₂ + O₂ as the agent, with adjustment of the HCO₃⁻ concentration, is suitable for the deposit. (authors)

  18. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  19. Folded cladding porous shaped photonic crystal fiber with high sensitivity in optical sensing applications: Design and analysis

    Directory of Open Access Journals (Sweden)

    Bikash Kumar Paul

    2017-02-01

    Full Text Available A microstructured folded-cladding porous-shaped photonic crystal fiber (FP-PCF) with circular air holes is proposed and numerically investigated over a broad wavelength range from 1.4 µm to 1.64 µm (E+S+C+L+U bands) for chemical sensing purposes. Employing the finite element method (FEM) with an anisotropic perfectly matched layer (PML), various properties of the proposed FP-PCF are numerically examined. Filling the core holes with the aqueous analyte ethanol (n = 1.354) and tuning different geometric parameters of the fiber, a sensitivity of 64.19% and a confinement loss of 2.07 × 10⁻⁵ dB/m are attained at 1.48 µm wavelength in the S band. These numerical results strongly support sensing applications, because the fiber attains higher sensitivity with lower confinement loss over the operating wavelength range. Confinement loss was examined alongside sensitivity; it depends strongly on PML depth, whereas the sensitivity does not. Besides the above properties, the numerical aperture (NA), nonlinearity, and effective area are also computed. The FP-PCF also performs as a sensor for other alcohols (methanol, propanol, butanol, pentanol). The optimized FP-PCF shows higher sensitivity and low confinement loss, making it attractive for chemical as well as gas sensing applications. Keywords: Confinement loss, Effective area, Index guiding FP-PCF, Numerical aperture, Nonlinear coefficient, Sensitivity

  20. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution based sensitivity analysis (DSA) computes sensitivity of the input random variables with respect to the change in distribution of output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational issue associated with this method prohibits its use for complex structures involving costly finite element analysis. For addressing this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis of variance decomposition, extended bases and homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, proposed approach yields excellent results with significantly reduced computational effort. The results obtained, to some extent, indicate that proposed approach can be utilized for sensitivity analysis of large scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.

  1. Highly sensitive analysis of boron and lithium in aqueous solution using dual-pulse laser-induced breakdown spectroscopy.

    Science.gov (United States)

    Lee, Dong-Hyoung; Han, Sol-Chan; Kim, Tae-Hyeong; Yun, Jong-Il

    2011-12-15

    We have applied dual-pulse laser-induced breakdown spectroscopy (DP-LIBS) to sensitively detect concentrations of boron and lithium in aqueous solution. Sequential laser pulses from two separate Q-switched Nd:YAG lasers at 532 nm wavelength have been employed to generate laser-induced plasma on a water jet. For achieving sensitive elemental detection, the optimal timing between the two laser pulses was investigated. The optimum time delay was found to be less than 3 μs for the B atomic emission lines and approximately 10 μs for the Li atomic emission line. Under these optimized conditions, detection limits of 0.8 ppm for boron and 0.8 ppb for lithium were attained. In particular, the sensitivity for detecting boron by excitation of the laminar liquid jet was better by nearly 2 orders of magnitude than the 80 ppm reported in the literature. These sensitivities of laser-induced breakdown spectroscopy are very practical for the online elemental analysis of boric acid and lithium hydroxide, which serve as neutron absorber and pH controller, respectively, in the primary coolant water of pressurized water reactors.

  2. High resolution, high sensitivity imaging and analysis of minerals and inclusions (fluid and melt) using the new CSIRO-GEMOC nuclear microprobe

    International Nuclear Information System (INIS)

    Ryan, C.G.; McInnes, B.M.; Van Achterbergh, E.; Williams, P.J.; Dong, G.; Zaw, K.

    1999-01-01

    Full text: The new CSIRO-GEMOC Nuclear Microprobe (NMP). The instrument was designed specifically for minerals analysis and imaging and to achieve ppm to sub-ppm sensitivity at a spatial resolution of 1-2 μm using X-rays and γ-rays induced by MeV energy ion beams. The key feature of the design is a unique magnetic quadrupole quintuplet ion focussing system that combines high current with high spatial resolution (Ryan et al., 1999). These design goals have been achieved or exceeded. On the first day of operation, a spot-size of 1.3 μm was obtained at a beam current of 0.5 nA, suitable for fluid inclusion analysis and imaging. The spot-size grows to just 1.8 μm at 10 nA (3 MeV protons), ideal for mineralogical samples, with detection limits down to 0.2 ppm achieved in quantitative, high resolution, trace element images. Applications of the NMP include: research into ore deposit processes through trace element geochemistry, mineralogy and fluid inclusion analysis of ancient deposits and active sea-floor environments, ore characterization, and fundamental studies of mantle processes and extraterrestrial material. Quantitative true elemental imaging: Dynamic Analysis is a method for projecting quantitative major and trace element images from proton-induced X-ray emission (PIXE) data obtained using the NMP (Ryan et al., 1995). The method un-mixes full elemental spectral signatures to produce quantitative images that can be directly interrogated for the concentrations of all elements in selected areas or line projections, etc. Fluid inclusion analysis and imaging: the analysis of fluids trapped as fluid inclusions in minerals holds the key to understanding ore metal pathways and ore formation processes. PIXE analysis using the NMP provides a direct non-destructive method to determine the composition of these trapped fluids with detection limits down to 20 ppm. However, some PIXE results have been controversial, such as the strong partitioning of Cu into the vapour phase (e

  3. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  4. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
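    The "pinching" idea described above can be illustrated with plain interval arithmetic: replace an uncertain input interval by a point value and observe how much the output bounds tighten; the tightening indicates that input's contribution to the overall uncertainty. The toy risk model and numbers below are invented for illustration.

```python
def interval_add(a, b):
    """Sum of two intervals (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Product of two intervals: bound by the four endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def width(iv):
    return iv[1] - iv[0]

# toy model: risk = rate * exposure + background, all inputs as intervals
rate, exposure, background = (1.0, 2.0), (3.0, 5.0), (0.0, 1.0)
full = interval_add(interval_mul(rate, exposure), background)       # (3, 11)

# "pinch" exposure to a precise value and re-propagate
pinched = interval_add(interval_mul(rate, (4.0, 4.0)), background)  # (4, 9)
reduction = width(full) - width(pinched)  # how much uncertainty exposure drives
```

    A full PBA works with bounds on probability distributions (p-boxes) rather than bare intervals, but the sensitivity-by-pinching logic is the same.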

  5. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing a method for evaluating the derivative of dose with respect to parameter values and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)

  6. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves for quantitative models of physical objects the same purpose, as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing SA provides computer-efficient means to compute the jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The jacobians are used to solve corresponding inver...

  7. Sensitivity analysis on mechanical stability of the underground excavations for an high-level radioactive waste repository

    International Nuclear Information System (INIS)

    Park, Jeong Hwa; Kwon, Sang Ki; Choi, Jong Won; Kang, Chul Hyung

    2001-01-01

    For the safe design of an underground nuclear waste repository, it is necessary to investigate the influence of the major parameters on tunnel stability. In this study, sensitivity analysis was carried out to find the major parameters from the mechanical stability point of view. Fourteen parameters, consisting of 10 site parameters and 4 design parameters, were included in the FLAC3D analyses. From the numerical analyses employing single-parameter variation, it was possible to determine the important parameters. In order to investigate the interactions between the site parameters, a fractional factorial design for parameters such as in situ stress ratio, depth, tunnel dimensions, joint spacing, joint stiffness, friction angle, and rock strength was carried out. In order to investigate the interactions between the design parameters, a fractional factorial design for parameters such as in situ stress, depth, tunnel size, tunnel spacing and borehole spacing was carried out

  8. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.

  9. Analysis of ultra-high sensitivity configuration in chip-integrated photonic crystal microcavity bio-sensors

    Energy Technology Data Exchange (ETDEWEB)

    Chakravarty, Swapnajit, E-mail: swapnajit.chakravarty@omegaoptics.com; Hosseini, Amir; Xu, Xiaochuan [Omega Optics, Inc., Austin, Texas 78757 (United States); Zhu, Liang; Zou, Yi [Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, Texas 78758 (United States); Chen, Ray T., E-mail: raychen@uts.cc.utexas.edu [Omega Optics, Inc., Austin, Texas 78757 (United States); Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, Texas 78758 (United States)

    2014-05-12

    We analyze the contributions of quality factor, fill fraction, and group index of chip-integrated resonance microcavity devices, to the detection limit for bulk chemical sensing and the minimum detectable biomolecule concentration in biosensing. We analyze the contributions from analyte absorbance, as well as from temperature and spectral noise. Slow light in two-dimensional photonic crystals provide opportunities for significant reduction of the detection limit below 1 × 10⁻⁷ RIU (refractive index unit) which can enable highly sensitive sensors in diverse application areas. We demonstrate experimentally detected concentration of 1 fM (67 fg/ml) for the binding between biotin and avidin, the lowest reported to date.

  10. Analysis of ultra-high sensitivity configuration in chip-integrated photonic crystal microcavity bio-sensors

    International Nuclear Information System (INIS)

    Chakravarty, Swapnajit; Hosseini, Amir; Xu, Xiaochuan; Zhu, Liang; Zou, Yi; Chen, Ray T.

    2014-01-01

    We analyze the contributions of quality factor, fill fraction, and group index of chip-integrated resonance microcavity devices, to the detection limit for bulk chemical sensing and the minimum detectable biomolecule concentration in biosensing. We analyze the contributions from analyte absorbance, as well as from temperature and spectral noise. Slow light in two-dimensional photonic crystals provide opportunities for significant reduction of the detection limit below 1 × 10⁻⁷ RIU (refractive index unit) which can enable highly sensitive sensors in diverse application areas. We demonstrate experimentally detected concentration of 1 fM (67 fg/ml) for the binding between biotin and avidin, the lowest reported to date

  11. Sensitivity analysis of high resolution gamma-ray detection for safeguards monitoring at natural uranium conversion facilities

    Energy Technology Data Exchange (ETDEWEB)

    Dewji, S.A., E-mail: dewjisa@ornl.gov [Oak Ridge National Laboratory, PO Box 2008 MS-6335, Oak Ridge TN 37831 (United States); Georgia Institute of Technology, 770 State Street, Atlanta, GA 30332-0745 (United States); Croft, S. [Oak Ridge National Laboratory, PO Box 2008 MS-6335, Oak Ridge TN 37831 (United States); Hertel, N.E. [Oak Ridge National Laboratory, PO Box 2008 MS-6335, Oak Ridge TN 37831 (United States); Georgia Institute of Technology, 770 State Street, Atlanta, GA 30332-0745 (United States)

    2017-03-11

    Under the policies proposed by recent International Atomic Energy Agency (IAEA) circulars and policy papers, implementation of safeguards exists when any purified aqueous uranium solution or uranium oxides suitable for isotopic enrichment or fuel fabrication exists. Under IAEA Policy Paper 18, the starting point for nuclear material under safeguards was reinterpreted, suggesting that purified uranium compounds should be subject to safeguards procedures no later than the first point in the conversion process. In response to this technical need, a combination of simulation models and experimental measurements were employed in previous work to develop and validate gamma-ray nondestructive assay monitoring systems in a natural uranium conversion plant (NUCP). In particular, uranyl nitrate (UO₂(NO₃)₂) solution exiting solvent extraction was identified as a key measurement point (KMP). Passive nondestructive assay techniques using high resolution gamma-ray spectroscopy were evaluated to determine their viability as a technical means for drawing safeguards conclusions at NUCPs, and if the IAEA detection requirements of 1 significant quantity (SQ) can be met in a timely manner. Building upon the aforementioned previous validation work on detector sensitivity to varying concentrations of uranyl nitrate via a series of dilution measurements, this work investigates detector response parameter sensitivities to gamma-ray signatures of uranyl nitrate. The full energy peak efficiency of a detection system is dependent upon the sample, geometry, absorption, and intrinsic efficiency parameters. Perturbation of these parameters translates into corresponding variations of the 185.7 keV peak area of the ²³⁵U in uranyl nitrate. Such perturbations in the assayed signature impact the quality or versatility of the safeguards conclusions drawn. Given the potentially high throughput of uranyl nitrate in NUCPs, the ability to assay 1 SQ of material requires

  12. Fiscal 2000 pioneering research on the research on high-sensitivity passive measurement/analysis technologies; 2000 nendo kokando passive keisoku bunseki gijutsu no chosa sendo kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The above-named research was brought over from the preceding fiscal year. Needs for passive measurement were investigated, and the applications named below were found to call for passive measurement technology: analysis of organic matter on semiconductor wafers, analysis of dangerous substances in wastes, measurement of substances in the living space that cause chemical hypersensitivity, measurement of constituents of gas emitted by organisms, for example through expiration, measurement for automatic sorting of plastic wastes, 2-dimensional spectrometry for medical treatment of organisms, and so forth. In the survey of technology seeds, various novel technologies were investigated in the fields of optical systems, sensors, and signal processing. The outcomes of the survey indicated that high-sensitivity measurement and analysis of spectral images, and measurement and analysis of trace quantities in the fields of medical treatment, the environment, and semiconductors, would be feasible through newly developed technologies involving the interference-array-type 2-dimensional modulation/demodulation device, a 2-dimensional high-sensitivity infrared sensor, high-sensitivity systematization technology, mixed-signal separation technology capable of suppressing noise and background light, and technology for increasing processing speeds. (NEDO)

  13. The development of a high performance liquid chromatograph with a sensitive on-stream radioactivity monitor for the analysis of 3H- and 14C-labelled gibberellins

    International Nuclear Information System (INIS)

    Reeve, D.R.; Yokota, T.; Nash, L.; Crozier, A.

    1976-01-01

    The development of a high performance liquid chromatograph for the separation of gibberellins is described. The system combines high efficiency, peak capacity, and sample capacity with rapid speed of analysis. In addition, the construction details of a sensitive on-stream radioactivity monitor are outlined. The overall versatility of the chromatograph has been demonstrated by the separation of a range of 3H- and 14C-labelled gibberellins and gibberellin precursors. The system also has considerable potential for the analysis of abscisic acid and acidic and neutral indoles. (author)

  14. How the definition of acceptable antigens and epitope analysis can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.

    Science.gov (United States)

    Heidt, Sebastiaan; Haasnoot, Geert W; Claas, Frans H J

    2018-05-24

    Highly sensitized patients awaiting a renal transplant have a low chance of receiving an organ offer. Defining acceptable antigens and using this information for allocation purposes can vastly enhance transplantation of this subgroup of patients, which is the essence of the Eurotransplant Acceptable Mismatch program. Acceptable antigens can be determined by extensive laboratory testing, as well as on the basis of human leukocyte antigen (HLA) epitope analyses. Within the Acceptable Mismatch program, there is no effect of HLA mismatches on long-term graft survival. Furthermore, patients transplanted through the Acceptable Mismatch program have long-term graft survival similar to that of nonsensitized patients transplanted through regular allocation. Although HLA epitope analysis is already being used to define acceptable HLA antigens for highly sensitized patients in the Acceptable Mismatch program, increasing knowledge of HLA antibody-epitope interactions will pave the way toward the definition of acceptable epitopes for highly sensitized patients in the future. Allocation based on acceptable antigens can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.

  15. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....
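The RACH transmission parameters named above (preamble power step, initial target, number of preambles) govern an open-loop power-ramping procedure; a minimal sketch with invented values rather than the paper's optimized ones:

```python
# Hedged sketch of RACH preamble power ramping: the UE raises its preamble
# power by a fixed step until the required level is reached or the preamble
# budget is exhausted. All dBm values here are illustrative assumptions.

def rach_ramp(initial_power_dbm, power_step_db, required_power_dbm, max_preambles=8):
    """Return (preambles sent, final power) once the required power is reached,
    or None if max_preambles is exhausted first."""
    power = initial_power_dbm
    for attempt in range(1, max_preambles + 1):
        if power >= required_power_dbm:
            return attempt, power
        power += power_step_db
    return None

print(rach_ramp(-10.0, 2.0, -4.0))  # -> (4, -4.0)
```

A larger power step reaches the target in fewer preambles but overshoots more, raising interference; that trade-off is exactly why these parameters are worth a sensitivity analysis.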

  16. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as that produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
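The kind of cut-set evaluation described above can be sketched in miniature. This is not TEMAC's matrix method; it is a toy fault tree with invented probabilities, evaluated via the common min-cut upper bound and by Monte Carlo sampling of the basic events:

```python
import random

# Toy top event given as a sum of minimal cut sets (hypothetical fault tree).
cut_sets = [("A", "B"), ("C",), ("A", "D")]
p = {"A": 0.1, "B": 0.2, "C": 0.05, "D": 0.3}   # basic-event probabilities

def mincut_upper_bound(cut_sets, p):
    """1 - prod(1 - P(cut set)), assuming independent basic events."""
    q = 1.0
    for cs in cut_sets:
        pc = 1.0
        for e in cs:
            pc *= p[e]
        q *= 1.0 - pc
    return 1.0 - q

def monte_carlo(cut_sets, p, n=100_000, seed=1):
    """Estimate P(top event) by sampling basic-event states."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        state = {e: rng.random() < pe for e, pe in p.items()}
        if any(all(state[e] for e in cs) for cs in cut_sets):
            hits += 1
    return hits / n

print(mincut_upper_bound(cut_sets, p), monte_carlo(cut_sets, p))
```

The bound (≈0.0969 here) slightly exceeds the sampled probability (≈0.092) because the cut sets share event A; risk sensitivities to each basic-event probability can then be estimated by re-evaluating either quantity with perturbed inputs.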

  17. The analysis of high sensitive C-reactive protein and diabetic nephropathy in patients with type 2 diabetes mellitus

    International Nuclear Information System (INIS)

    Xu Yan

    2007-01-01

    Objective: To investigate the changes of serum high sensitive C-reactive protein (hs-CRP) in different stages of diabetic nephropathy and their clinical significance. Methods: Serum hs-CRP was measured by enzyme-linked immunosorbent assay (ELISA), and urinary albumin (U-Alb) was measured by radioimmunoassay (RIA). According to their urinary albumin excretion rate (UAER), 102 patients with type 2 diabetes mellitus were divided into three groups: 40 patients with normal UAER, 32 patients with microalbuminuria, and 30 patients with clinical proteinuria; 32 healthy subjects were taken as controls. Results: hs-CRP concentrations were significantly higher in patients with type 2 diabetes mellitus than in healthy controls and increased with increasing UAER and serum creatinine. Conclusions: The level of hs-CRP is correlated with the extent of diabetic nephropathy in patients with type 2 diabetes. The concentration of hs-CRP can to some degree serve as a predictor of diabetic nephropathy and its progression. (authors)

  18. Possibilities and limits of digital industrial radiology: the new high contrast sensitivity technique - Examples and system theoretical analysis

    International Nuclear Information System (INIS)

    Zscherpel, U.; Ewert, U.; Bavendiek, K.

    2007-01-01

    During the last years, more and more reports about film replacement techniques have been published, using different ways to prove the required and obtained image quality. The motivation is usually cost reduction due to shorter exposure times, lower storage costs, smaller space requirements, and the elimination of chemical processing, including the associated waste handling and disposal. No other publications are known which explore the upper limits of image quality achievable by the new digital techniques. This is important for the inspection of safety-relevant and high-risk parts, e.g. in the nuclear or aerospace industries. A new calibration and measurement procedure for digital detector arrays (DDA) was explored to obtain the maximum signal/noise ratio achievable with DDAs. This procedure yields a contrast sensitivity which allows wall thickness changes as small as 1/1000 of the penetrated material thickness to be distinguished. Standard film radiography using NDT film systems (with and without lead screens) achieves a wall thickness contrast no better than 1/100, even with the best film system class (class 'C1' according to EN 584-1 or 'special' according to ASTM E 1815). Computed Radiography (CR) using phosphor imaging plates is a true film replacement technique without enhancement of the image quality compared to NDT film systems. The comparison is based on parameter studies which measure signal/noise ratios and determine the basic spatial resolution, as well as a comparison of radiological images with fine flaws. (authors)
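The link between detector signal-to-noise ratio and wall-thickness contrast sensitivity can be sketched with a back-of-envelope model: a thickness change dw modulates the transmitted intensity by roughly mu*dw, which becomes detectable once it exceeds k/SNR. The attenuation coefficient, thickness, and detectability factor k below are assumed illustrative values, not the paper's:

```python
# Back-of-envelope contrast-sensitivity model (all parameter values assumed):
# a wall-thickness change dw changes transmitted intensity by ~ mu*dw, so the
# smallest detectable relative change is k / (SNR * mu * w).

def min_detectable_dw_over_w(snr, mu_per_mm, w_mm, k=2.0):
    """Smallest detectable relative wall-thickness change at a given SNR."""
    return k / (snr * mu_per_mm * w_mm)

for snr in (100, 1000):   # film-like vs. DDA-like SNR
    print(snr, min_detectable_dw_over_w(snr, mu_per_mm=0.11, w_mm=20.0))
```

With these assumed values the model gives roughly 1/100 contrast sensitivity at SNR ≈ 100 and 1/1000 at SNR ≈ 1000, reproducing the order of magnitude of the figures quoted above.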

  19. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    Science.gov (United States)

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare . However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop

  2. High sensitivity isotope analysis with a 252Cf--235U fueled subcritical multiplier and low background photon detector systems

    International Nuclear Information System (INIS)

    Wogman, N.A.; Rieck, H.G. Jr.; Laul, J.C.; MacMurdo, K.W.

    1976-09-01

    A 252Cf activation analysis facility has been developed for routine multielement analysis of a wide variety of solid and liquid samples. The facility contains six sources of 252Cf totaling slightly over 100 mg. These sources are placed in a 93 percent 235U-enriched uranium core which is subcritical with a K-effective of 0.985 (multiplication factor of 66). The system produces a thermal flux on the order of 10 +1 neutrons per square centimeter per second. A pneumatic rabbit system permits automatic irradiation, decay, and counting regimes to be performed unattended on the samples. The activated isotopes are analyzed through their photon emissions with state-of-the-art intrinsic Ge detectors, Ge(Li) detectors, and NaI(Tl) multidimensional gamma-ray spectrometers. High-efficiency (25 percent), low-background, anticoincidence-shielded Ge(Li) gamma-ray detector systems have been constructed to provide the lowest possible background, yet maintain a peak-to-Compton ratio of greater than 1000 to 1. The multidimensional gamma-ray spectrometer systems are composed of 23 cm diameter x 20 cm thick NaI(Tl) crystals surrounded by NaI(Tl) anticoincidence shields. The detection limits for over 65 elements have been determined for this system. Over 40 elements are detectable at the 1 part per million level at a precision of ±10 percent
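The abstract does not state how the element detection limits were derived; a standard recipe for counting measurements of this kind is Currie's detection limit, sketched here with assumed background and sensitivity values:

```python
import math

# Currie's detection limit for a counting measurement: L_D = 2.71 + 4.65*sqrt(B)
# counts, for background B. The background and counts-per-microgram sensitivity
# below are invented for illustration, not taken from the facility described.

def currie_ld(background_counts):
    """Detection limit in net counts (95% confidence, Currie 1968)."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

def mass_detection_limit(background_counts, counts_per_microgram):
    """Convert L_D in counts to a mass limit via an assumed sensitivity factor."""
    return currie_ld(background_counts) / counts_per_microgram

print(currie_ld(100.0))                 # 49.21 net counts for B = 100
print(mass_detection_limit(100.0, 50.0))
```

Lowering the background (as the anticoincidence shielding described above does) reduces B and hence the achievable detection limit roughly as its square root.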

  3. High-throughput and sensitive analysis of 3-monochloropropane-1,2-diol fatty acid esters in edible oils by supercritical fluid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Hori, Katsuhito; Matsubara, Atsuki; Uchikata, Takato; Tsumura, Kazunobu; Fukusaki, Eiichiro; Bamba, Takeshi

    2012-08-10

    We have established a high-throughput and sensitive analytical method based on supercritical fluid chromatography (SFC) coupled with triple quadrupole mass spectrometry (QqQ MS) for 3-monochloropropane-1,2-diol (3-MCPD) fatty acid esters in edible oils. All analytes were successfully separated within 9 min without sample purification. The system was precise and sensitive, with a limit of detection less than 0.063 mg/kg. The recovery rate of 3-MCPD fatty acid esters spiked into oil samples was in the range of 62.68-115.23%. Furthermore, several edible oils were tested for analyzing 3-MCPD fatty acid ester profiles. This is the first report on the analysis of 3-MCPD fatty acid esters by SFC/QqQ MS. The developed method will be a powerful tool for investigating 3-MCPD fatty acid esters in edible oils. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Development of a method for comprehensive and quantitative analysis of plant hormones by highly sensitive nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry

    International Nuclear Information System (INIS)

    Izumi, Yoshihiro; Okazawa, Atsushi; Bamba, Takeshi; Kobayashi, Akio; Fukusaki, Eiichiro

    2009-01-01

    In recent plant hormone research, there is an increased demand for a highly sensitive and comprehensive analytical approach to elucidate the hormonal signaling networks, functions, and dynamics. We have demonstrated the high sensitivity of a comprehensive and quantitative analytical method developed with nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry (LC-ESI-IT-MS/MS) under multiple-reaction monitoring (MRM) in plant hormone profiling. Unlabeled and deuterium-labeled isotopomers of four classes of plant hormones and their derivatives, auxins, cytokinins (CK), abscisic acid (ABA), and gibberellins (GA), were analyzed by this method. The optimized nanoflow-LC-ESI-IT-MS/MS method showed ca. 5-10-fold greater sensitivity than capillary-LC-ESI-IT-MS/MS, and the detection limits (S/N = 3) of several plant hormones were in the sub-fmol range. The results showed excellent linearity (R² values of 0.9937-1.0000) and reproducibility of elution times (relative standard deviations, RSDs, <1.1%) and peak areas (RSDs, <10.7%) for all target compounds. Further, sample purification using Oasis HLB and Oasis MCX cartridges significantly decreased the ion-suppressing effects of biological matrix as compared to the purification using only Oasis HLB cartridge. The optimized nanoflow-LC-ESI-IT-MS/MS method was successfully used to analyze endogenous plant hormones in Arabidopsis and tobacco samples. The samples used in this analysis were extracted from only 17 tobacco dry seeds (1 mg DW), indicating that the efficiency of analysis of endogenous plant hormones strongly depends on the detection sensitivity of the method. Our analytical approach will be useful for in-depth studies on complex plant hormonal metabolism.
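The two figures of merit reported above, linearity (R² of a calibration line) and a detection limit defined at S/N = 3, can be computed as follows; the calibration data and baseline noise level are invented for illustration:

```python
# Least-squares calibration line, its R^2, and an S/N = 3 detection limit.
# The peak areas, amounts, and noise standard deviation are all assumed.

def linear_fit(xs, ys):
    """Return (slope, intercept, R^2) of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# hypothetical calibration: peak area vs. injected amount (fmol)
xs = [1, 5, 10, 50, 100]
ys = [2.1, 10.3, 19.8, 101.0, 199.5]
slope, intercept, r2 = linear_fit(xs, ys)

noise_sd = 0.2                   # assumed baseline noise, in peak-area units
lod = 3.0 * noise_sd / slope     # amount that would give S/N = 3
print(round(r2, 4), round(lod, 3))
```

In practice the noise term is estimated from a blank or the baseline near the peak; the detection limit then scales inversely with the calibration slope, which is why the more sensitive nanoflow setup reaches sub-fmol limits.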

  6. Mesoporous carbon nitride based biosensor for highly sensitive and selective analysis of phenol and catechol in compost bioremediation.

    Science.gov (United States)

    Zhou, Yaoyu; Tang, Lin; Zeng, Guangming; Chen, Jun; Cai, Ye; Zhang, Yi; Yang, Guide; Liu, Yuanyuan; Zhang, Chen; Tang, Wangwang

    2014-11-15

    Herein, we report a promising biosensor that, for the first time, takes advantage of a unique ordered mesoporous carbon nitride material (MCN) together with an enzyme to convert recognition information into a detectable signal, enabling sensitive and, especially, selective detection of catechol and phenol in compost bioremediation samples. The mechanism, including the MCN-based electrochemistry, biosensor assembly, enzyme immobilization, and enzyme kinetics (which elucidate the lower detection limit, the different linear ranges, and the sensitivity), is discussed in detail. Under optimal conditions, the GCE/MCN/Tyr biosensor was evaluated by chronoamperometry, and the reduction currents of phenol and catechol were proportional to their concentrations in the ranges of 5.00 × 10(-8)-9.50 × 10(-6) M and 5.00 × 10(-8)-1.25 × 10(-5) M, with correlation coefficients of 0.9991 and 0.9881, respectively. The detection limits of catechol and phenol were 10.24 nM and 15.00 nM (S/N=3), respectively. In addition, data obtained from interference experiments indicated that the biosensor had good specificity. All the results showed that this material is suitable for enzyme loading and biosensor application: the proposed biosensor exhibited improved analytical performance in terms of detection limit and specificity, and provided a powerful tool for rapid, sensitive and, especially, selective monitoring of catechol and phenol simultaneously. Moreover, the obtained results may open the way to other MCN-enzyme applications in the environmental field. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Data fusion qualitative sensitivity analysis

    International Nuclear Information System (INIS)

    Clayton, E.A.; Lewis, R.E.

    1995-09-01

    Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables
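The "earth model" described above is a set of polynomials in the horizontal spatial coordinates; a minimal sketch of evaluating one such surface, with invented coefficients standing in for a fitted depth-to-horizon model:

```python
# Evaluate a 2-D polynomial surface: depth(x, y) = sum c[i,j] * x^i * y^j.
# The coefficient values below are hypothetical, chosen only for illustration.

def poly2d(coeffs, x, y):
    """Evaluate sum over (i, j) of coeffs[(i, j)] * x**i * y**j."""
    return sum(c * x**i * y**j for (i, j), c in coeffs.items())

# hypothetical depth-to-horizon model (meters): constant + linear + cross term
depth_model = {(0, 0): 35.0, (1, 0): 0.02, (0, 1): -0.01, (1, 1): 1e-5}

print(poly2d(depth_model, 100.0, 200.0))  # depth at (x=100 m, y=200 m)
```

A sensitivity analysis of the kind described then amounts to perturbing the polynomial coefficients (or the fusion weights that produce them) and observing how the interpolated horizon surface changes.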

  8. Novel charge sensitive preamplifier without high-value feedback resistor

    International Nuclear Information System (INIS)

    Xi Deming

    1992-01-01

    A novel charge-sensitive preamplifier is introduced. The method of removing the high-value feedback resistor is described, together with the circuit design and analysis. A practical circuit and its measured performance are presented
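For context, a charge-sensitive preamplifier integrates the detector charge on a feedback capacitor, giving an output step of roughly Q/Cf per event; the high-value feedback resistor normally bleeds this charge away, so a resistorless design such as the one described must discharge the capacitor by other means. A numeric sketch with assumed values:

```python
# Output step of a charge-sensitive preamplifier, V ~= Q / Cf.
# Detector material, photon energy, and feedback capacitance are assumptions.

E_PAIR_SI_EV = 3.62      # approx. energy per electron-hole pair in silicon, eV
Q_E = 1.602e-19          # elementary charge, C

def csa_step_volts(energy_kev, cf_farads):
    """Output voltage step for one fully absorbed photon of the given energy."""
    n_pairs = energy_kev * 1e3 / E_PAIR_SI_EV
    return n_pairs * Q_E / cf_farads

# 60 keV photon on a 1 pF feedback capacitor -> a few millivolts
print(csa_step_volts(60.0, 1e-12))
```

Without a feedback resistor these millivolt steps accumulate on Cf until the amplifier saturates, which is why resistorless designs need a periodic or continuous reset mechanism.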

  9. Single photon detection and signal analysis for high sensitivity dosimetry based on optically stimulated luminescence with beryllium oxide

    Science.gov (United States)

    Radtke, J.; Sponner, J.; Jakobi, C.; Schneider, J.; Sommer, M.; Teichmann, T.; Ullrich, W.; Henniger, J.; Kormoll, T.

    2018-01-01

    Single photon detection applied to optically stimulated luminescence (OSL) dosimetry is a promising approach due to the low level of luminescence light and the known statistical behavior of single photon events. Time-resolved detection allows a variety of different and independent data analysis methods to be applied. Furthermore, amplitude-modulated stimulation impresses time and frequency information onto the OSL light and therefore allows for additional means of analysis. Considering the impressed frequency information, data analysis using Fourier transform algorithms or other digital filters can separate the OSL signal from unwanted light or from events generated by other phenomena. This potentially lowers the detection limits of low dose measurements and might improve the reproducibility and stability of the obtained data. In this work, an OSL system based on a single photon detector, a fast and accurate stimulation unit, and an FPGA is presented. Different analysis algorithms applied to the single photon data are discussed.
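The frequency-domain separation idea can be sketched as a software lock-in: with sinusoidally modulated stimulation, the OSL component follows the modulation while steady background light does not, so a single-bin DFT at the modulation frequency recovers the OSL amplitude. Count rates, modulation frequency, and noise levels below are invented:

```python
import math
import random

def lockin_amplitude(counts, f_mod, dt):
    """Amplitude of the component of `counts` at frequency f_mod (single-bin DFT)."""
    n = len(counts)
    re = sum(c * math.cos(2 * math.pi * f_mod * k * dt) for k, c in enumerate(counts))
    im = sum(c * math.sin(2 * math.pi * f_mod * k * dt) for k, c in enumerate(counts))
    return 2.0 * math.hypot(re, im) / n

rng = random.Random(0)
dt, f_mod, n = 1e-3, 50.0, 2000   # 1 ms bins, 50 Hz modulation, 2 s of data
# simulated photon counts: steady background + modulated OSL + noise
counts = [100.0 + 40.0 * math.sin(2 * math.pi * f_mod * k * dt) + rng.gauss(0.0, 10.0)
          for k in range(n)]
amp = lockin_amplitude(counts, f_mod, dt)
print(round(amp, 1))   # close to the injected modulation amplitude of 40
```

The 100-count steady background contributes essentially nothing at the modulation frequency, which is the mechanism by which modulated stimulation suppresses unmodulated stray light in the measured signal.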

  10. A Pricing Strategy To Promote Sales of Lower Fat Foods in High School Cafeterias: Acceptability and Sensitivity Analysis.

    Science.gov (United States)

    Hannan, Peter; French, Simone A.; Story, Mary; Fulkerson, Jayne A.

    2002-01-01

    Examined the purchase patterns of seven targeted foods under conditions in which prices of three high-fat foods were raised and prices of four low-fat foods were lowered in a high school cafeteria over 1 school year. Data collected on food sales and revenues supported the feasibility of a pricing strategy that offered low-fat foods at lower prices…

  11. Uncertainty and sensitivity analysis in the 2008 performance assessment for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.

    2010-01-01

    Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, an extensive performance assessment (PA) for the YM repository was completed in 2008 (1) and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository (2). This presentation provides an overview of the conceptual and computational structure of the indicated PA (hereafter referred to as the 2008 YM PA) and the roles that uncertainty analysis and sensitivity analysis play in this structure.

  12. Rapid, simple, and highly sensitive analysis of drugs in biological samples using thin-layer chromatography coupled with matrix-assisted laser desorption/ionization mass spectrometry.

    Science.gov (United States)

    Kuwayama, Kenji; Tsujikawa, Kenji; Miyaguchi, Hajime; Kanamori, Tatsuyuki; Iwata, Yuko T; Inoue, Hiroyuki

    2012-01-01

    Rapid and precise identification of toxic substances is necessary for urgent diagnosis and treatment of poisoning cases and for establishing the cause of death in postmortem examinations. However, identification of compounds in biological samples using gas chromatography and liquid chromatography coupled with mass spectrometry entails time-consuming and labor-intensive sample preparation. In this study, we examined a simple preparation and highly sensitive analysis of drugs in biological samples such as urine, plasma, and organs using thin-layer chromatography coupled with matrix-assisted laser desorption/ionization mass spectrometry (TLC/MALDI/MS). When urine containing 3,4-methylenedioxymethamphetamine (MDMA) was spotted on a thin-layer chromatography (TLC) plate without sample dilution and analyzed by TLC/MALDI/MS, the detection limit of the MDMA spot was 0.05 ng/spot. The value was the same as that for an aqueous solution spotted on a stainless steel plate. All 11 psychotropic compounds tested (MDMA, 4-hydroxy-3-methoxymethamphetamine, 3,4-methylenedioxyamphetamine, methamphetamine, p-hydroxymethamphetamine, amphetamine, ketamine, caffeine, chlorpromazine, triazolam, and morphine) on a TLC plate were detected at levels of 0.05-5 ng, and the type (layer thickness and fluorescence) of TLC plate did not affect detection sensitivity. In addition, when rat liver homogenate obtained after MDMA administration (10 mg/kg) was spotted on a TLC plate, MDMA and its main metabolites were identified using TLC/MALDI/MS, and the spots on the TLC plate were visualized by MALDI imaging MS. The total analytical time from spotting of intact biological samples to the output of analytical results was within 30 min. TLC/MALDI/MS enabled rapid, simple, and highly sensitive analysis of drugs from intact biological samples and crude extracts. Accordingly, this method could be applied to rapid drug screening and precise identification of toxic substances in poisoning cases and

  13. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.
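
The sensitivity study described above can be sketched numerically. Assuming a hypothetical first-order passivation model dx/dt = -k*x (the abstract does not give the actual equations), the sensitivity to the initial condition can be estimated by central finite differences around the nominal trajectory, analogous to rerunning ODE23/ODE45 with perturbed inputs:

```python
# Hypothetical first-order passivation model dx/dt = -k*x; the paper's actual
# equations are not given in the abstract, so this is only an illustration of
# finite-difference sensitivity to the initial condition.
def passivation(x0, k, t_end, dt=1e-3):
    """Integrate dx/dt = -k*x with the explicit Euler method."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-k * x)
        t += dt
    return x

def ic_sensitivity(x0, k, t_end, eps=1e-6):
    """Central finite-difference sensitivity of x(t_end) to x0."""
    return (passivation(x0 + eps, k, t_end) - passivation(x0 - eps, k, t_end)) / (2 * eps)

s = ic_sensitivity(x0=1.0, k=0.5, t_end=2.0)
print(round(s, 3))  # analytically exp(-k*t_end) ≈ 0.368
```

Varying `t_end` the same way gives the sensitivity to the experimental time; with MATLAB's ODE23/ODE45 the perturbed solver runs would replace the Euler loop.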

  14. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    Science.gov (United States)

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation concentrates individual peaks into a volume similar to that of the NMR flow cell, thereby maximizing sensitivity up to the theoretical limit. UHPLC concentration of compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture led to at most a three-fold increase in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture during actual laboratory-scale development of a synthetic process. Using UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR successfully overcomes the difficulties associated with LC-NMR analyses of minor components in complex mixtures, which are problematic even when an ultra-high-field magnet and a cryogenic probe are used. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. A comparison of sorptive extraction techniques coupled to a new quantitative, sensitive, high throughput GC-MS/MS method for methoxypyrazine analysis in wine.

    Science.gov (United States)

    Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E

    2016-02-01

    Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng L(-1) range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines, IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine), have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng L(-1) in wine for IBMP), highly sensitive analytical methods are necessary to quantify methoxypyrazines at trace levels. Here we achieved resolution of IBMP, as well as IPMP, EMP, and SBMP, from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir-bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng L(-1) for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high throughput, with resolution of all compounds possible with a relatively rapid 27 min GC oven program. Copyright © 2015

  16. Phase sensitive diffraction sensor for high sensitivity refractive index measurement

    Science.gov (United States)

    Kumawat, Nityanand; Varma, Manoj; Kumar, Sunil

    2018-02-01

    In this study, a diffraction-based sensor was developed for biomolecular sensing applications and for performing assays in real time. A diffraction grating fabricated on a glass substrate produced diffraction patterns in both transmission and reflection when illuminated by a laser diode. We used the zeroth order I(0,0) as the reference and the first order I(0,1) as the signal channel and conducted ratiometric measurements that reduced noise by more than 50 times. The ratiometric approach resulted in very simple instrumentation with very high sensitivity. In the past, we have shown refractive index measurements for both bulk and surface adsorption using the diffractive self-referencing approach. In the current work we extend the same concept to higher diffraction orders. We considered orders I(0,1) and I(1,1) and performed ratiometric measurements I(0,1)/I(1,1) to eliminate common-mode fluctuations. Since orders I(0,1) and I(1,1) responded in opposite directions, the resulting ratio signal amplitude more than doubled compared with our previous results. As a proof of concept, we used different salt concentrations in DI water. The increased signal amplitude and an improved fluid injection system yielded a more than 4-fold improvement in the detection limit, giving a limit of detection of 1.3×10(-7) refractive index units (RIU) compared with our previous results. The improved refractive index sensitivity will benefit high-sensitivity, label-free biosensing applications in a very cost-effective and simple experimental setup.
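
The common-mode rejection behind the I(0,1)/I(1,1) ratio can be illustrated with a toy model (the symmetric opposite-sign response assumed below is for illustration only, not taken from the paper): a multiplicative laser-power fluctuation affects both orders equally and cancels exactly in their ratio, while the opposite signal responses reinforce each other:

```python
import random

random.seed(0)

# Toy model: both diffraction orders share a multiplicative laser fluctuation f,
# but respond with opposite sign to the analyte-induced signal s (assumed form).
def orders(s, f):
    i01 = f * (1.0 + s)   # signal order grows with s
    i11 = f * (1.0 - s)   # second order shrinks with s
    return i01, i11

signals = [0.01 * k for k in range(5)]                         # ramping signal
flucts = [1.0 + random.uniform(-0.05, 0.05) for _ in signals]  # 5% laser noise

ratios = [i01 / i11 for i01, i11 in (orders(s, f) for s, f in zip(signals, flucts))]
# The fluctuation f cancels in the ratio, leaving the noise-free (1+s)/(1-s):
print(all(abs(r - (1 + s) / (1 - s)) < 1e-9 for r, s in zip(ratios, signals)))  # True
```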

  17. Rapid analysis of heterogeneously methylated DNA using digital methylation-sensitive high resolution melting: application to the CDKN2B (p15) gene

    DEFF Research Database (Denmark)

    Candiloro, Ida Lm; Mikeska, Thomas; Hokland, Peter

    2008-01-01

    ABSTRACT: BACKGROUND: Methylation-sensitive high resolution melting (MS-HRM) methodology is able to recognise heterogeneously methylated sequences by their characteristic melting profiles. To further analyse heterogeneously methylated sequences, we adopted a digital approach to MS-HRM (dMS-HRM) that involves the amplification of single templates after limiting dilution to quantify and to determine the degree of methylation. We used this approach to study methylation of the CDKN2B (p15) cell cycle progression inhibitor gene, which is inactivated by DNA methylation in haematological malignancies ... the methylated alleles and assess the degree of methylation. Direct sequencing of selected dMS-HRM products was used to determine the exact DNA methylation pattern and confirmed the degree of methylation estimated by dMS-HRM. CONCLUSION: dMS-HRM is a powerful technique for the analysis of methylation in CDKN2B...

  18. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, it is very important to improve the prediction accuracy of neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden on users due to the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current code for burnup sensitivity analysis needs to be systemized with functional component blocks that can be divided or reassembled as the occasion demands. For

  19. Effect of Vitamin D Supplementation on the Level of Circulating High-Sensitivity C-Reactive Protein: A Meta-Analysis of Randomized Controlled Trials

    Directory of Open Access Journals (Sweden)

    Neng Chen

    2014-06-01

    Vitamin D might elicit protective effects against cardiovascular disease by decreasing the level of circulating high-sensitivity C-reactive protein (hs-CRP), an inflammatory marker. Thus, we conducted a meta-analysis of randomized controlled trials to evaluate the association of vitamin D supplementation with circulating hs-CRP level. A systematic literature search was conducted in September 2013 (updated in February 2014) via PubMed, Web of Science, and the Cochrane Library to identify eligible studies. Either a fixed-effects or a random-effects model was used to calculate pooled effects. The results of the meta-analysis of 10 trials involving a total of 924 participants showed that vitamin D supplementation significantly decreased the circulating hs-CRP level by 1.08 mg/L (95% CI, −2.13, −0.03), with evidence of heterogeneity. Subgroup analysis suggested a larger reduction of 2.21 mg/L (95% CI, −3.50, −0.92) among participants with a baseline hs-CRP level ≥5 mg/L. Meta-regression analysis further revealed that baseline hs-CRP level, supplemental dose of vitamin D, and intervention duration together may account for the heterogeneity across studies. In summary, vitamin D supplementation is beneficial for the reduction of circulating hs-CRP. However, the result should be interpreted with caution because of the evidence of heterogeneity.

  20. Effect of vitamin D supplementation on the level of circulating high-sensitivity C-reactive protein: a meta-analysis of randomized controlled trials.

    Science.gov (United States)

    Chen, Neng; Wan, Zhongxiao; Han, Shu-Fen; Li, Bing-Yan; Zhang, Zeng-Li; Qin, Li-Qiang

    2014-06-10

    Vitamin D might elicit protective effects against cardiovascular disease by decreasing the level of circulating high-sensitivity C-reactive protein (hs-CRP), an inflammatory marker. Thus, we conducted a meta-analysis of randomized controlled trials to evaluate the association of vitamin D supplementation with circulating hs-CRP level. A systematic literature search was conducted in September 2013 (updated in February 2014) via PubMed, Web of Science, and the Cochrane Library to identify eligible studies. Either a fixed-effects or a random-effects model was used to calculate pooled effects. The results of the meta-analysis of 10 trials involving a total of 924 participants showed that vitamin D supplementation significantly decreased the circulating hs-CRP level by 1.08 mg/L (95% CI, -2.13, -0.03), with evidence of heterogeneity. Subgroup analysis suggested a larger reduction of 2.21 mg/L (95% CI, -3.50, -0.92) among participants with a baseline hs-CRP level ≥5 mg/L. Meta-regression analysis further revealed that baseline hs-CRP level, supplemental dose of vitamin D, and intervention duration together may account for the heterogeneity across studies. In summary, vitamin D supplementation is beneficial for the reduction of circulating hs-CRP. However, the result should be interpreted with caution because of the evidence of heterogeneity.
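
The inverse-variance pooling used in such meta-analyses can be sketched as follows; the per-trial effects and standard errors below are made up for illustration and are not the ten trials analysed above:

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effects pooling of per-trial mean differences,
    returning the pooled effect and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

effects = [-1.5, -0.4, -2.0]   # hypothetical hs-CRP changes, mg/L
ses = [0.6, 0.5, 0.9]          # hypothetical standard errors
pooled, (ci_lo, ci_hi) = fixed_effect_pool(effects, ses)
print(round(pooled, 2), round(ci_lo, 2), round(ci_hi, 2))
```

A random-effects model would additionally inflate each trial's variance by a between-study component (e.g. a DerSimonian-Laird tau-squared) before weighting.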

  1. Risk characterization: description of associated uncertainties, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation addresses risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations

  2. Automatic and integrated micro-enzyme assay (AIμEA) platform for highly sensitive thrombin analysis via an engineered fluorescence protein-functionalized monolithic capillary column.

    Science.gov (United States)

    Lin, Lihua; Liu, Shengquan; Nie, Zhou; Chen, Yingzhuang; Lei, Chunyang; Wang, Zhen; Yin, Chao; Hu, Huiping; Huang, Yan; Yao, Shouzhuo

    2015-04-21

    Nowadays, large-scale screening in enzyme discovery, enzyme engineering, and drug discovery requires simple, fast, and sensitive enzyme activity assay platforms with high integration and potential for high-throughput detection. Herein, a novel automatic and integrated micro-enzyme assay (AIμEA) platform is proposed based on a unique microreaction system fabricated from an engineered green fluorescent protein (GFP)-functionalized monolithic capillary column, with thrombin as an example. The recombinant GFP probe was rationally engineered to possess a His-tag and a substrate sequence of thrombin, which enable it to be immobilized on the monolith via metal affinity binding and to be released after thrombin digestion. Combined with capillary electrophoresis-laser-induced fluorescence (CE-LIF), all the procedures, including thrombin injection, online enzymatic digestion in the microreaction system, and label-free detection of the released GFP, were integrated in a single electrophoretic process. By taking advantage of the ultrahigh loading capacity of the AIμEA platform and the automatic programming setup of CE, one microreaction column was sufficient for many rounds of digestion without replacement. The novel microreaction system showed significantly enhanced catalytic efficiency, about 30-fold higher than that of the equivalent bulk reaction. Accordingly, the AIμEA platform was highly sensitive, with a limit of detection down to 1 pM of thrombin. Moreover, the AIμEA platform was robust and reliable in detecting thrombin in human serum samples and its inhibition by hirudin. Hence, this AIμEA platform exhibits great potential for high-throughput analysis in future biological applications, disease diagnostics, and drug screening.

  3. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  4. High sensitivity optical molecular imaging system

    Science.gov (United States)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical molecular imaging (OMI) has the advantages of high sensitivity, low cost, and ease of use. By labeling regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics, and other biological studies. In preclinical and clinical applications, imaging depth, resolution, and sensitivity are the key factors for researchers using OMI. In this paper, we report a high-sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm, achieves high resolution at 2 cm depth, and offers high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, equipped with a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light-path system, and highly efficient imaging techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence, and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  5. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    For the practical use of fast reactors, it is very important to improve the prediction accuracy of neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden on users due to the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current code for burnup sensitivity analysis needs to be systemized with functional component blocks that can be divided or reassembled as the occasion demands. For this

  6. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

    Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. Building on the reliability analysis of subset simulation (Subsim), the RS of the failure probability with respect to a distribution parameter of a basic variable is transformed into a set of RSs of conditional failure probabilities with respect to that parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae for the RS estimator, its variance, and its coefficient of variation are derived in detail. The illustrative results show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and for structural systems with single and multiple failure modes
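
The estimand here, the derivative of a failure probability with respect to a distribution parameter, can be sketched with crude Monte Carlo and the score-function (likelihood-ratio) identity dPf/dmu = E[1{failure} * d ln f/dmu]. The paper's contribution is to compute this with subset simulation and conditional samples instead; the sketch below only illustrates the underlying quantity with an assumed normal variable and threshold:

```python
import random

random.seed(42)

# Failure event X > c with X ~ Normal(mu, sigma). The score function of the
# normal density w.r.t. mu is (x - mu) / sigma**2, so the sensitivity of the
# failure probability is the mean of that score over the failure samples.
mu, sigma, c, n = 0.0, 1.0, 2.0, 200_000
samples = [random.gauss(mu, sigma) for _ in range(n)]
pf = sum(x > c for x in samples) / n
dpf_dmu = sum((x - mu) / sigma ** 2 for x in samples if x > c) / n

print(round(pf, 3), round(dpf_dmu, 3))
# Analytic values: Pf = 1 - Phi(2) ≈ 0.0228 and dPf/dmu = phi(2) ≈ 0.0540
```

Subset simulation replaces the single crude-MC estimate with a product of conditional probabilities estimated from MCMC chains, which is far more efficient when Pf is very small.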

  7. Introducing AAA-MS, a rapid and sensitive method for amino acid analysis using isotope dilution and high-resolution mass spectrometry.

    Science.gov (United States)

    Louwagie, Mathilde; Kieffer-Jaquinod, Sylvie; Dupierris, Véronique; Couté, Yohann; Bruley, Christophe; Garin, Jérôme; Dupuis, Alain; Jaquinod, Michel; Brun, Virginie

    2012-07-06

    Accurate quantification of pure peptides and proteins is essential for biotechnology, clinical chemistry, proteomics, and systems biology. The reference method for quantifying peptides and proteins is amino acid analysis (AAA), which consists of acidic hydrolysis followed by chromatographic separation and spectrophotometric detection of the amino acids. Although widely used, this method has some limitations, in particular the need for large amounts of starting material. Driven by the need to quantify isotope-dilution standards used for absolute quantitative proteomics, particularly stable isotope-labeled (SIL) peptides and PSAQ proteins, we developed a new AAA assay (AAA-MS). This method requires neither derivatization nor chromatographic separation of amino acids. It is based on rapid microwave-assisted acidic hydrolysis followed by high-resolution mass spectrometry analysis of the amino acids. Quantification is performed by comparing MS signals from labeled amino acids (SIL peptide- and PSAQ-derived) with those of unlabeled amino acids originating from co-hydrolyzed NIST standard reference materials. For both SIL peptides and PSAQ standards, AAA-MS quantification results were consistent with classical AAA measurements. Compared with the AAA assay, AAA-MS was much faster and 100-fold more sensitive for peptide and protein quantification. Finally, thanks to the development of a labeled protein standard, we also extended AAA-MS analysis to the quantification of unlabeled proteins.
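
The isotope-dilution arithmetic at the heart of AAA-MS reduces to a signal ratio against the co-hydrolyzed NIST reference. The sketch below assumes equal MS response for the labeled and unlabeled isotopologues, and the numbers are illustrative rather than taken from the paper:

```python
# Amount of a labeled amino acid inferred from its MS signal relative to a
# known amount of the co-hydrolyzed unlabeled NIST standard, assuming equal
# ionization response for both isotopologues (illustrative numbers only).
def labeled_amount(nist_amount_pmol, labeled_signal, unlabeled_signal):
    return nist_amount_pmol * labeled_signal / unlabeled_signal

# e.g. 100 pmol of unlabeled leucine from the NIST standard and a
# labeled/unlabeled signal ratio of 0.85:
amount = labeled_amount(100.0, 8.5e5, 1.0e6)
print(amount)  # 85.0 (pmol of SIL-peptide-derived labeled leucine)
```

In practice the same ratio would be formed for several amino acids and averaged to quantify one SIL peptide or PSAQ protein.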

  8. Heterogeneous catalysis in highly sensitive microreactors

    DEFF Research Database (Denmark)

    Olsen, Jakob Lind

    This thesis present a highly sensitive silicon microreactor and examples of its use in studying catalysis. The experimental setup built for gas handling and temperature control for the microreactor is described. The implementation of LabVIEW interfacing for all the experimental parts makes...

  9. Aluminum nanocantilevers for high sensitivity mass sensors

    DEFF Research Database (Denmark)

    Davis, Zachary James; Boisen, Anja

    2005-01-01

    We have fabricated Al nanocantilevers using a simple, one mask contact UV lithography technique with lateral and vertical dimensions under 500 and 100 nm, respectively. These devices are demonstrated as highly sensitive mass sensors by measuring their dynamic properties. Furthermore, it is shown ...

  10. Modelling nitrous oxide emissions from mown-grass and grain-cropping systems: Testing and sensitivity analysis of DailyDayCent using high frequency measurements.

    Science.gov (United States)

    Senapati, Nimai; Chabbi, Abad; Giostri, André Faé; Yeluripati, Jagadeesh B; Smith, Pete

    2016-12-01

    The DailyDayCent biogeochemical model was used to simulate nitrous oxide (N2O) emissions from two contrasting agro-ecosystems, viz. a mown-grassland and a grain-cropping system in France. Model performance was tested using high-frequency measurements over three years; additionally, a local sensitivity analysis was performed. Annual N2O emissions of 1.97 and 1.24 kg N ha(-1) year(-1) were simulated from mown-grassland and grain-cropland, respectively. Measured and simulated water-filled pore space (r=0.86, ME=-2.5%) and soil temperature (r=0.96, ME=-0.63°C) at 10 cm soil depth matched well in mown-grassland. The model predicted cumulative hay and crop production effectively. The model simulated soil mineral nitrogen (N) concentrations, particularly ammonium (NH4+), reasonably, but significantly underestimated soil nitrate (NO3-) concentration under both systems. In general, the model effectively simulated the dynamics and the magnitude of daily N2O flux over the whole experimental period in grain-cropland (r=0.16, ME=-0.81 g N ha(-1) day(-1)), with reasonable agreement between measured and modelled N2O fluxes for the mown-grassland (r=0.63, ME=-0.65 g N ha(-1) day(-1)). Our results indicate that DailyDayCent has potential for use as a tool for predicting overall N2O emissions in the study region. However, in-depth analysis shows some systematic discrepancies between measured and simulated N2O fluxes on a daily basis. The current exercise suggests that DailyDayCent may need improvement, particularly in the sub-module responsible for N transformations, to better simulate soil mineral N, especially soil NO3- concentration, and N2O flux on a daily basis. The sensitivity analysis shows that many factors, such as climate change, N-fertilizer use, input uncertainty, and parameter values, could influence the simulation of N2O emissions. Sensitivity estimation also helped to identify critical parameters, which need careful estimation or site
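
The agreement statistics quoted in this record (r and ME) can be computed for any measured/simulated series; the daily flux values below are invented for illustration, and ME is taken as the mean of simulated minus measured, consistent with the negative values reported:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_error(sim, obs):
    """ME = mean(simulated - measured); negative means underestimation."""
    return sum(s - o for s, o in zip(sim, obs)) / len(sim)

measured = [1.2, 3.4, 2.1, 5.6, 4.3]    # hypothetical daily fluxes
simulated = [1.0, 3.0, 2.5, 5.0, 4.1]
r_val = pearson_r(measured, simulated)
me_val = mean_error(simulated, measured)
print(round(r_val, 3), round(me_val, 3))
```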

  11. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis with respect to the parameters and the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modellings, which generated a comprehensive matrix of the influence of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  12. Single photon detector with high polarization sensitivity.

    Science.gov (United States)

    Guo, Qi; Li, Hao; You, LiXing; Zhang, WeiJun; Zhang, Lu; Wang, Zhen; Xie, XiaoMing; Qi, Ming

    2015-04-15

    Polarization is one of the key parameters of light. Most optical detectors are intensity detectors that are insensitive to the polarization of light. A superconducting nanowire single photon detector (SNSPD) is naturally sensitive to polarization due to its nanowire structure. Previous studies focused on producing a polarization-insensitive SNSPD. In this study, by adjusting the width and pitch of the nanowire, we systematically investigate the preparation of an SNSPD with high polarization sensitivity. Subsequently, an SNSPD with a system detection efficiency of 12% and a polarization extinction ratio of 22 was successfully prepared.
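
The polarization extinction ratio reported above is simply the ratio of maximum to minimum detection efficiency as the input polarization is rotated; the Malus-like angular dependence used below is an assumed illustration, not the device physics:

```python
import math

def extinction_ratio(efficiencies):
    """Polarization extinction ratio: max over min detection efficiency."""
    return max(efficiencies) / min(efficiencies)

# Hypothetical efficiency vs. polarization angle for a nanowire detector whose
# response swings between eta_max = 12% and eta_min = eta_max / 22.
eta_max = 0.12
eta_min = eta_max / 22
etas = [eta_min + (eta_max - eta_min) * math.cos(math.radians(a)) ** 2
        for a in range(0, 181, 5)]
er = extinction_ratio(etas)
print(round(er, 1))  # 22.0
```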

  13. Highly sensitive microcalorimeters for radiation research

    International Nuclear Information System (INIS)

    Avaev, V.N.; Demchuk, B.N.; Ioffe, L.A.; Efimov, E.P.

    1984-01-01

    Calorimetry is used in research at various types of nuclear-physics installations to obtain information on the quantitative and qualitative composition of ionizing radiation in a reactor core and in the surrounding layers of the biological shield. In this paper, the authors examine the characteristics of highly sensitive microcalorimeters with modular semiconductor heat pickups designed for operation in reactor channels. The microcalorimeters have a thin-walled aluminum housing on whose inner surface the modular heat pickups are placed radially. The temperature dependence of the sensitivity of the microcalorimeters was measured, as was the sensitivity of a PMK-2 microcalorimeter assembly as a function of integrated neutron flux for three energy intervals and of the absorbed gamma energy. In order to study specimens of different shapes and sizes, microcalorimeters with chambers in the form of cylinders and a parallelepiped were built and tested

  14. High blood pressure and visual sensitivity

    Science.gov (United States)

    Eisner, Alvin; Samples, John R.

    2003-09-01

    The study had two main purposes: (1) to determine whether the foveal visual sensitivities of people treated for high blood pressure (vascular hypertension) differ from the sensitivities of people who have not been diagnosed with high blood pressure, and (2) to understand how visual adaptation is related to standard measures of systemic cardiovascular function. Two groups of middle-aged subjects, hypertensive and normotensive, were examined with a series of test/background stimulus combinations. All subjects met rigorous inclusion criteria for excellent ocular health. Although the visual sensitivities of the two subject groups overlapped extensively, the age-related rate of sensitivity loss was, for some measures, greater for the hypertensive subjects, possibly because of adaptation differences between the two groups. Overall, the degree of steady-state sensitivity loss resulting from an increase of background illuminance (for 580-nm backgrounds) was slightly less for the hypertensive subjects. Among normotensive subjects, the ability of a bright (3.8-log-td), long-wavelength (640-nm) adapting background to selectively suppress the flicker response of long-wavelength-sensitive (LWS) cones was related inversely to the ratio of mean arterial blood pressure to heart rate. The degree of selective suppression was also related to heart rate alone, and there was evidence that short-term changes of cardiovascular response were important. The results suggest that (1) vascular hypertension, or possibly its treatment, subtly affects visual function even in the absence of eye disease and (2) changes in blood flow affect retinal light-adaptation processes involved in the selective suppression of the flicker response from LWS cones caused by bright, long-wavelength backgrounds.

  15. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  16. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters, and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  17. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity-preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offer an inclusive view of ethical sensitivity that addresses some of the limitations of prior conceptualizations.

  18. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices using a meta-model. An application is also presented to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
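    A minimal sketch of first-order Sobol'-index estimation on a surrogate, in the spirit of the meta-model approach described; the polynomial `surrogate` is purely illustrative and merely stands in for a meta-model fitted to code runs, not the authors' model:

```python
import random

def surrogate(x1, x2, x3):
    # hypothetical polynomial meta-model standing in for a fitted code output
    return 3.0 * x1 + x2 ** 2 + 0.1 * x3

def sobol_first_order(model, d, n=20000, seed=1):
    """Saltelli-style pick-freeze estimate of first-order Sobol' indices."""
    rng = random.Random(seed)
    A = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(n)]
    B = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(n)]
    yA = [model(*x) for x in A]
    yB = [model(*x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        # A with its i-th column replaced by B's i-th column
        yABi = [model(*(A[k][:i] + [B[k][i]] + A[k][i + 1:])) for k in range(n)]
        # Saltelli (2010) estimator: V_i ~= (1/n) sum y_B (y_ABi - y_A)
        Vi = sum(yB[k] * (yABi[k] - yA[k]) for k in range(n)) / n
        S.append(Vi / var)
    return S

S1, S2, S3 = sobol_first_order(surrogate, d=3)
```

Because the indices are computed on the cheap surrogate rather than the code itself, the (d + 2)·n model evaluations cost seconds instead of thousands of code runs, which is the point of the meta-model approach.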

  19. Low Cost, Low Power, High Sensitivity Magnetometer

    Science.gov (United States)

    2008-12-01

    which are used to measure the small magnetic signals from the brain. Other types of vector magnetometers are fluxgate, coil-based, and magnetoresistance... concentrator with the magnetometer currently used in Army multimodal sensor systems, the Brown fluxgate. One sees the MEMS fluxgate magnetometer is... Guedes, A.; et al., 2008: Hybrid - LOW COST, LOW POWER, HIGH SENSITIVITY MAGNETOMETER A.S. Edelstein*, James E. Burnette, Greg A. Fischer, M.G

  20. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    (Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...
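    The Morris method named in the record screens inputs via elementary effects, ranking each parameter by the mean of the absolute effects (μ*). A toy sketch using a radial one-at-a-time variant; the `fds_proxy` response function and its parameters are hypothetical stand-ins, not FDS itself:

```python
import random

def fds_proxy(x):
    # hypothetical stand-in for a fire-simulation response (e.g. peak temperature)
    heat_release, soot_yield, grid_factor = x
    return 500.0 * heat_release + 20.0 * soot_yield * heat_release + 1.0 * grid_factor

def morris_mu_star(model, d, r=50, delta=0.25, seed=2):
    """Mean absolute elementary effect (mu*) per input, from r base points."""
    rng = random.Random(seed)
    effects = [[] for _ in range(d)]
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(d)]
        y0 = model(x)
        for i in range(d):
            xp = list(x)
            xp[i] += delta  # perturb one input at a time
            effects[i].append(abs((model(xp) - y0) / delta))
    return [sum(e) / r for e in effects]

mu = morris_mu_star(fds_proxy, d=3)
```

With r·(d + 1) runs the ranking `mu[0] > mu[1] > mu[2]` falls out directly, which is why Morris screening is popular when each simulation (and its CPU time) is expensive.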

  1. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods

  2. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  3. Melting curve analysis after T allele enrichment (MelcaTle) as a highly sensitive and reliable method for detecting the JAK2V617F mutation.

    Directory of Open Access Journals (Sweden)

    Soji Morishita

    Full Text Available Detection of the JAK2V617F mutation is essential for diagnosing patients with classical myeloproliferative neoplasms (MPNs). However, detection of the low-frequency JAK2V617F mutation is a challenging task due to the necessity of discriminating between true-positive and false-positive results. Here, we have developed a highly sensitive and accurate assay for the detection of JAK2V617F and named it melting curve analysis after T allele enrichment (MelcaTle). MelcaTle comprises three steps: (1) two cycles of JAK2V617F allele enrichment by PCR amplification followed by BsaXI digestion, (2) selective amplification of the JAK2V617F allele in the presence of a bridged nucleic acid (BNA) probe, and (3) a melting curve assay using a BODIPY-FL-labeled oligonucleotide. Using this assay, we successfully detected nearly a single copy of the JAK2V617F allele, without false-positive signals, using 10 ng of genomic DNA standard. Furthermore, MelcaTle showed no positive signals in 90 assays screening healthy individuals for JAK2V617F. When applying MelcaTle to 27 patients who were initially classified as JAK2V617F-positive on the basis of allele-specific PCR analysis and were thus suspected as having MPNs, we found that two of the patients were actually JAK2V617F-negative. A more careful clinical data analysis revealed that these two patients had developed transient erythrocytosis of unknown etiology but not polycythemia vera, a subtype of MPNs. These findings indicate that the newly developed MelcaTle assay should markedly improve the diagnosis of JAK2V617F-positive MPNs.

  4. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  5. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
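    The advantage claimed for smoothing-based procedures can be seen in a toy case: for y ≈ x², the linear correlation is near zero while a simple bin-mean smoother (a crude stand-in for LOESS, used here only for illustration; all names and data are invented) recovers most of the variance:

```python
import random

random.seed(3)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
y = [xi ** 2 + random.gauss(0.0, 0.05) for xi in x]  # nonlinear, noisy response

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

def bin_smoother_r2(x, y, bins=20):
    # crude LOESS stand-in: predict each y by the mean of its x-bin
    lo, width = min(x), (max(x) - min(x)) / bins
    idx = [min(int((xi - lo) / width), bins - 1) for xi in x]
    groups = [[] for _ in range(bins)]
    for i, yi in zip(idx, y):
        groups[i].append(yi)
    means = [sum(g) / len(g) for g in groups]
    my = sum(y) / len(y)
    ss_res = sum((yi - means[i]) ** 2 for i, yi in zip(idx, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

r_linear = pearson(x, y)           # small: a linear fit sees almost nothing
r2_smooth = bin_smoother_r2(x, y)  # large: the smoother captures y = x**2
```

A linear, rank, or quadratic regression screen would discard x here, whereas any smoother-based importance measure retains it, which is exactly the failure mode the abstract describes.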

  6. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  7. Comprehensive RNA Analysis of the NF1 Gene in Classically Affected NF1 Individuals Meeting NIH Criteria has High Sensitivity and Mutation Negative Testing is Reassuring in Isolated Cases With Pigmentary Features Only

    Directory of Open Access Journals (Sweden)

    D.G. Evans

    2016-05-01

    Interpretation: RNA analysis in individuals with presumed NF1 has high sensitivity and includes a small subset with DNET without an NF1 variant. Furthermore, negative analysis for NF1/SPRED1 provides strong reassurance to children with ≥6 CAL that they are unlikely to have NF1.

  8. Highly sensitive analysis of polycyclic aromatic hydrocarbons in environmental water with porous cellulose/zeolitic imidazolate framework-8 composite microspheres as a novel adsorbent coupled with high-performance liquid chromatography.

    Science.gov (United States)

    Liang, Xiaotong; Liu, Shengquan; Zhu, Rong; Xiao, Lixia; Yao, Shouzhuo

    2016-07-01

    In this work, novel cellulose/zeolitic imidazolate framework-8 composite microspheres have been successfully fabricated and utilized as a sorbent for the efficient extraction and sensitive analysis of polycyclic aromatic hydrocarbons in environmental water. The composite microspheres were synthesized through in situ hydrothermal growth of zeolitic imidazolate framework-8 on a cellulose matrix, and exhibited a favorable hierarchical structure with the expected chemical composition, as confirmed by scanning electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction patterns, and Brunauer-Emmett-Teller surface-area characterization. A robust and highly efficient method was then developed with the as-prepared composite microspheres as a novel solid-phase extraction sorbent, with the extraction conditions, such as sorbent amount, sample volume, extraction time, desorption conditions, volume of organic modifier, and ionic strength, optimized. The method exhibited high sensitivity, with limits of detection down to 0.1-1.0 ng/L, satisfactory linearity with correlation coefficients ranging from 0.9988 to 0.9999, and good recoveries of 66.7-121.2% with relative standard deviations of less than 10% for environmental polycyclic aromatic hydrocarbon analysis. Thus, the method is convenient and efficient for polycyclic aromatic hydrocarbon extraction and detection, with potential for future environmental water sample analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
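    As a generic illustration of how figures of merit like those reported (limit of detection, correlation coefficient) are typically derived from a spiked calibration series, here is a least-squares calibration line with an ICH-style LOD = 3.3·s_res/slope. The concentrations and signals below are invented, not the paper's data:

```python
# hypothetical calibration data: spiked concentration (ng/L) vs detector response
conc = [0.5, 1, 2, 5, 10, 20, 50]
signal = [1.1, 2.0, 4.2, 10.1, 19.8, 40.5, 99.9]

n = len(conc)
mx, my = sum(conc) / n, sum(signal) / n
sxx = sum((c - mx) ** 2 for c in conc)
sxy = sum((c - mx) * (s - my) for c, s in zip(conc, signal))
syy = sum((s - my) ** 2 for s in signal)

slope = sxy / sxx                      # calibration sensitivity
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5           # correlation coefficient of the line

# residual standard deviation and LOD = 3.3 * s_res / slope (ICH Q2 style)
resid = [s - (slope * c + intercept) for c, s in zip(conc, signal)]
s_res = (sum(e * e for e in resid) / (n - 2)) ** 0.5
lod = 3.3 * s_res / slope
```

The same arithmetic, run on real replicate data, is what backs statements like "correlation coefficients ranging from 0.9988 to 0.9999" and sub-ng/L detection limits.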

  9. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    (PFs) are calculated by critical-eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...

  10. Fast and sensitive analysis of beta blockers by ultra-high-performance liquid chromatography coupled with ultra-high-resolution TOF mass spectrometry.

    Science.gov (United States)

    Tomková, Jana; Ondra, Peter; Kocianová, Eva; Václavík, Jan

    2017-07-01

    This paper presents a method for the determination of acebutolol, betaxolol, bisoprolol, metoprolol, nebivolol and sotalol in human serum by liquid-liquid extraction and ultra-high-performance liquid chromatography coupled with ultra-high-resolution TOF mass spectrometry. After liquid-liquid extraction, beta blockers were separated on a reverse-phase analytical column (Acclaim RS 120; 100 × 2.1 mm, 2.2 μm). The total run time was 6 min for each sample. Linearity, limit of detection, limit of quantification, matrix effects, specificity, precision, accuracy, recovery and sample stability were evaluated. The method was successfully applied to the therapeutic drug monitoring of 108 patients with hypertension. This method was also used for determination of beta blockers in 33 intoxicated patients. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Review of high-sensitivity Radon studies

    Science.gov (United States)

    Wojcik, M.; Zuzel, G.; Simgen, H.

    2017-10-01

    A challenge in many present cutting-edge particle physics experiments is the stringent requirement on radioactive background. In particular, the prevention of radon, a radioactive noble gas which enters from ambient air and is also released by emanation from its omnipresent progenitor radium. In this paper we review various high-sensitivity radon detection techniques and approaches, applied in experiments looking for rare nuclear processes happening at low energies. They allow one to identify, quantitatively measure and finally suppress the numerous sources of radon in the detectors' components and plants.
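    Emanation measurements of the kind reviewed rely on the standard ingrowth law A(t) = A_eq·(1 − e^(−λt)): radon released from a radium-bearing component grows in toward secular equilibrium with the Rn-222 half-life of 3.8235 d. A small helper (illustrative, not tied to any specific detector in the review):

```python
import math

T_HALF_RN222 = 3.8235  # days, half-life of 222Rn
LAMBDA = math.log(2) / T_HALF_RN222

def rn_ingrowth_fraction(t_days):
    """Fraction of the equilibrium 222Rn activity reached after t days
    of emanation from a sealed, 226Ra-bearing sample."""
    return 1.0 - math.exp(-LAMBDA * t_days)
```

After one half-life the sample has reached half of its equilibrium emanation rate, and after roughly a month (more than five half-lives) the ingrowth has effectively saturated, which sets the minimum waiting time in sealed-volume emanation assays.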

  12. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and the information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to 'as-built' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad-group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
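    The fine-to-broad-group reduction mentioned here can be written compactly in the usual relative-sensitivity notation (the symbols below follow common practice and are assumed, not quoted from the chapter): for a response R and fine-group cross-section σ_g,

```latex
% relative sensitivity of the response R to the fine-group cross-section \sigma_g
S_g = \frac{\sigma_g}{R}\,\frac{\partial R}{\partial \sigma_g},
\qquad
% consistent broad-group sensitivity: the sum of its fine-group contributions
S_G = \sum_{g \in G} S_g .
```

Summing relative sensitivities in this way preserves the total first-order response change, which is the consistency requirement the chapter's reduction procedure addresses.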

  13. Design and development of a highly sensitive, field portable plasma source instrument for on-line liquid stream monitoring and real-time sample analysis

    International Nuclear Information System (INIS)

    Duan, Yixiang; Su, Yongxuan; Jin, Zhe; Abeln, Stephen P.

    2000-01-01

    The development of a highly sensitive, field-portable, low-power instrument for on-site, real-time liquid waste stream monitoring is described in this article. A series of factors, such as system sensitivity and portability, plasma source, sample introduction, desolvation system, power supply, and instrument configuration, were carefully considered in the design of the portable instrument. A newly designed, miniature, modified microwave plasma source was selected as the emission source for spectroscopy measurements, and an integrated small spectrometer with a charge-coupled device detector was installed for signal processing and detection. An innovative beam collection system with optical fibers was designed and used for emission signal collection. The microwave plasma can be sustained with various gases at relatively low power, and it possesses high detection capabilities for both metal and nonmetal pollutants, making it desirable for on-site, real-time liquid waste stream monitoring. An effective in situ sampling system was coupled with a high-efficiency desolvation device for directly sampling liquid samples into the plasma. A portable computer control system is used for data processing. The new, integrated instrument can be easily used for on-site, real-time monitoring in the field. The system possesses a series of advantages, including high sensitivity for metal and nonmetal elements; in situ sampling; compact structure; low cost; and ease of operation and handling. These advantages significantly overcome the limitations of previous monitoring techniques and make great contributions to environmental restoration and monitoring.

  14. High sensitivity troponin and valvular heart disease.

    Science.gov (United States)

    McCarthy, Cian P; Donnellan, Eoin; Phelan, Dermot; Griffin, Brian P; Enriquez-Sarano, Maurice; McEvoy, John W

    2017-07-01

    Blood-based biomarkers have been extensively studied in a range of cardiovascular diseases and have established utility in routine clinical care, most notably in the diagnosis of acute coronary syndrome (e.g., troponin) and the management of heart failure (e.g., brain-natriuretic peptide). The role of biomarkers is less well established in the management of valvular heart disease (VHD), in which the optimal timing of surgical intervention is often challenging. One promising biomarker that has been the subject of a number of recent VHD research studies is high sensitivity troponin (hs-cTn). Novel high-sensitivity assays can detect subclinical myocardial damage in asymptomatic individuals. Thus, hs-cTn may have utility in the assessment of asymptomatic patients with severe VHD who do not have a clear traditional indication for surgical intervention. In this state-of-the-art review, we examine the current evidence for hs-cTn as a potential biomarker in the most commonly encountered VHD conditions, aortic stenosis and mitral regurgitation. This review provides a synopsis of early evidence indicating that hs-cTn has promise as a biomarker in VHD. However, the impact of its measurement on clinical practice and VHD outcomes needs to be further assessed in prospective studies before routine clinical use becomes a reality. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
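    A probabilistic sensitivity analysis of this kind is commonly summarised by the probability that an intervention is cost-effective at a given willingness-to-pay threshold. A minimal Monte Carlo sketch; the cost and effect distributions and the threshold are entirely hypothetical:

```python
import random

random.seed(4)
WTP = 30000.0  # hypothetical willingness-to-pay per QALY

def draw_incremental_net_benefit():
    # propagate parameter uncertainty (here: two illustrative normal priors)
    d_effect = random.gauss(0.10, 0.05)    # uncertain incremental QALY gain
    d_cost = random.gauss(2000.0, 800.0)   # uncertain incremental cost
    return WTP * d_effect - d_cost         # incremental net monetary benefit

inb = [draw_incremental_net_benefit() for _ in range(20000)]
p_cost_effective = sum(b > 0 for b in inb) / len(inb)
```

Repeating the calculation over a grid of WTP values yields the cost-effectiveness acceptability curve that typically accompanies a Bayesian PSA.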

  16. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Full Text Available Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.
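    The tolerance idea can be made concrete on a toy linear programme: find the largest common percentage by which all objective coefficients may vary simultaneously and independently while the optimal vertex (basis) is unchanged. The LP below, max 3x + 2y over a small polytope, is illustrative and not taken from the paper:

```python
from itertools import product

# corners of the feasible region of: x + y <= 4, x <= 3, y <= 3, x, y >= 0
vertices = [(0, 0), (3, 0), (3, 1), (1, 3), (0, 3)]
c = (3.0, 2.0)  # nominal objective coefficients, maximised

def optimal_vertex(cv):
    return max(vertices, key=lambda v: cv[0] * v[0] + cv[1] * v[1])

base = optimal_vertex(c)

def basis_preserved(t):
    # objective differences between vertices are linear in c, so it suffices
    # to check the extreme perturbations c_i * (1 +/- t) of the tolerance box
    for signs in product((-1.0, 1.0), repeat=len(c)):
        cv = tuple(ci * (1 + s * t) for ci, s in zip(c, signs))
        if optimal_vertex(cv) != base:
            return False
    return True

# bisect for the maximum tolerance fraction
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if basis_preserved(mid):
        lo = mid
    else:
        hi = mid
tolerance_pct = 100 * lo
```

For these numbers the binding comparison is the vertex (1, 3), giving a 20% tolerance: within simultaneous, independent variations of that size, the optimal solution (3, 1) never changes.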

  17. Highly sensitive high resolution Raman spectroscopy using resonant ionization methods

    International Nuclear Information System (INIS)

    Owyoung, A.; Esherick, P.

    1984-05-01

    In recent years, the introduction of stimulated Raman methods has offered orders of magnitude improvement in spectral resolving power for gas phase Raman studies. Nevertheless, the inherent weakness of the Raman process suggests the need for significantly more sensitive techniques in Raman spectroscopy. In this paper we describe a new approach to this problem. Our new technique, which we call ionization-detected stimulated Raman spectroscopy (IDSRS), combines high-resolution SRS with highly sensitive resonant laser ionization to achieve an increase in sensitivity of over three orders of magnitude. The excitation/detection process involves three sequential steps: (1) population of a vibrationally excited state via stimulated Raman pumping; (2) selective ionization of the vibrationally excited molecule with a tunable uv source; and (3) collection of the ionized species at biased electrodes where they are detected as current in an external circuit

  18. Biosphere assessment for high-level radioactive waste disposal: modelling experiences and discussion on key parameters by sensitivity analysis in JNC

    International Nuclear Information System (INIS)

    Kato, Tomoko; Makino, Hitoshi; Uchida, Masahiro; Suzuki, Yuji

    2004-01-01

    In the safety assessment of the deep geological disposal system of the high-level radioactive waste (HLW), biosphere assessment is often necessary to estimate future radiological impacts on human beings (e.g. radiation dose). In order to estimate the dose, the surface environment (biosphere) into which future releases of radionuclides might occur and the associated future human behaviour needs to be considered. However, for a deep repository, such releases might not occur for many thousands of years after disposal. Over such timescales, it is impossible to predict with any certainty how the biosphere and human behaviour will evolve. To avoid endless speculation aimed at reducing such uncertainty, the 'Reference Biospheres' concept has been developed for use in the safety assessment of HLW disposal. As the aim of the safety assessment with a hypothetical HLW disposal system by JNC was to demonstrate the technical feasibility and reliability of the Japanese disposal concept for a range of geological and surface environments, some biosphere models were developed using the 'Reference Biospheres' concept and the BIOMASS Methodology. These models have been used to derive factors to convert the radionuclide flux from a geosphere to a biosphere into a dose (flux to dose conversion factors). Moreover, sensitivity analysis for parameters in the biosphere models was performed to evaluate and understand the relative importance of parameters. It was concluded that transport parameters in the surface environments, annual amount of food consumption, distribution coefficients on soils and sediments, transfer coefficients of radionuclides to animal products and concentration ratios for marine organisms would have larger influence on the flux to dose conversion factors than any other parameters. (author)

  19. Supercritical extraction of oleaginous: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Santos M.M.

    2000-01-01

    Full Text Available The economy has become global and competitive; thus the vegetable-oil extraction industries must advance toward minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oleaginous extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are necessary to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that strategies for high-performance operation can be proposed.

  20. A highly sensitive and specific capacitive aptasensor for rapid and label-free trace analysis of Bisphenol A (BPA) in canned foods.

    Science.gov (United States)

    Mirzajani, Hadi; Cheng, Cheng; Wu, Jayne; Chen, Jiangang; Eda, Shigotoshi; Najafi Aghdam, Esmaeil; Badri Ghavifekr, Habib

    2017-03-15

    A rapid, highly sensitive, specific and low-cost capacitive affinity biosensor is presented here for label-free and single-step detection of Bisphenol A (BPA). The sensor design allows rapid prototyping at low cost from printed circuit board material using benchtop equipment. High-sensitivity detection is achieved through the use of a BPA-specific aptamer as probe molecule and large electrodes to enhance the AC-electrothermal effect for long-range transport of BPA molecules toward the electrode surface. A capacitive sensing technique is used to determine the bound BPA level by measuring the sample/electrode interfacial capacitance of the sensor. The developed biosensor can detect the BPA level in 20 s and exhibits a large linear range from 1 fM to 10 pM, with a limit of detection (LOD) of 152.93 aM. This biosensor was applied to test BPA in canned food samples and could successfully recover the levels of spiked BPA. This sensor technology is demonstrated to be highly promising and reliable for rapid, sensitive and on-site monitoring of BPA in food samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. High sensitivity and selectivity in quantitative analysis of drugs in biological samples using 4-column multidimensional micro-UHPLC-MS enabling enhanced sample loading capacity.

    Science.gov (United States)

    de Vries, Ronald; Vereyken, Liesbeth; François, Isabelle; Dillen, Lieve; Vreeken, Rob J; Cuyckens, Filip

    2017-10-09

    Sensitivity is often a critical parameter in quantitative bioanalyses in drug development. For liquid-chromatography-based methods, sensitivity can be improved by reducing the column diameter, but practical sensitivity gains are limited by the reduced sample loading capacity on small internal diameter (I.D.) columns. We developed a set-up that has overcome these limitations in sample loading capacity. The set-up uses 4 columns with gradually decreasing column diameters along the flow-path (2.1 → 1 → 0.5 → 0.15 mm). Samples are pre-concentrated on-line on a 2.1 mm I.D. trapping column and back flushed to a 1 mm I.D. UHPLC analytical column and separated. The peak(s) of interest are transferred using heartcutting to a second trapping column (0.5 mm I.D.), which is back-flushed to a 0.15 mm I.D. micro-UHPLC analytical column for orthogonal separation. The proof of concept of the set-up was demonstrated by the simultaneous analysis of midazolam and 1'-hydroxy midazolam in plasma by injection of 80 μL of protein precipitated plasma. The 4-column funnel set-up proved to be robust and resulted in a 10-50 times better sensitivity compared to a trap-elute approach and 250-500 fold better compared to direct micro-UHPLC analysis. A lower limit of quantification of 100 fg/mL in plasma was obtained for both probe compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  3. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows a complex physical phenomenon to be reproduced. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output may be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original ones. A bibliographical review of sensitivity analysis on one hand and of the estimation of small failure probabilities on the other hand is first proposed. This step highlights the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology, reflecting the impact of a modification of the input density on the failure probability, is then explored. The proposed methods are applied to the CWNR case, which motivates this thesis. (author)
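    The density-departure idea, comparing each input's original density with its density conditional on reaching the failure region, can be sketched with plain Monte Carlo on a hypothetical limit-state function: the input whose failure-conditional mean shifts furthest from its prior mean is the more influential one. Everything below is an invented illustration, not the thesis's CWNR case:

```python
import random

random.seed(5)

def g(x1, x2):
    # hypothetical limit-state function: failure when g < 0
    return 3.0 - x1 - 0.2 * x2

N = 200_000
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
fails = [s for s in samples if g(*s) < 0]
p_f = len(fails) / N  # crude Monte Carlo failure probability

# departure of each input's failure-conditional mean from its prior mean (0)
shift1 = sum(x1 for x1, _ in fails) / len(fails)
shift2 = sum(x2 for _, x2 in fails) / len(fails)
```

Here `shift1` greatly exceeds `shift2`, correctly flagging x1 as dominant; in a subset-simulation setting the same comparison is made at each intermediate threshold rather than only at failure.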

  4. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.
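"Differentiation at minimum cost" here is the idea behind reverse-mode differentiation (backpropagation): one backward sweep yields all partial derivatives for roughly the cost of one function evaluation. A minimal sketch, not the paper's algorithms; the `Var` class and the example function are assumptions.

```python
import math

class Var:
    """Minimal reverse-mode automatic differentiation node."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:   # chain rule along each edge
            parent.backward(seed * local)

x, y = Var(1.5), Var(-0.7)
z = x.sin() + x * y    # f(x, y) = sin(x) + x*y
z.backward()           # fills x.grad = cos(x) + y and y.grad = x in one sweep
```

The sensitivities of a model output with respect to all of its inputs are obtained simultaneously, which is what makes the approach attractive for large models.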

  5. *Corresponding Author Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  6. CALDER: High-sensitivity cryogenic light detectors

    International Nuclear Information System (INIS)

    Casali, N.; Bellini, F.; Cardani, L.

    2017-01-01

    The current bolometric experiments searching for rare processes such as neutrinoless double-beta decay or dark matter interactions call for cryogenic light detectors with high sensitivity, large active area, and excellent scalability and radio-purity, in order to reduce their background budget. The CALDER project aims to develop such light detectors by implementing phonon-mediated Kinetic Inductance Detectors (KIDs). The goal of the project is the realization of a 5 × 5 cm² light detector working between 10 and 100 mK with a baseline resolution RMS below 20 eV. In this work the characteristics and performance of the prototype detectors developed in the first project phase are presented.

  7. High sensitive radiation detector for radiology dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Valente, M.; Malano, F. [Instituto de Fisica Enrique Gaviola, Oficina 102 FaMAF - UNC, Av. Luis Medina Allende, Ciudad Universitaria, 5000 Cordoba (Argentina); Molina, W.; Vedelago, J., E-mail: valente@famac.unc.edu.ar [Laboratorio de Investigaciones e Instrumentacion en Fisica Aplicada a la Medicina e Imagenes por Rayos X, Laboratorio 448 FaMAF - UNC, Ciudad Universitaria, 5000 Cordoba (Argentina)

    2014-08-15

    Fricke solution has a wide range of applications as a radiation detector and dosimeter. It is particularly appreciated for its comparative advantages, such as tissue equivalence when prepared in an aqueous medium like a gel matrix, continuous mapping capability, independence from dose rate and incident direction, and linear dose response. This work presents the development and characterization of a novel Fricke gel system based on modified chemical compositions that make possible its application in clinical radiology. The properties of the standard Fricke gel dosimeter for high dose levels are used as a starting point, and suitable chemical modifications are introduced and carefully investigated in order to attain high resolution at low dose ranges, like those corresponding to radiology interventions. The developed Fricke gel radiation dosimeter achieves the expected dose dependency, showing linear response in the dose range from 20 up to 4000 mGy. Systematic investigations covering several chemical compositions are carried out in order to obtain an adequate dosimeter response at low dose levels. A suitable composition among those studied is selected as a good candidate for low-dose-level radiation dosimetry, consisting of a modified Fricke solution fixed to a gel matrix containing benzoic acid along with sulfuric acid, ferrous sulfate, xylenol orange and ultra-pure reactive-grade water. Dosimeter samples are prepared in standard vials for in-phantom irradiation and further characterized by spectrophotometry, measuring visible light transmission and absorbance before and after irradiation. Samples are irradiated by typical kV X-ray tubes, and a calibrated Farmer-type ionization chamber is used as reference to measure dose rates inside phantoms at the vial locations. With the sensitive material composition optimized, dose-response curves show significant improvement in overall sensitivity at low dose levels.

  8. High sensitivity amplifier/discriminator for PWC's

    International Nuclear Information System (INIS)

    Hansen, S.

    1983-01-01

    The facility support group at Fermilab is designing and building a general purpose beam chamber for use in several locations at the laboratory. This PWC has 128 wires per plane spaced 1 mm apart. An initial production of 25 signal planes is anticipated. In proportional chambers, the size of the signal depends exponentially on the charge stored per unit length along the anode wire. As the wire spacing decreases, the capacitance per unit length decreases, thereby requiring increased applied voltage to restore the necessary charge per unit length. In practical terms, this phenomenon is responsible for difficulties in constructing chambers with less than 2 mm wire spacing. Chambers with 1 mm spacing are therefore frequently operated very near their breakdown point, and/or a high-gain gas containing organic compounds ("magic gas") is used. This argon/iso-butane mixture has three drawbacks: it is explosive when exposed to the air, it leaves a residue on the wires after extended use, and it is costly. An amplifier with higher sensitivity would reduce the problems associated with operating chambers with small wire spacings and allow them to be run a safe margin below their breakdown voltage even with an inorganic gas mixture such as argon/CO2, thus eliminating the need to use magic gas. Described here is a low-cost amplifier with a usable threshold of less than 0.5 μA. Data on the performance of this amplifier/discriminator in operation on a prototype beam chamber are given; these data show the advantages of the high sensitivity of this design.

  9. High sensitive radiation detector for radiology dosimetry

    International Nuclear Information System (INIS)

    Valente, M.; Malano, F.; Molina, W.; Vedelago, J.

    2014-08-01

    Fricke solution has a wide range of applications as a radiation detector and dosimeter. It is particularly appreciated for its comparative advantages, such as tissue equivalence when prepared in an aqueous medium like a gel matrix, continuous mapping capability, independence from dose rate and incident direction, and linear dose response. This work presents the development and characterization of a novel Fricke gel system based on modified chemical compositions that make possible its application in clinical radiology. The properties of the standard Fricke gel dosimeter for high dose levels are used as a starting point, and suitable chemical modifications are introduced and carefully investigated in order to attain high resolution at low dose ranges, like those corresponding to radiology interventions. The developed Fricke gel radiation dosimeter achieves the expected dose dependency, showing linear response in the dose range from 20 up to 4000 mGy. Systematic investigations covering several chemical compositions are carried out in order to obtain an adequate dosimeter response at low dose levels. A suitable composition among those studied is selected as a good candidate for low-dose-level radiation dosimetry, consisting of a modified Fricke solution fixed to a gel matrix containing benzoic acid along with sulfuric acid, ferrous sulfate, xylenol orange and ultra-pure reactive-grade water. Dosimeter samples are prepared in standard vials for in-phantom irradiation and further characterized by spectrophotometry, measuring visible light transmission and absorbance before and after irradiation. Samples are irradiated by typical kV X-ray tubes, and a calibrated Farmer-type ionization chamber is used as reference to measure dose rates inside phantoms at the vial locations. With the sensitive material composition optimized, dose-response curves show significant improvement in overall sensitivity at low dose levels.

  10. Vertically Aligned Nitrogen-Doped Carbon Nanotube Carpet Electrodes: Highly Sensitive Interfaces for the Analysis of Serum from Patients with Inflammatory Bowel Disease.

    Science.gov (United States)

    Wang, Qian; Subramanian, Palaniappan; Schechter, Alex; Teblum, Eti; Yemini, Reut; Nessim, Gilbert Daniel; Vasilescu, Alina; Li, Musen; Boukherroub, Rabah; Szunerits, Sabine

    2016-04-20

    The number of patients suffering from inflammatory bowel disease (IBD) is increasing worldwide. The development of noninvasive tests that are rapid, sensitive, specific, and simple would prevent patient discomfort and delays in diagnosis, and would allow follow-up of the status of the disease. Herein, we demonstrate the potential of vertically aligned nitrogen-doped carbon nanotube (VA-NCNT) electrodes for the sensitive electrochemical detection of lysozyme in serum, a protein that is up-regulated in IBD. To achieve selective lysozyme detection, biotinylated lysozyme aptamers were covalently immobilized onto the VA-NCNTs. Detection of lysozyme in serum was achieved by measuring the decrease in the peak current of the Fe(CN)6(3-/4-) redox couple by differential pulse voltammetry upon addition of the analyte. We achieved a detection limit as low as 100 fM with a linear range up to 7 pM, in line with the required demands for the determination of lysozyme levels in patients suffering from IBD. We attained the sensitive detection of biomarkers in clinical samples of healthy individuals and patients suffering from IBD and compared the results to a classical turbidimetric assay. The results clearly indicate that the newly developed sensor allows for a reliable and efficient analysis of lysozyme in serum.

  11. Wear-Out Sensitivity Analysis Project Abstract

    Science.gov (United States)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
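The workflow described above, sampling Weibull failure times, simulating a population, and sweeping the wear-out (shape) parameter, can be sketched as follows. All numbers (fleet size, spares, characteristic life, mission length) are illustrative assumptions, not ISS data.

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_sufficiency(shape, scale=30_000.0, n_units=10, n_spares=3,
                     mission_hours=8760.0, n_sims=20_000):
    """Fraction of simulated missions in which unit failures do not exceed the
    available spares.  Failure times are Weibull; shape > 1 models wear-out."""
    lifetimes = scale * rng.weibull(shape, size=(n_sims, n_units))
    failures = (lifetimes < mission_hours).sum(axis=1)
    return (failures <= n_spares).mean()

# Sweep the wear-out (shape) parameter from memoryless (1.0) upward and watch
# the probability of sufficiency shift.
sufficiency = {k: prob_sufficiency(shape=k) for k in (1.0, 2.0, 4.0)}
```

With a characteristic life well beyond the mission length, as assumed here, stronger wear-out concentrates failures later and sufficiency rises; the project's worst-case question is the same sweep read in the unfavorable direction.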

  12. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  13. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  14. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  15. High sensitivity thermal sensors on insulating diamond

    Energy Technology Data Exchange (ETDEWEB)

    Job, R. [Fernuniversitaet Hagen (Gesamthochschule) (Germany). Electron. Devices; Denisenko, A.V. [Fernuniversitaet Hagen (Gesamthochschule) (Germany). Electron. Devices; Zaitsev, A.M. [Fernuniversitaet Hagen (Gesamthochschule) (Germany). Electron. Devices; Melnikov, A.A. [Belarussian State Univ., Minsk (Belarus). HEII and FD; Werner, M. [VDI/VDE-IT, Teltow (Germany); Fahrner, W.R. [Fernuniversitaet Hagen (Gesamthochschule) (Germany). Electron. Devices

    1996-12-15

    Diamond is a promising material for developing sensors for applications in harsh environments. To increase the sensitivity of diamond temperature sensors, the use of thermionic hole emission (TE) over an energetic barrier formed at the interface between highly boron-doped p-type and intrinsic insulating diamond areas has been suggested. To study the TE of holes, a p-i-p diode was fabricated and analyzed by electrical measurements in the temperature range between 300 K and 700 K, and the experimental results were compared with numerical simulations of its electrical characteristics. Based on a model of the thermionic emission of carriers into an insulator, it is suggested that the temperature sensitivity of the p-i-p diode on diamond is strongly affected by the re-emission of holes from a group of donor-like traps located 0.7-1.0 eV above the valence band. The mechanism of thermal activation of the current includes a spatial redistribution of the potential: in the TE regime, it results from a decrease of the immobilized charge of the ionized traps within the i-zone of the diode and the corresponding lowering of the forward-biased barrier. The characteristics of the p-i-p diode were studied with regard to temperature-sensor applications. The temperature coefficient of resistance (TCR = -0.05 K⁻¹) for temperatures above 600 K is about four times larger than the maximum attainable TCR for conventional boron-doped diamond resistors. (orig.)

  16. Transportable high sensitivity small sample radiometric calorimeter

    International Nuclear Information System (INIS)

    Wetzel, J.R.; Biddle, R.S.; Cordova, B.S.; Sampson, T.E.; Dye, H.R.; McDow, J.G.

    1998-01-01

    This presentation describes a new small-sample, high-sensitivity transportable radiometric calorimeter that can be operated in different modes, contains an electrical calibration method, and can be used to develop secondary standards. Data from preliminary tests are presented to indicate the precision and accuracy of the instrument. The calorimeter and temperature-controlled bath currently require only a 30-in. by 20-in. tabletop area. The calorimeter is operated from a laptop computer using a unique measurement module capable of monitoring all necessary calorimeter signals. It can be operated in the normal calorimeter equilibration mode or, using twin chambers and an external electrical calibration method, as a comparison instrument. The sample chamber is 0.75 in. (1.9 cm) in diameter by 2.5 in. (6.35 cm) long; this size will accommodate most ²³⁸Pu heat standards manufactured in the past. The power range runs from 0.001 W to <20 W, the high end being limited only by sample size.

  17. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
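The acceptability idea above can be sketched on a toy MDP: sample the uncertain parameters, re-solve the MDP for each draw, and record how often the base-case policy stays optimal. The 2-state model, rewards, and Beta uncertainty below are illustrative assumptions, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(7)
GAMMA = 0.95

def optimal_action(p):
    """Toy 2-state MDP.  State 0 = sick; state 1 = healthy (absorbing, reward 1
    per period).  'treat' costs 1 and cures with probability p; 'wait' is free."""
    V = np.zeros(2)
    q_treat = q_wait = 0.0
    for _ in range(400):                    # value iteration to convergence
        q_treat = -1.0 + GAMMA * (p * V[1] + (1 - p) * V[0])
        q_wait = GAMMA * V[0]
        V = np.array([max(q_treat, q_wait), 1.0 + GAMMA * V[1]])
    return "treat" if q_treat > q_wait else "wait"

base = optimal_action(0.10)                 # base-case recommendation
# Probabilistic sensitivity analysis: sample the uncertain cure probability and
# record how often the base-case policy stays optimal (policy acceptability).
samples = rng.beta(2, 18, size=1000)        # uncertainty in p, mean 0.10
acceptability = np.mean([optimal_action(s) == base for s in samples])
```

Sweeping the willingness-to-accept threshold against `acceptability` traces out the policy acceptability curve described in the abstract.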

  18. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
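For context, the sampling-based estimators that the PDD method is benchmarked against can be sketched with the pick-freeze Monte Carlo estimator of the first-order Sobol indices on the standard Ishigami test function. This is an illustration of the comparison baseline, not of the PDD method itself.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Standard test function for global sensitivity analysis."""
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

rng = np.random.default_rng(3)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA = ishigami(A)

# Pick-freeze estimator of the first-order index:
#   S_i = Cov(f(A), f(B with column i taken from A)) / Var(f)
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(np.cov(fA, ishigami(ABi))[0, 1] / fA.var())
# Analytic first-order values for Ishigami: S1 ~ 0.314, S2 ~ 0.442, S3 = 0.
```

The slow 1/sqrt(N) convergence visible here is exactly what the PDD expansion is claimed to improve upon for smooth responses.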

  19. Online high sensitivity measurement system for transuranic aerosols

    International Nuclear Information System (INIS)

    Kordas, J.F.; Phelps, P.L.

    1976-01-01

    A measurement system for transuranic aerosols has been designed that will be able to withstand the corrosive nature of stack effluents and yet have extremely high sensitivity. It will be capable of measuring 1 maximum permissible concentration (MPC) of plutonium or americium in 30 minutes with a fractional standard deviation of less than 0.33. Background resulting from ²¹⁸Po is eliminated by alpha energy discrimination and a decay scheme analysis. A microprocessor controls all data acquisition, data reduction, and instrument calibration

  20. Highly sensitive detection of a current ripple

    International Nuclear Information System (INIS)

    Aoki, Takashi; Gushiken, Tutomu; Nishikigouri, Kazutaka; Kumada, Masayuki.

    1996-01-01

    At the HIMAC, six thyristor-controlled power supplies drive the two synchrotrons. These are three-terminal supplies, equipped with a positive output, a negative output, and a neutral point as a common-mode countermeasure. Since the electromagnet circuits are connected to the three output terminals, the circuits are of the three-line type. Inside the thyristor-controlled supplies there is an oscillation peculiar to such power supplies, and the resulting voltage variation induces current spikes. To assess the effectiveness of the common-mode countermeasures in the power-supply and electromagnet circuits, the following cross-check was considered: because the electromagnet current divides between the bridging resistance and the coil, monitoring the current on the bridging-resistance side allows the common-mode and normal-mode ripple components to be detected with high sensitivity. This was verified. The present state of improving the performance of the synchrotron power supplies is explained, the cross-check of the assessment method is described, and the method of measuring the ripple current and the measurement results are reported. (K.I.)

  1. Development of high sensitivity radon detectors

    CERN Document Server

    Takeuchi, Y; Kajita, T; Tasaka, S; Hori, H; Nemoto, M; Okazawa, H

    1999-01-01

    High sensitivity detectors for radon in air and in water have been developed. We use electrostatic collection and a PIN photodiode for these detectors. Calibration systems have also been constructed to obtain collection factors. As a result of the calibration study, the absolute humidity dependence of the radon detector for air is clearly observed in the region below about 1.6 g/m³. The calibration factors of the radon detector for air are 2.2±0.2 (counts/day)/(mBq/m³) at 0.08 g/m³ and 0.86±0.06 (counts/day)/(mBq/m³) at 11 g/m³. The calibration factor of the radon detector for water is 3.6±0.5 (counts/day)/(mBq/m³). The background level of the radon detector for air is 2.4±1.3 counts/day. As a result, a one-standard-deviation excess of signal above background should be detectable at 1.4 mBq/m³ in a one-day measurement at 0.08 g/m³.
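The reported calibration numbers convert directly between count rate and radon concentration. A small sketch of that arithmetic; note the quoted 1.4 mBq/m³ detectability presumably also folds in the counting statistics of a one-day measurement, so the simple background-only ratio below is smaller.

```python
# Reported figures for the air detector at 0.08 g/m^3 absolute humidity.
CAL = 2.2          # calibration factor, (counts/day)/(mBq/m^3)
BG = 2.4           # background, counts/day
BG_SIGMA = 1.3     # background uncertainty, counts/day

def radon_concentration(counts_per_day):
    """Net radon concentration in mBq/m^3 (readings at or below background -> 0)."""
    return max(counts_per_day - BG, 0.0) / CAL

# One-sigma excess over background alone, expressed as a concentration:
one_sigma_excess = BG_SIGMA / CAL   # ~0.6 mBq/m^3
```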

  2. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions of accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves

  3. Highly sensitive detection using microring resonator and nanopores

    Science.gov (United States)

    Bougot-Robin, K.; Hoste, J. W.; Le Thomas, N.; Bienstman, P.; Edel, J. B.

    2016-04-01

    One of the most significant challenges facing physical and biological scientists is the accurate detection and identification of single molecules in free-solution environments. The ability to perform such sensitive and selective measurements opens new avenues for a large number of applications in biological, medical and chemical analysis, where small sample volumes and low analyte concentrations are the norm. Access to information at the single- or few-molecule scale is made possible by a fine combination of recent technological advances. We propose a novel detection method that combines highly sensitive label-free resonant sensing, obtained with high-Q microcavities, with position control in nanoscale pores (nanopores). In addition to being label-free and highly sensitive, our technique is immobilization-free and does not rely on surface biochemistry to bind probes on a chip. This is a significant advantage, both in terms of biological uncertainties and of fewer biological preparation steps. Through the combination of high-Q photonic structures with translocation through a nanopore, at the end of a pipette or through a solid-state membrane, we believe significant advances can be achieved in the field of biosensing. Silicon microrings, highly advantageous in terms of sensitivity, multiplexing and microfabrication, are chosen for this study. As for the nanopores, we consider both a nanopore at the end of a nanopipette, approached with nanometre-precise mechanical control, and a solid-state nanopore fabricated through a membrane supporting the ring. Both configurations are discussed in this paper, in terms of implementation and sensitivity.

  4. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from a direct uncertainty analysis calculation and correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only known as well as its geometric and material properties. The goal of establishing this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)

  5. Sensitivity analysis of the Two Geometry Method

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1993-09-01

    The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low-enriched UF₆ gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposit activities larger than several times the gas activity, pipe diameters smaller than 40 mm, and pressures below 150 Pa. This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case assumptions with regard to the measurement conditions, and on realistic assumptions with respect to the false-alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty that are experimentally inaccessible. (orig.)

  6. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes to counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical design of the bridge. The model and resistance uncertainties were taken into account via the LHS simulation method.
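The LHS simulation method mentioned above stratifies each input marginal so that even small samples cover the whole range of every variable. A minimal sketch with `scipy.stats.qmc`; the two input distributions are illustrative assumptions, not the bridge model's.

```python
import numpy as np
from scipy.stats import norm, qmc

# Latin Hypercube Sampling: each of the d marginals is split into n
# equal-probability bins, with exactly one sample per bin per dimension.
n = 100
sampler = qmc.LatinHypercube(d=2, seed=11)
unit = sampler.random(n)                       # points in [0, 1)^2

# Map to hypothetical input distributions (illustrative values only):
strength = norm(355.0, 20.0).ppf(unit[:, 0])   # steel yield strength, MPa
weight = norm(4000.0, 120.0).ppf(unit[:, 1])   # deck weight, tons
```

Feeding such samples through the structural model yields the response statistics and sensitivity measures with far fewer runs than plain Monte Carlo.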

  7. High sensitivity MOSFET-based neutron dosimetry

    International Nuclear Information System (INIS)

    Fragopoulou, M.; Konstantakos, V.; Zamani, M.; Siskos, S.; Laopoulos, T.; Sarrabayrouse, G.

    2010-01-01

    A new dosemeter based on a metal-oxide-semiconductor field-effect transistor, sensitive to both neutrons and gamma radiation, was manufactured at the LAAS-CNRS Laboratory, Toulouse, France. In order to be used for neutron dosimetry, a thin film of lithium fluoride was deposited on the surface of the gate of the device. The characteristics of the dosemeter, such as the dependence of its response on neutron dose and dose rate, were investigated. The studied dosemeter was very sensitive to gamma rays compared to other dosemeters proposed in the literature. Its response to thermal neutrons was found to be much higher than to fast neutrons and gamma rays.

  8. Highly Sensitive and High-Throughput Analysis of Plant Hormones Using MS-Probe Modification and Liquid Chromatography–Tandem Mass Spectrometry: An Application for Hormone Profiling in Oryza sativa

    Science.gov (United States)

    Kojima, Mikiko; Kamada-Nobusada, Tomoe; Komatsu, Hirokazu; Takei, Kentaro; Kuroha, Takeshi; Mizutani, Masaharu; Ashikari, Motoyuki; Ueguchi-Tanaka, Miyako; Matsuoka, Makoto; Suzuki, Koji; Sakakibara, Hitoshi

    2009-01-01

    We have developed a highly sensitive and high-throughput method for the simultaneous analysis of 43 molecular species of cytokinins, auxins, ABA and gibberellins. This method consists of an automatic liquid handling system for solid phase extraction and ultra-performance liquid chromatography (UPLC) coupled with a tandem quadrupole mass spectrometer (qMS/MS) equipped with an electrospray interface (ESI; UPLC-ESI-qMS/MS). In order to improve the detection limit of negatively charged compounds, such as gibberellins, we chemically derivatized fractions containing auxin, ABA and gibberellins with bromocholine, which carries a quaternary ammonium functional group. This modification, which we call an ‘MS-probe', gives these hormone derivatives a positive charge and permits all compounds to be measured in the positive ion mode with UPLC-ESI-qMS/MS in a single run. Consequently, quantification limits of gibberellins improved up to 50-fold. Our current method can process 180 plant samples simultaneously. Application of this method to plant hormone profiling enabled us to draw organ distribution maps of hormone species in rice and also to identify interactions among the four major hormones in the rice gibberellin signaling mutants, gid1-3, gid2-1 and slr1. Combining the results of hormone profiling data with transcriptome data in the gibberellin signaling mutants allows us to analyze relationships between changes in gene expression and hormone metabolism. PMID:19369275

  9. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk analysis considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
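The idea of reusing existing Monte Carlo failure samples to get distribution-parameter sensitivities at almost no extra cost can be sketched with the score-function estimator; the limit state, distribution, and all numbers below are hypothetical stand-ins, not the disk model from the paper:

```python
import numpy as np

# Sketch (not the paper's code): score-function estimator for the
# sensitivity of a failure probability to the mean of a normal input,
# reusing the same Monte Carlo samples that estimated the probability.

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0
x = rng.normal(mu, sigma, 200_000)

fail = x > 14.0                      # hypothetical failure indicator
p_f = fail.mean()                    # probability of failure

# d ln f(x; mu) / d mu = (x - mu) / sigma^2 for a normal density, so
# dP_f/dmu = E[ I * (x - mu) / sigma^2 ] -- no extra model runs needed.
score = (x - mu) / sigma**2
dpf_dmu = (fail * score).mean()

print(p_f, dpf_dmu)
```

Because the indicator samples already exist, the same loop that estimates the failure probability yields the derivative estimate essentially for free; only the score weights are new.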

  10. Spatial Heterodyne Observations of Water (SHOW) vapour in the upper troposphere and lower stratosphere from a high altitude aircraft: Modelling and sensitivity analysis

    Science.gov (United States)

    Langille, J. A.; Letros, D.; Zawada, D.; Bourassa, A.; Degenstein, D.; Solheim, B.

    2018-04-01

A spatial heterodyne spectrometer (SHS) has been developed to measure the vertical distribution of water vapour in the upper troposphere and the lower stratosphere with a high vertical resolution (∼500 m). The Spatial Heterodyne Observations of Water (SHOW) instrument combines an imaging system with a monolithic field-widened SHS to observe limb scattered sunlight in a vibrational band of water (1363 nm-1366 nm). The instrument has been optimized for observations from NASA's ER-2 aircraft as a proof-of-concept for a future low earth orbit satellite deployment. A robust model has been developed to simulate SHOW ER-2 limb measurements and retrievals. This paper presents the simulation of the SHOW ER-2 limb measurements along a hypothetical flight track and examines the sensitivity of the measurement and retrieval approach. Water vapour fields from an Environment and Climate Change Canada forecast model are used to represent realistic spatial variability along the flight path. High spectral resolution limb scattered radiances are simulated using the SASKTRAN radiative transfer model. It is shown that the SHOW instrument onboard the ER-2 is capable of resolving the water vapour variability in the UTLS from approximately 12 km to 18 km with ±1 ppm accuracy. Vertical resolutions between 500 m and 1 km are feasible. The along track sampling capability of the instrument is also discussed.

  11. Environmental Sensitivity in Children: Development of the Highly Sensitive Child Scale and Identification of Sensitivity Groups

    Science.gov (United States)

    Pluess, Michael; Assary, Elham; Lionetti, Francesca; Lester, Kathryn J.; Krapohl, Eva; Aron, Elaine N.; Aron, Arthur

    2018-01-01

    A large number of studies document that children differ in the degree they are shaped by their developmental context with some being more sensitive to environmental influences than others. Multiple theories suggest that "Environmental Sensitivity" is a common trait predicting the response to negative as well as positive exposures.…

  12. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors

  13. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
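The first step of the study, local sensitivity analysis about the nominal inputs, can be illustrated with a generic central-difference sketch; the amplitude model and parameter values below are hypothetical stand-ins, not the paper's analytical periodic-error model:

```python
import numpy as np

# Generic local-sensitivity sketch (central finite differences about
# nominal inputs). The amplitude model is an invented stand-in: a
# first-order periodic error growing with misalignment angle alpha,
# frequency-mixing angle beta, and ellipticity eps (all in radians).

def first_order_error(alpha, beta, eps):
    return 5.0 * np.sin(alpha) ** 2 + 20.0 * np.sin(beta) ** 2 + 2.0 * eps

nominal = {"alpha": 0.02, "beta": 0.01, "eps": 0.005}

def local_sensitivities(f, x0, h=1e-6):
    """Central-difference derivative of f with respect to each input."""
    sens = {}
    for name in x0:
        hi = dict(x0); hi[name] += h
        lo = dict(x0); lo[name] -= h
        sens[name] = (f(**hi) - f(**lo)) / (2 * h)
    return sens

sens = local_sensitivities(first_order_error, nominal)
print(sens)
```

A global (e.g. Sobol') analysis, as in the paper's second step, would instead sample the inputs over their full uncertainty ranges rather than perturbing about one nominal point.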

  14. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
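The reactivity index itself is straightforward to compute for a linearized model: it is the largest eigenvalue of the Hermitian (symmetric) part of the community matrix. A minimal sketch, using an illustrative stable matrix rather than one of the paper's models:

```python
import numpy as np

# Reactivity for a continuous-time linearization x' = A x: the largest
# eigenvalue of H = (A + A^T)/2. A stable system with positive
# reactivity transiently amplifies some perturbations before decaying.
# The matrix below is an invented stable community matrix.

A = np.array([[-1.0,  0.0],
              [ 5.0, -2.0]])   # eigenvalues -1, -2: asymptotically stable

H = (A + A.T) / 2.0
reactivity = np.linalg.eigvalsh(H).max()

stable = np.linalg.eigvals(A).real.max() < 0
print(stable, reactivity)      # stable, yet reactive (reactivity > 0)
```

The sensitivity analyses in the paper then differentiate this eigenvalue (and related indices) with respect to model parameters using matrix calculus.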

  15. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices

  16. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
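The post-processing step can be sketched on a toy model: fit a small Legendre-type PCE by non-intrusive least-squares regression, then read the Sobol' indices directly off the squared coefficients. The two-input model and truncated basis below are illustrative, far simpler than the paper's examples:

```python
import numpy as np

# Sobol' indices as a post-processing of PCE coefficients, assuming
# uniform inputs on [-1, 1]. Toy model: y = x1 + 0.5*x2 + x1*x2, which
# lies exactly in the span of the degree-1 basis plus interaction term.

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(-1.0, 1.0, size=(n, 2))
y = x[:, 0] + 0.5 * x[:, 1] + x[:, 0] * x[:, 1]

# Non-intrusive regression PCE with basis {1, x1, x2, x1*x2};
# E[psi^2] = 1, 1/3, 1/3, 1/9 for independent uniform inputs.
Psi = np.column_stack([np.ones(n), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

norms = np.array([1.0, 1 / 3, 1 / 3, 1 / 9])
partial_var = coef**2 * norms          # variance carried by each term
total_var = partial_var[1:].sum()      # mean term carries no variance

S1 = partial_var[1] / total_var        # first-order index of x1
S2 = partial_var[2] / total_var        # first-order index of x2
S12 = partial_var[3] / total_var       # interaction index
print(S1, S2, S12)
```

Once the coefficients are estimated, every Sobol' index is an algebraic ratio of squared coefficients, which is the cost reduction the abstract describes.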

  17. High sensitivity optical measurement of skin gloss

    NARCIS (Netherlands)

    Ezerskaia, A.; Ras, Arno; Bloemen, Pascal; Pereira, S.F.; Urbach, Paul; Varghese, Babu

    2017-01-01

We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection.

  18. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for reductions in system or generic failure probabilities as high as a factor of one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates

  19. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations in its inputs. Variance-based methods quantify the share of the model response variance due to each input variable and to each subset of input variables. The first topic of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  20. Time-Resolved Analysis of a Highly Sensitive Förster Resonance Energy Transfer Immunoassay Using Terbium Complexes as Donors and Quantum Dots as Acceptors

    Directory of Open Access Journals (Sweden)

    Niko Hildebrandt

    2007-01-01

Full Text Available CdSe/ZnS core/shell quantum dots (QDs are used as efficient Förster Resonance Energy Transfer (FRET acceptors in a time-resolved immunoassay with Tb complexes as donors providing a long-lived luminescence decay. A detailed decay time analysis of the FRET process is presented. QD FRET sensitization is evidenced by a more than 1000-fold increase of the QD luminescence decay time reaching ca. 0.5 milliseconds, the same value to which the Tb donor decay time is quenched due to FRET to the QD acceptors. The FRET system has an extremely large Förster radius of approx. 100 Å and more than 70% FRET efficiency with a mean donor-acceptor distance of ca. 84 Å, confirming the applied biotin-streptavidin binding system. Time-resolved measurement allows for suppression of short-lived emission due to background fluorescence and directly excited QDs. By this means a detection limit of 18 attomol QDs within the immunoassay is accomplished, an improvement of more than two orders of magnitude compared to commercial systems.
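The reported efficiency can be checked against the standard FRET relation E = R0^6 / (R0^6 + r^6) using the values quoted in the abstract (R0 ≈ 100 Å, r ≈ 84 Å):

```python
# FRET efficiency from the abstract's Förster radius (R0 ~ 100 Å) and
# mean donor-acceptor distance (r ~ 84 Å): E = R0^6 / (R0^6 + r^6).

def fret_efficiency(r_angstrom: float, r0_angstrom: float) -> float:
    ratio6 = (r0_angstrom / r_angstrom) ** 6
    return ratio6 / (ratio6 + 1.0)

E = fret_efficiency(84.0, 100.0)
print(round(E, 2))   # ~0.74, consistent with the reported >70% efficiency
```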

  1. High-sensitivity direct analysis of aflatoxins in peanuts and cereal matrices by ultra-performance liquid chromatography with fluorescence detection involving a large volume flow cell.

    Science.gov (United States)

    Oulkar, Dasharath; Goon, Arnab; Dhanshetty, Manisha; Khan, Zareen; Satav, Sagar; Banerjee, Kaushik

    2018-04-03

This paper reports a sensitive and cost-effective method of analysis for aflatoxins B1, B2, G1 and G2. The sample preparation method was primarily optimised in peanuts, followed by its validation in a range of peanut-processed products and cereal (rice, corn, millets) matrices. Peanut slurry [12.5 g peanut + 12.5 mL water] was extracted with methanol:water (8:2, 100 mL), cleaned through an immunoaffinity column and thereafter measured directly by ultra-performance liquid chromatography-fluorescence detection (UPLC-FLD), within a chromatographic runtime of 5 minutes. The use of a large volume flow cell in the FLD nullified the requirement for any post-column derivatisation and provided the lowest ever reported limits of quantification of 0.025 μg/kg for B1 and G1 and 0.01 μg/kg for B2 and G2. The single laboratory validation of the method provided acceptable selectivity, linearity, recovery and precision for reliable quantification in all the test matrices, and demonstrated compliance with the EC 401/2006 guidelines for analytical quality control of aflatoxins in foodstuffs.

  2. Simple Sensitivity Analysis for Orion GNC

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
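One of the sensitivity measures described, estimating how success probability depends on a dispersed input, can be sketched by binning the Monte Carlo samples on that input; this is a hedged reconstruction with invented dispersions and an invented performance metric, not the actual CFT code:

```python
import numpy as np

# Sketch of a CFT-style measure: the probability of meeting a
# requirement, estimated within quantile bins of one dispersed input.
# A large spread across bins flags that input as a driving factor.
# All quantities here (mass dispersion, miss distance) are made up.

rng = np.random.default_rng(2)
n = 50_000
mass = rng.normal(0.0, 1.0, n)           # hypothetical dispersed input
noise = rng.normal(0.0, 1.0, n)
miss_distance = mass + 0.2 * noise       # hypothetical performance metric
success = miss_distance < 1.0            # requirement satisfied?

def binned_success_prob(x, ok, bins=5):
    """Success probability per quantile bin of input x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    return np.array([ok[idx == b].mean() for b in range(bins)])

probs = binned_success_prob(mass, success)
sensitivity = probs.max() - probs.min()  # spread across bins
print(probs.round(3), sensitivity)
```

An input with near-constant bin probabilities would be insensitive; here the spread is large because the metric depends strongly on the binned input.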

  3. Sensitivity analysis of floating offshore wind farms

    International Nuclear Information System (INIS)

    Castro-Santos, Laura; Diaz-Casas, Vicente

    2015-01-01

Highlights: • Develop a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • It helps investors to make decisions in the future. - Abstract: The future of offshore wind energy will be in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm. It will show how much the output variables can vary when the input variables change. For this purpose two different scenarios will be taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes in terms of economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs’ scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors to take into account these variables in the development of floating offshore wind farms in the future
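The kind of one-at-a-time sweep behind such a sensitivity analysis can be sketched for one of the output indexes, the levelized cost of energy; all cost and energy figures below are invented for illustration and are not from the paper:

```python
# Illustrative one-at-a-time sensitivity sweep on LCOE,
#   LCOE = (CAPEX + sum of discounted OPEX) / (sum of discounted energy),
# recomputed while one input is varied. Every number here is made up.

def lcoe(capex, opex_per_year, energy_mwh_per_year, rate=0.08, years=20):
    disc_opex = sum(opex_per_year / (1 + rate) ** t for t in range(1, years + 1))
    disc_energy = sum(energy_mwh_per_year / (1 + rate) ** t for t in range(1, years + 1))
    return (capex + disc_opex) / disc_energy   # currency units per MWh

base = lcoe(capex=400e6, opex_per_year=15e6, energy_mwh_per_year=600_000)

# +20% on two inputs, one at a time, to compare their leverage on LCOE
d_capex = lcoe(480e6, 15e6, 600_000) - base
d_energy = lcoe(400e6, 15e6, 720_000) - base
print(round(base, 2), round(d_capex, 2), round(d_energy, 2))
```

Repeating the sweep for every life-cycle cost and site variable (distance to shore, wind scale parameter, tariff, number of turbines) ranks the inputs by their effect on each economic index.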

  4. High sensitivity optical measurement of skin gloss

    OpenAIRE

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F.; Urbach, H. Paul; Varghese, Babu

    2017-01-01

We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and on the other on the sum over the intensities of pixels above threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype.

  5. Simple and sensitive monitoring of sulfonamide veterinary residues in milk by stir bar sorptive extraction based on monolithic material and high performance liquid chromatography analysis.

    Science.gov (United States)

    Huang, Xiaojia; Qiu, Ningning; Yuan, Dongxing

    2009-11-13

A simple, rapid, and sensitive method for the quantitative monitoring of five sulfonamide antibacterial residues (SAs) in milk was developed using stir bar sorptive extraction (SBSE) coupled to high performance liquid chromatography with diode array detection. The analytes were concentrated by SBSE using a poly(vinylimidazole-divinylbenzene) monolithic material as the coating. The extraction procedure was very simple: milk was diluted with water and then subjected directly to sorptive extraction; no removal of fats and proteins from the samples was required. To achieve optimum extraction performance for SAs, several parameters, including extraction and desorption time, desorption solvent, ionic strength and pH value of the sample matrix, were investigated. Under the optimized experimental conditions, low detection limits (S/N=3) and quantification limits (S/N=10) for the target compounds were achieved, within the ranges of 1.30-7.90 ng/mL and 4.29-26.3 ng/mL in spiked milk, respectively. Good linearities were obtained for SAs, with correlation coefficients (R(2)) above 0.996. Finally, the proposed method was successfully applied to the determination of SAs in different milk samples, and satisfactory recoveries of spiked target compounds in real samples were obtained.

  6. Analysis of the relationship of leptin, high-sensitivity C-reactive protein, adiponectin, insulin, and uric acid to metabolic syndrome in lean, overweight, and obese young females.

    Science.gov (United States)

    Abdullah, Abdul Ridha; Hasan, Haydar A; Raigangar, Veena L

    2009-02-01

Over the last decade there has been a steady rise in obesity and co-morbidity, but little is known about the rate of metabolic dysfunction among young adults in the United Arab Emirates. Various factors have been implicated as biomarkers of metabolic syndrome. The objective of this study was to analyze the relationships of leptin, C-reactive protein (CRP), adiponectin, insulin, and uric acid to the metabolic syndrome components in lean, overweight, and obese young females. This was a cross-sectional study of 69 apparently healthy young females, who were classified according to their body mass index (BMI, kg/m(2)) into three groups: lean (BMI below 25), overweight (BMI 25 to below 30), and obese (BMI 30 or above). Estimated biomarkers were: leptin, insulin, adiponectin, high-sensitivity [hs]-CRP, uric acid, blood sugar, high-density lipoprotein (HDL), low-density lipoprotein (LDL), total cholesterol, and triglycerides (TG). Anthropometric measures, blood pressure, and homeostasis model assessment-insulin resistance (HOMA-IR) were also measured. Serum leptin, hs-CRP, insulin, and uric acid increased significantly across the three groups. Only one significant correlation with metabolic syndrome components was found in lean subjects (leptin vs. waist circumference, r = 0.48), as opposed to six in the obese group (hs-CRP vs. waist circumference and systolic blood pressure [SBP], r = 0.45 and r = -0.41, respectively; insulin vs. diastolic blood pressure [DBP], r = 0.47; adiponectin vs. blood sugar, r = -0.44; and uric acid vs. waist circumference and TG, r = 0.5 and r = 0.51, respectively). Estimation of the levels of the studied biomarkers could be an important tool for early detection of metabolic syndrome before the appearance of its frank components. Uric acid seems to be the most reliable biomarker to identify obese subjects with metabolic syndrome.

  7. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol’s), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult… We propose using regionalized sensitivity analysis in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring…
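The SRC regression method mentioned in the abstract can be sketched in a few lines: regress the Monte Carlo outputs on the inputs and standardize the coefficients, so that the squared SRCs approximate each input's share of output variance for near-linear models. The "building model" here is a hypothetical linear stand-in:

```python
import numpy as np

# Standardized regression coefficients (SRC) from a Monte Carlo sample:
#   SRC_i = beta_i * std(x_i) / std(y)
# Squared SRCs approximate first-order variance shares for near-linear
# models. The linear "heating demand" model below is invented.

rng = np.random.default_rng(3)
n = 20_000
x = rng.uniform(0.0, 1.0, size=(n, 3))               # three design inputs
y = 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]    # stand-in model output

X = np.column_stack([np.ones(n), x])                 # intercept + inputs
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

src = beta[1:] * x.std(axis=0) / y.std()
print((src**2).round(3))   # approximate variance share of each input
```

For strongly non-linear or non-monotonic models the squared SRCs no longer sum to one, which is one reason variance-based methods such as Sobol's are also listed above.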

  8. High sensitivity optical measurement of skin gloss.

    Science.gov (United States)

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F; Urbach, H Paul; Varghese, Babu

    2017-09-01

    We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and on the other on the sum over the intensities of pixels above threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method was compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents.
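The two gloss measures described, the slope of the intensity gradient in the specular-to-diffuse transition and the summed intensity of pixels above a threshold, can be sketched on a synthetic 1-D intensity profile (a stand-in for a real camera row; the lobe width, levels, and threshold are invented):

```python
import numpy as np

# Hedged sketch of the two gloss measures on a synthetic profile:
# a narrow specular lobe on top of a flat diffuse background.

angles = np.linspace(-30.0, 30.0, 601)            # degrees from specular peak
diffuse = 10.0
specular = 40.0 * np.exp(-(angles / 4.0) ** 2)    # invented specular lobe
profile = diffuse + specular                      # synthetic camera row

# Measure 1: steepest intensity gradient in the transition region
grad = np.gradient(profile, angles)
transition = (angles > 2.0) & (angles < 8.0)      # invented transition window
slope_measure = np.abs(grad[transition]).max()

# Measure 2: sum of pixel intensities above a threshold
threshold = 15.0
sum_measure = profile[profile > threshold].sum()
print(round(slope_measure, 2), round(sum_measure, 1))
```

Glossier surfaces concentrate light in the specular lobe, raising both measures; the slope measure is what gives the extra sensitivity in the low-gloss regime described above.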

  9. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

Sensitivity analysis is carried out to validate the model formulation. A modified model has been developed to predict the future energy requirements for coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model, as compared to a conventional time series model, indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6% and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and also because the confidence level decreases as the projection is made further into the future. (author)

  10. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  11. Analysis of AtGUS1 and AtGUS2 in Arabidopsis root apex by a highly sensitive TSA-MISH method.

    Science.gov (United States)

    Bruno, Leonardo; Ronchini, Matteo; Gagliardi, Olimpia; Corinti, Tamara; Chiappetta, Adriana; Gerola, Paolo; Bitonti, Maria B

    2015-01-01

A new highly sensitive whole-mount in situ hybridization method based on tyramide signal amplification (TSA-MISH) was developed, and a combined GFP detection and TSA-MISH procedure was applied for the first time in plants to precisely define the spatial pattern of AtGUS1 and AtGUS2 expression in the root apex. β-glucuronidases (GUSs) belonging to the glycosyl hydrolase (GH) 79 family are widely distributed in plants, but their functional role has not yet been fully investigated. In the model system Arabidopsis thaliana, three different AtGUS genes have been identified, which encode proteins with putatively different fates. Endogenous GUS expression has been detected in different organs and tissues, but the cyto-histological domains of gene expression remain unclear. The results reported here show co-expression of AtGUS1 and AtGUS2 in different functional zones of the root apex (the cap central zone, the root cap meristem, the stem cell niche and the cortical cell layers of the proximal meristem), while AtGUS2 is exclusively expressed in the cap peripheral layer and in the epidermis of the elongation zone. Interestingly, both genes are not expressed in the stelar portion of the proximal meristem. A spatial (cortex vs. stele) and temporal (proximal meristem vs. transition zone) regulation of AtGUS1 and AtGUS2 expression is therefore active in the root apex. This expression pattern, although globally consistent with the involvement of GUS activity in both cell proliferation and elongation, clearly indicates that AtGUS1 and AtGUS2 could control distinct downstream processes depending on the developmental context and the interaction with other players in root growth control. In the future, the newly developed approaches may well prove very useful for dissecting such interactions.

  12. High-sensitivity C-reactive protein to detect metabolic syndrome in a centrally obese population: a cross-sectional analysis

    Directory of Open Access Journals (Sweden)

    den Engelsen Corine

    2012-03-01

Full Text Available Abstract Background People with central obesity have an increased risk of developing the metabolic syndrome, type 2 diabetes and cardiovascular disease. However, a substantial proportion of obese individuals have no other cardiovascular risk factors besides their obesity. High-sensitivity C-reactive protein (hs-CRP), a marker of systemic inflammation and a predictor of type 2 diabetes and cardiovascular disease, is associated with the metabolic syndrome and its separate components. We evaluated the use of hs-CRP to discriminate between centrally obese people with and without the metabolic syndrome. Methods 1165 people with central obesity but without any previous diagnosis of hypertension, dyslipidemia, diabetes or cardiovascular disease, aged 20-70 years, underwent a physical examination and laboratory assays to determine the presence of the metabolic syndrome (NCEP ATP III criteria). Multivariable linear regression analyses were performed to assess which metabolic syndrome components were independently associated with hs-CRP. A ROC curve was drawn and the area under the curve was calculated to evaluate whether hs-CRP was capable of predicting the presence of the metabolic syndrome. Results Median hs-CRP levels were significantly higher in individuals with central obesity with the metabolic syndrome (n = 417; 35.8%) than in those without it: 2.2 mg/L (IQR 1.2-4.0) versus 1.7 mg/L (IQR 1.0-3.4); p < … Conclusions hs-CRP has limited capacity to predict the presence of the metabolic syndrome in a population with central obesity.
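The record above evaluates discrimination with a ROC curve. As background, the area under that curve equals the probability that a randomly chosen case scores higher than a randomly chosen non-case, which can be estimated directly by pairwise comparison. A minimal sketch with made-up illustrative values (not data from the study):

```python
# Rank-based (Mann-Whitney) estimate of the area under the ROC curve.
# All numbers are synthetic illustrations, not values from the study.

def roc_auc(scores_pos, scores_neg):
    """P(score_pos > score_neg), counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hs-CRP-like scores (mg/L) for subjects with / without the syndrome
with_ms = [2.2, 4.0, 1.9, 3.1, 2.8]
without_ms = [1.7, 1.0, 2.5, 1.2, 1.6]

auc = roc_auc(with_ms, without_ms)
print(round(auc, 2))   # → 0.92
```

An AUC of 0.5 means no discrimination and 1.0 perfect discrimination; a "limited capacity" finding corresponds to an AUC only modestly above 0.5.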

  13. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust biological responses are with respect to changes in biological parameters, and into which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. Global sensitivity analysis approaches, on the other hand, have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
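As background to the local approach described in this review, dimensionless local sensitivity coefficients can be estimated by central finite differences around a nominal parameter set. The two-parameter model below is a made-up illustration, not one from the review:

```python
# Local sensitivity analysis: normalized coefficients S_i = (p_i / y) * dy/dp_i,
# estimated with central finite differences around a nominal parameter set.
# The two-parameter toy model is illustrative only.

def model(p):
    k1, k2 = p
    return k1 ** 2 / (1.0 + k2)   # toy steady-state response

def local_sensitivities(f, p0, rel_step=1e-6):
    y0 = f(p0)
    sens = []
    for i, pi in enumerate(p0):
        h = rel_step * pi
        up = list(p0); up[i] = pi + h
        dn = list(p0); dn[i] = pi - h
        dydp = (f(up) - f(dn)) / (2.0 * h)
        sens.append(pi / y0 * dydp)   # dimensionless (log-log) sensitivity
    return sens

p_nominal = [2.0, 1.0]
print(local_sensitivities(model, p_nominal))   # ≈ [2.0, -0.5]
```

For this model the analytic values are S1 = 2 and S2 = -k2/(1+k2) = -0.5, so the finite-difference estimates can be checked directly.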

  14. A new importance measure for sensitivity analysis

    International Nuclear Information System (INIS)

    Liu, Qiao; Homma, Toshimitsu

    2010-01-01

Uncertainty is an integral part of risk assessment of complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution was proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear nonmonotonic mathematical model and a risk model. In addition, a comparison of this new importance measure with several other importance measures was carried out and the differences between these measures were explained. (author)
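A measure of this type can be approximated by Monte Carlo: repeatedly fix one input, compare the conditional output CDF with the unconditional one, and average the discrepancy over the fixed values. A rough sketch under assumed toy distributions (the model, the input distributions, and the CDF distance used here are illustrative, not the paper's exact formulation):

```python
import random

# Monte Carlo sketch of a CDF-shift importance measure:
# delta_i = E over X_i of the discrepancy between the unconditional output
# CDF and the CDF conditional on X_i being fixed.  Toy model (illustrative):
# Y = X1 + 3*X2^2 with X1, X2 ~ Uniform(0, 1), so X2 should matter more.
random.seed(1)

def model(x1, x2):
    return x1 + 3.0 * x2 ** 2

def ecdf_distance(a, b, grid):
    """Mean absolute difference between two empirical CDFs on a grid."""
    def F(sample, t):
        return sum(v <= t for v in sample) / len(sample)
    return sum(abs(F(a, t) - F(b, t)) for t in grid) / len(grid)

N, M = 400, 50                      # sample sizes, kept small for speed
base = [model(random.random(), random.random()) for _ in range(N)]
grid = [i / 10 for i in range(41)]  # evaluation points on [0, 4]

def importance(index):
    """Average CDF discrepancy when input `index` is fixed at random values."""
    acc = 0.0
    for _ in range(M):
        fixed = random.random()
        if index == 0:
            cond = [model(fixed, random.random()) for _ in range(N)]
        else:
            cond = [model(random.random(), fixed) for _ in range(N)]
        acc += ecdf_distance(base, cond, grid)
    return acc / M

d1, d2 = importance(0), importance(1)
print(d1, d2)   # the X2 measure should come out larger
```

Because fixing the dominant input shifts the output CDF much more than fixing the weak one, the estimated measure ranks X2 above X1, which is the kind of ranking such a moment-independent measure is designed to deliver.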

  15. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel, with each subunit operating independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision-making unit (DMU) and construct the production possibility set (PPS) generated by these DMUs, in which the frontier points are considered efficient DMUs. We then introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.
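In the simplest single-input, single-output case, the efficiency of a subunit (DMU) reduces to its output/input ratio normalized by the best observed ratio, and the parallel structure means a unit's input and output are the sums over its subunits. A minimal sketch with invented data (general DEA with multiple inputs and outputs requires a linear-programming solver, which is omitted here):

```python
# Single-input/single-output DEA (CCR) sketch: the efficiency of each subunit
# is its output/input ratio relative to the best ratio (the frontier).
# Data are invented for illustration.  Each production unit aggregates its
# parallel subunits by summing their inputs and outputs.

subunits = {           # name: (input, output)
    "A1": (2.0, 4.0),
    "A2": (3.0, 3.0),
    "B1": (4.0, 8.0),
    "B2": (1.0, 1.5),
}

best_ratio = max(y / x for x, y in subunits.values())
efficiency = {name: (y / x) / best_ratio for name, (x, y) in subunits.items()}

# Unit-level input/output are the sums over subunits (parallel structure).
unit_A = (subunits["A1"][0] + subunits["A2"][0],
          subunits["A1"][1] + subunits["A2"][1])

print(efficiency)   # A1 and B1 lie on the frontier (efficiency 1.0)
print(unit_A)       # → (5.0, 7.0)
```

Subunits with efficiency 1.0 are the frontier points; a sensitivity analysis of the kind described above then asks how much the data of an efficient subunit can be perturbed before it drops off the frontier.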

  16. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In RELAP5 analysis of small-break loss-of-coolant accidents, the resultant transient behavior is found to be quite sensitive to the nodalization selected for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, the predicted distribution of inventory around the primary is significantly affected by it. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, leaving less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  17. Scanning Auger microscopy for high lateral and depth elemental sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, E., E-mail: eugenie.martinez@cea.fr [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Yadav, P. [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Bouttemy, M. [Institut Lavoisier de Versailles, 45 av. des Etats-Unis, 78035 Versailles Cedex (France); Renault, O.; Borowik, Ł.; Bertin, F. [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France); Etcheberry, A. [Institut Lavoisier de Versailles, 45 av. des Etats-Unis, 78035 Versailles Cedex (France); Chabli, A. [CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9 (France)

    2013-12-15

Highlights: •SAM performances and limitations are illustrated on real practical cases such as the analysis of nanowires and nanodots. •High spatial elemental resolution is shown with the analysis of reference semiconducting Al0.7Ga0.3As/GaAs multilayers. •High in-depth elemental resolution is also illustrated: Auger depth profiling with low-energy ion beams reveals ultra-thin layers (∼1 nm). •Analysis of cross-sectional samples is another effective approach to obtaining in-depth elemental information. -- Abstract: Scanning Auger microscopy is currently gaining interest for investigating nanostructures and thin multilayer stacks developed for nanotechnologies. New-generation Auger nanoprobes combine high lateral (∼10 nm), energy (0.1%) and depth (∼2 nm) resolutions, thus offering the possibility of analyzing the elemental composition, as well as the chemical state, at the nanometre scale. We report here on the performances and limitations through practical examples from nanotechnology research. The spatial elemental sensitivity is illustrated with the analysis of Al0.7Ga0.3As/GaAs heterostructures, Si nanowires and SiC nanodots. Regarding the elemental in-depth composition, two effective approaches are presented: low-energy depth profiling to reveal ultra-thin layers (∼1 nm) and analysis of cross-sectional samples.

  18. The selectively bred high alcohol sensitivity (HAS) and low alcohol sensitivity (LAS) rats differ in sensitivity to nicotine.

    Science.gov (United States)

    de Fiebre, NancyEllen C; Dawson, Ralph; de Fiebre, Christopher M

    2002-06-01

    Studies in rodents selectively bred to differ in alcohol sensitivity have suggested that nicotine and ethanol sensitivities may cosegregate during selective breeding. This suggests that ethanol and nicotine sensitivities may in part be genetically correlated. Male and female high alcohol sensitivity (HAS), control alcohol sensitivity, and low alcohol sensitivity (LAS) rats were tested for nicotine-induced alterations in locomotor activity, body temperature, and seizure activity. Plasma and brain levels of nicotine and its primary metabolite, cotinine, were measured in these animals, as was the binding of [3H]cytisine, [3H]epibatidine, and [125I]alpha-bungarotoxin in eight brain regions. Both replicate HAS lines were more sensitive to nicotine-induced locomotor activity depression than the replicate LAS lines. No consistent HAS/LAS differences were seen on other measures of nicotine sensitivity; however, females were more susceptible to nicotine-induced seizures than males. No HAS/LAS differences in nicotine or cotinine levels were seen, nor were differences seen in the binding of nicotinic ligands. Females had higher levels of plasma cotinine and brain nicotine than males but had lower brain cotinine levels than males. Sensitivity to a specific action of nicotine cosegregates during selective breeding for differential sensitivity to a specific action of ethanol. The differential sensitivity of the HAS/LAS rats is due to differences in central nervous system sensitivity and not to pharmacokinetic differences. The differential central nervous system sensitivity cannot be explained by differences in the numbers of nicotinic receptors labeled in ligand-binding experiments. The apparent genetic correlation between ethanol and nicotine sensitivities suggests that common genes modulate, in part, the actions of both ethanol and nicotine and may explain the frequent coabuse of these agents.

  19. A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils

    Directory of Open Access Journals (Sweden)

    Xiaodong Xie

    2014-05-01

Full Text Available We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. The detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity and/or a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications.

  20. Ultra-Sensitive Elemental Analysis Using Plasmas 5. Speciation of Arsenic Compounds in Biological Samples by High Performance Liquid Chromatography-Inductively Coupled Plasma Mass Spectrometry System

    Science.gov (United States)

    Kaise, Toshikazu

Arsenic originating from the lithosphere is widely distributed in the environment. Many arsenicals in the environment occur as organic and methylated species. These arsenic compounds in drinking water or food products of marine origin are absorbed in the human digestive tract, metabolized in the human body, and excreted via the urine. Because arsenic shows varying biological aspects depending on its chemical species, the biological characteristics of each species must be determined. It is thought that some metabolic pathways for arsenic and some arsenic circulation exist in aqueous ecosystems. In this paper, the current status of the speciation analysis of arsenic by HPLC/ICP-MS (High Performance Liquid Chromatography-Inductively Coupled Plasma Mass Spectrometry) in environmental and biological samples is summarized using recent data.

  1. Position sensitive detection of neutrons in high radiation background field.

    Science.gov (United States)

    Vavrik, D; Jakubek, J; Pospisil, S; Vacik, J

    2014-01-01

We present the development of a high-resolution position-sensitive device for the detection of slow neutrons in an environment of extremely high γ and e⁻ radiation background. We make use of a planar silicon pixelated (pixel size: 55 × 55 μm²) spectroscopic Timepix detector adapted for neutron detection by means of a very thin ¹⁰B converter placed on the detector surface. We demonstrate that the electromagnetic radiation background can be discriminated from the neutron signal by exploiting the fact that each particle type produces characteristic ionization tracks in the pixelated detector. Particular tracks can be distinguished by their 2D shape (in the detector plane) and spectroscopic response using single-event analysis. A Cd sheet served as a thermal neutron stopper as well as an intensive source of gamma rays and energetic electrons. Highly efficient discrimination was successful even at a very low neutron-to-electromagnetic background ratio of about 10⁻⁴.

  2. Hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole tandem mass spectrometry for highly rapid and sensitive analysis of underivatized amino acids in functional foods.

    Science.gov (United States)

    Zhou, Guisheng; Pang, Hanqing; Tang, Yuping; Yao, Xin; Mo, Xuan; Zhu, Shaoqing; Guo, Sheng; Qian, Dawei; Qian, Yefei; Su, Shulan; Zhang, Li; Jin, Chun; Qin, Yong; Duan, Jin-ao

    2013-05-01

This work presents a new analytical methodology based on hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole tandem mass spectrometry in multiple-reaction monitoring mode for the analysis of 24 underivatized free amino acids (FAAs) in functional foods. The proposed method, reported here for the first time, was validated by assessing the matrix effects, linearity, limits of detection and quantification, precision, repeatability, stability and recovery of all target compounds, and it was used to determine the nutritional FAA content of ginkgo seeds and further elucidate the nutritional value of this functional food. The results showed that ginkgo seed is a good source of FAAs, with high levels of several essential FAAs, and has good nutritional value. Furthermore, principal component analysis was performed to classify the ginkgo seed samples on the basis of the 24 FAAs. The samples clustered mainly into three groups, similar to their classification by production area. Overall, the presented method should be useful for the investigation of amino acids in edible plants and agricultural products.

  3. Expression profiling analysis: Uncoupling protein 2 deficiency improves hepatic glucose, lipid profiles and insulin sensitivity in high-fat diet-fed mice by modulating expression of genes in peroxisome proliferator-activated receptor signaling pathway.

    Science.gov (United States)

    Zhou, Mei-Cen; Yu, Ping; Sun, Qi; Li, Yu-Xiu

    2016-03-01

Uncoupling protein 2 (UCP2) is an important mitochondrial inner membrane protein associated with glucose and lipid metabolism and is widely expressed in many tissues, including hepatocytes. The present study aimed to explore the impact of UCP2 deficiency on glucose and lipid metabolism and insulin sensitivity, and its effect on liver-associated signaling pathways, by expression profiling analysis. Four-week-old male UCP2-/- mice and UCP2+/+ mice were randomly assigned to four groups: UCP2-/- on a high-fat diet, UCP2-/- on a normal chow diet, UCP2+/+ on a high-fat diet and UCP2+/+ on a normal chow diet. The genes differentially expressed among the four groups at the 16th week were identified by Affymetrix gene array. The results of the intraperitoneal glucose tolerance test and insulin tolerance test showed that blood glucose and β-cell function were improved in the UCP2-/- group on the high-fat diet. Enhanced insulin sensitivity was observed in the UCP2-/- group. The differentially expressed genes were mapped to 23 pathways (P < …) on a high-fat diet. The upregulation of genes in the PPAR signaling pathway could explain our finding that UCP2 deficiency ameliorated insulin sensitivity. The manipulation of UCP2 protein expression could represent a new strategy for the prevention and treatment of diabetes.

  4. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    International Nuclear Information System (INIS)

    Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C

    2015-01-01

The achievement of highly sensitive and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependence on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness in the range of 150 nm to 400 nm would increase both the microcantilever bending during the biorecognition process and the optical sensitivity, up to 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than in other similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could lead to misinterpretation of the readout signal. (paper)

  5. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  6. Global sensitivity analysis in wind energy assessment

    Science.gov (United States)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
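The contrast between two of the sampling strategies mentioned (LHS versus PRS) can be sketched in one dimension: Latin hypercube sampling places exactly one point in each of n equal strata of the input range, whereas pseudo-random sampling may leave some strata empty. (Sobol' sequences require a dedicated generator and are omitted from this sketch.)

```python
import random

# One-dimensional comparison of Latin hypercube sampling (LHS) and
# pseudo-random sampling (PRS): LHS guarantees one point per stratum.
random.seed(0)

def lhs_1d(n):
    """One sample in each of n equal strata of [0, 1), randomly placed."""
    return [(i + random.random()) / n for i in range(n)]

def prs_1d(n):
    return [random.random() for _ in range(n)]

def occupied_strata(sample, n):
    return len({int(u * n) for u in sample})

n = 20
lhs = lhs_1d(n)
prs = prs_1d(n)
print(occupied_strata(lhs, n), occupied_strata(prs, n))
```

The stratification is why LHS typically needs far fewer model evaluations than plain pseudo-random sampling to reach the same accuracy in the estimated sensitivity indices.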

  7. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides chances to exclude extra-influential DMUs (Decision Making Units) and find extra-ordinal DMUs, and (2) it includes the functionality of traditional DEA and Super-DEA so that it can deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.

  8. High-Sensitivity GaN Microchemical Sensors

    Science.gov (United States)

    Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas

    2009-01-01

    Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operation parameters should be used for high sensitivity detection.

  9. Development of a highly sensitive lithium fluoride thermoluminescence dosimeter

    International Nuclear Information System (INIS)

    Moraes da Silva, Teresinha de; Campos, Leticia Lucente

    1995-01-01

In recent times, LiF:Mg,Cu,P thermoluminescent phosphor has been used increasingly for radiation monitoring owing to its high sensitivity and ease of preparation. The Dosimetric Materials Production Laboratory of IPEN (Nuclear Energy Institute) has developed a simple method for obtaining high-sensitivity LiF. The preparation method is described. (author). 4 refs., 1 fig., 1 tab

  10. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    OpenAIRE

    Harry R. Millwater; R. Wesley Osborn

    2006-01-01

    A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or su...

  11. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited as the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The resulting estimate of the hazard is 1.39 × 10⁻³, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years

  12. A highly sensitive chemical gas detecting device based on N-doped ZnO as a modified nanostructure media: A DFT+NBO analysis

    Science.gov (United States)

    Abbasi, Amirali; Sardroodi, Jaber Jahanbin

    2018-02-01

We present a density functional theory study of the adsorption of O3 and NO2 molecules on ZnO nanoparticles. Various adsorption geometries of O3 and NO2 over the nanoparticles were considered. For both O3 and NO2 adsorption systems, the adsorption on the N-doped nanoparticle was found to be more energetically favorable than that on the pristine one; the N-doped ZnO is therefore the better candidate for an O3 and NO2 detection device. In all cases, the binding sites were located on the zinc atoms of the nanoparticle. Charge analysis based on natural bond orbital (NBO) analysis indicates that charge is transferred from the surface to the adsorbed molecule. The projected densities of states of the interacting atoms reveal the formation of chemical bonds in the interface region. Molecular orbitals of the adsorption systems indicate that the HOMOs are mainly localized on the adsorbed O3 and NO2 molecules, whereas the electronic densities in the LUMOs are dominant at the ZnO nanocrystal surface. By examining the distribution of spin densities, we found that the magnetization is mainly located on the adsorbed molecules. For the NO2 adsorbate, the symmetric and asymmetric stretches are shifted to lower frequencies, while the bending mode is shifted to a higher frequency. Our DFT results thus provide a theoretical basis for the enhanced adsorption of O3 and NO2 on N-doped ZnO nanoparticles, guiding the design and development of innovative and highly efficient sensor devices for O3 and NO2 recognition.

  13. Triggers for a high sensitivity charm experiment

    International Nuclear Information System (INIS)

    Christian, D.C.

    1994-07-01

Any future charm experiment clearly should implement an E_T trigger and a μ trigger. In order to reach the 10⁸ reconstructed charm level for hadronic final states, a high-quality vertex trigger will almost certainly also be necessary. The best hope for the development of an offline-quality vertex trigger lies in further development of the ideas of data-driven processing pioneered by the Nevis/U. Mass. group

  14. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    OpenAIRE

    Karolina Chwialkowska; Urszula Korotko; Joanna Kosinska; Iwona Szarejko; Miroslaw Kwasniewski

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing ...

  15. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  16. Highly efficient electrocatalytic vapor generation of methylmercury based on the gold particles deposited glassy carbon electrode: A typical application for sensitive mercury speciation analysis in fish samples.

    Science.gov (United States)

    Shi, Meng-Ting; Yang, Xin-An; Qin, Li-Ming; Zhang, Wang-Bing

    2018-09-26

A gold particle deposited glassy carbon electrode (Au/GCE) was used for the first time in electrochemical vapor generation (ECVG) technology and demonstrated excellent catalytic properties for the electrochemical conversion of aqueous mercury, especially methylmercury (CH₃Hg⁺), to gaseous mercury. Systematic research has shown that highly consistent or distinctly different atomic fluorescence spectroscopy signals of CH₃Hg⁺ and Hg²⁺ can be achieved by controlling the electrolytic parameters of ECVG. Hereby, a new green and accurate method for mercury speciation analysis, based on the distinct electrochemical reaction behavior of Hg²⁺ and CH₃Hg⁺ on the modified electrode, was established for the first time. Furthermore, electrochemical impedance spectra and square wave voltammetry showed that the ECVG reaction of CH₃Hg⁺ may follow an electrocatalytic mechanism. Under the selected conditions, the limits of detection of Hg²⁺ and CH₃Hg⁺ are 5.3 ng L⁻¹ and 4.4 ng L⁻¹ for liquid samples and 0.53 pg mg⁻¹ and 0.44 pg mg⁻¹ for solid samples, respectively. The precision of 5 measurements is less than 6% for Hg²⁺ and CH₃Hg⁺ concentrations ranging from 0.2 to 15.0 μg L⁻¹. The accuracy and practicability of the proposed method were verified by analyzing the mercury content of a certified reference material and several fish and water samples.

  17. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE
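The uncertainty propagation SENSIT performs can be illustrated with the standard "sandwich rule" of sensitivity-uncertainty analysis, var(R) = SᵀCS. The sketch below is a minimal illustration with made-up 3-group numbers, not SENSIT's actual algorithm:

```python
# Sandwich rule: var(R) = S^T C S, where S is the vector of relative
# sensitivity coefficients of a response R and C is the relative
# covariance matrix of the cross sections. All numbers are hypothetical.

def response_variance(S, C):
    """Return S^T C S for sensitivity vector S and covariance matrix C."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Hypothetical 3-group sensitivity profile (dR/R per dsigma/sigma)
S = [0.8, 0.3, -0.1]
# Hypothetical relative covariance matrix (fractional uncertainties squared)
C = [[0.0025, 0.0010, 0.0],
     [0.0010, 0.0040, 0.0],
     [0.0,    0.0,    0.0100]]

var_R = response_variance(S, C)
std_R = var_R ** 0.5  # estimated relative standard deviation of the response
```

The off-diagonal terms of C are what distinguish this from simple quadrature summation: correlated group uncertainties can add to, or cancel against, each other.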

  18. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  19. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model

  20. An ESDIRK Method with Sensitivity Analysis Capabilities

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove

    2004-01-01

    of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...

  1. Low Power and High Sensitivity MOSFET-Based Pressure Sensor

    International Nuclear Information System (INIS)

    Zhang Zhao-Hua; Ren Tian-Ling; Zhang Yan-Hong; Han Rui-Rui; Liu Li-Tian

    2012-01-01

    Based on the stress-sensitive behavior of the metal-oxide-semiconductor field-effect transistor (MOSFET), a low-power MOSFET pressure sensor is proposed. Compared with the traditional piezoresistive pressure sensor, the present sensor performs well in both sensitivity and power consumption: the sensitivity of the MOSFET sensor is raised by 87%, while the power consumption is decreased by 20%. (cross-disciplinary physics and related areas of science and technology)

  2. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  3. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
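Global sensitivity workflows of this kind often begin with a cheap screening step before any expensive variance decomposition. As an illustration of that general idea (not the CPM workflow used in this paper), the sketch below implements the Morris elementary-effects method on a toy model; the model and all parameter values are hypothetical:

```python
import random

def elementary_effects(f, k, r=20, delta=0.1, seed=0):
    """Morris screening: mean absolute elementary effect (mu*) for each of
    k parameters of f on the unit hypercube, averaged over r random points."""
    rng = random.Random(seed)
    mu_star = [0.0] * k
    for _ in range(r):
        # Random base point, kept in [0, 1 - delta] so x_i + delta stays valid.
        x = [rng.random() * (1 - delta) for _ in range(k)]
        base = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(f(xp) - base) / delta
    return [m / r for m in mu_star]

# Toy model: output depends strongly on x0, weakly on x1, not at all on x2.
model = lambda x: 10 * x[0] + 0.5 * x[1]
mu = elementary_effects(model, k=3)  # ranks x0 > x1 > x2
```

For a linear model the elementary effects equal the coefficients exactly; for non-linear models mu* gives an inexpensive importance ranking that suggests which parameters merit a full variance-based analysis.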

  4. High pressure-sensitive gene expression in Lactobacillus sanfranciscensis

    Directory of Open Access Journals (Sweden)

    R.F. Vogel

    2005-08-01

    Full Text Available Lactobacillus sanfranciscensis is a Gram-positive lactic acid bacterium used in food biotechnology. It is necessary to investigate many aspects of a model organism to elucidate mechanisms of stress response, to facilitate preparation, application and performance in food fermentation, to understand mechanisms of inactivation, and to identify novel tools for high pressure biotechnology. To investigate the mechanisms of the complex bacterial response to high pressure we have analyzed changes in the proteome and transcriptome by 2-D electrophoresis, and by microarrays and real time PCR, respectively. More than 16 proteins were found to be differentially expressed upon high pressure stress and were compared to those sensitive to other stresses. Except for one apparently high pressure-specific stress protein, no pressure-specific stress proteins were found, and the proteome response to pressure was found to differ from that induced by other stresses. Selected pressure-sensitive proteins were partially sequenced and their genes were identified by reverse genetics. In a transcriptome analysis of a redundancy cleared shot gun library, about 7% of the genes investigated were found to be affected. Most of them appeared to be up-regulated 2- to 4-fold and these results were confirmed by real time PCR. Gene induction was shown for some genes up-regulated at the proteome level (clpL/groEL/rbsK, while the response of others to high hydrostatic pressure at the transcriptome level seemed to differ from that observed at the proteome level. The up-regulation of selected genes supports the view that the cell tries to compensate for pressure-induced impairment of translation and membrane transport.

  5. Superconducting Accelerating Cavity Pressure Sensitivity Analysis

    International Nuclear Information System (INIS)

    Rodnizki, J.; Horvits, Z.; Ben Aliz, Y.; Grin, A.; Weissman, L.

    2014-01-01

    The sensitivity of the cavity was evaluated and is fully consistent with the measured values. It was found that the tuning system (the fog structure) makes a significant contribution to the cavity sensitivity. By using ribs or by modifying the rigidity of the fog we may reduce the HWR sensitivity. During cool-down and warm-up the stresses on the HWR must be analyzed to avoid plastic deformation of the HWR, since the yield strength of niobium is an order of magnitude lower at room temperature

  6. Analysis of hepatitis B surface antigen (HBsAg) using high-sensitivity HBsAg assays in hepatitis B virus carriers in whom HBsAg seroclearance was confirmed by conventional assays.

    Science.gov (United States)

    Ozeki, Itaru; Nakajima, Tomoaki; Suii, Hirokazu; Tatsumi, Ryoji; Yamaguchi, Masakatsu; Kimura, Mutsuumi; Arakawa, Tomohiro; Kuwata, Yasuaki; Ohmura, Takumi; Hige, Shuhei; Karino, Yoshiyasu; Toyota, Joji

    2018-02-01

    We investigated the utility of high-sensitivity hepatitis B surface antigen (HBsAg) assays compared with conventional HBsAg assays. Using serum samples from 114 hepatitis B virus (HBV) carriers in whom HBsAg seroclearance was confirmed by conventional HBsAg assays (cut-off value, 0.05 IU/mL), the amount of HBsAg was re-examined by high-sensitivity HBsAg assays (cut-off value, 0.005 IU/mL). Cases negative for HBsAg in both assays were defined as consistent cases, and cases positive for HBsAg in the high-sensitivity HBsAg assay only were defined as discrepant cases. There were 55 (48.2%) discrepant cases, and the range of HBsAg titers determined by high-sensitivity HBsAg assays was 0.005-0.056 IU/mL. Multivariate analysis showed that the presence of nucleos(t)ide analog therapy, liver cirrhosis, and negative anti-HBs contributed to the discrepancies between the two assays. Cumulative anti-HBs positivity rates among discrepant cases were 12.7%, 17.2%, 38.8%, and 43.9% at baseline, 1 year, 3 years, and 5 years, respectively, whereas the corresponding rates among consistent cases were 50.8%, 56.0%, 61.7%, and 68.0%, respectively. Hepatitis B virus DNA negativity rates were 56.4% and 81.4% at baseline, 51.3% and 83.3% at 1 year, and 36.8% and 95.7% at 3 years, among discrepant and consistent cases, respectively. Hepatitis B surface antigen reversion was observed only in discrepant cases. Re-examination by high-sensitivity HBsAg assays revealed that HBsAg was positive in approximately 50% of cases. Cumulative anti-HBs seroconversion rates and HBV-DNA seroclearance rates were lower in these cases, suggesting a population at risk for HBsAg reversion. © 2017 The Japan Society of Hepatology.

  7. Derivative based sensitivity analysis of gamma index

    Directory of Open Access Journals (Sweden)

    Biplab Sarkar

    2015-01-01

    Full Text Available Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods, to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if it is <1, the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing 1 mm distance error and 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and satisfies the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which again satisfies the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and is obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD′, δD″) between these two curves were derived and used as the
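The γ computation described above can be sketched for 1-D profiles as follows. This is a minimal illustration, not the authors' code: the profiles and tolerances are hypothetical, and the dose difference is normalized by an absolute dose tolerance for simplicity (clinical implementations usually normalize to a percentage of a reference dose):

```python
def gamma_index(ref, evalp, dd_tol=3.0, dta_tol=3.0):
    """1-D global gamma for each (position, dose) reference point.
    dd_tol is the dose-difference tolerance in absolute dose units,
    dta_tol the distance-to-agreement tolerance in mm. A point passes
    when its gamma value is <= 1."""
    gammas = []
    for xr, dr in ref:
        # Minimize the combined DD/DTA metric over all evaluated points.
        g2 = min(((dr - de) / dd_tol) ** 2 + ((xr - xe) / dta_tol) ** 2
                 for xe, de in evalp)
        gammas.append(g2 ** 0.5)
    return gammas

# Identical profiles: every reference point passes with gamma = 0.
ref = [(0.0, 100.0), (1.0, 95.0), (2.0, 90.0)]
same = gamma_index(ref, ref, dd_tol=1.0, dta_tol=1.0)
# A 1 mm spatial shift: each point finds its matching dose exactly 1 mm away.
shifted = [(x + 1.0, d) for x, d in ref]
g = gamma_index(ref, shifted, dd_tol=1.0, dta_tol=1.0)
```

Note that this minimum-over-points search is exactly the behavior the abstract criticizes: a sawtooth evaluated curve can pass every point while being qualitatively poor, because only the minimum gamma value is scored.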

  8. MOVES2010a regional level sensitivity analysis

    Science.gov (United States)

    2012-12-10

    This document discusses the sensitivity of various input parameter effects on emission rates using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...

  9. A new method of removing the high value feedback resistor in the charge sensitive preamplifier

    International Nuclear Information System (INIS)

    Xi Deming

    1993-01-01

    A new method of removing the high value feedback resistor in the charge sensitive preamplifier is introduced. The circuit analysis of this novel design is described and the measured performances of a practical circuit are provided

  10. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
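A minimal sample-entropy implementation conveys what the SampEn statistic used above measures. This is a textbook-style sketch (template counts taken over all n−m+1 windows, Chebyshev distance, self-matches excluded), not the DASim code; the series and tolerance r are illustrative:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of matching length-m templates,
    A pairs of matching length-(m+1) templates (Chebyshev distance <= r,
    self-matches excluded). Lower values mean a more regular series."""
    n = len(series)

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

regular = sample_entropy([0.0, 1.0] * 20)                    # periodic: low SampEn
irregular = sample_entropy([math.sin(i * i) for i in range(40)])  # erratic: higher
```

Tuning an activity's regularity, as described above, then amounts to adjusting schedule parameters until the resulting SampEn hits a target value.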

  11. Passenger Sharing of the High-Speed Railway from Sensitivity Analysis Caused by Price and Run-time Based on the Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2013-09-01

    Full Text Available Purpose: Nowadays, governments around the world are actively constructing high-speed railways; it is therefore significant to research this increasingly prevalent mode of transport. Design/methodology/approach: In this paper, we simulate the process of the passenger's travel mode choice by adjusting the ticket fare and the run-time, based on a multi-agent system (MAS). Findings: From the research we conclude that appropriately increasing the run-time and reducing the ticket fare to some extent are effective ways to enhance the passenger share of the high-speed railway. Originality/value: We hope it can provide policy recommendations for the railway sectors in developing long-term plans for high-speed railways in the future.

  12. Sensitivity Analysis of BLISK Airfoil Wear †

    Directory of Open Access Journals (Sweden)

    Andreas Kellersmann

    2018-05-01

    Full Text Available The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one and leads to changed aerodynamic behavior, and therefore to a change in performance. The maintenance of BLISKs (blade-integrated disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can have a stronger influence on compressor performance than in a conventional HPC. To ensure effective maintenance of BLISKs, the impact of coupled misshaped blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. A design of experiments (DoE) is therefore performed to identify the geometric properties that lead to a reduction in performance. It is shown that the effect of coupled variances depends on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the maximum profile camber as coupled parameters are identified as the most important parameters for all operating points.

  13. High-Sensitivity Temperature-Independent Silicon Photonic Microfluidic Biosensors

    Science.gov (United States)

    Kim, Kangbaek

    Optical biosensors that can precisely quantify the presence of specific molecular species in real time without the need for labeling have seen increased use in the drug discovery industry and in molecular biology in general. Of the many possible optical biosensors, the TM-mode Si biosensor is shown to be very attractive in sensing applications because of its large field amplitude on the surface and cost-effective CMOS VLSI fabrication. Noise is the most fundamental factor limiting sensor performance in the development of high-sensitivity biosensors, and noise reduction techniques require careful study and analysis. One such example stems from thermal fluctuations. Generally, SOI biosensors are vulnerable to ambient temperature fluctuations because of the large thermo-optic coefficient of silicon (~2×10⁻⁴ RIU/K), typically requiring another reference ring and readout sequence to compensate for temperature-induced noise. To address this problem, we designed sensors with a novel TM-mode shallow-ridge waveguide that provides large surface amplitude for both bulk and surface sensing. With proper design, this also provides large optical confinement in the aqueous cladding, which renders the device athermal through the negative thermo-optic coefficient of water (~ −1×10⁻⁴ RIU/K), demonstrating cancellation of thermo-optic effects for aqueous-solution operation near 300 K. Additional limitations resulting from mechanical actuator fluctuations, the stability of tunable lasers, and the large 1/f noise of lasers and sensor electronics can also limit biosensor performance. Here we also present a simple harmonic feedback readout technique that obviates the need for spectrometers and tunable lasers. This feedback technique reduces the impact of 1/f noise to enable high sensitivity, and a DSP lock-in with a 256 kHz sampling rate can provide monitoring down to microsecond time scales for fast transitions in biomolecular concentration, with potential for small volume and low cost.
In this dissertation, a novel

  14. NPV Sensitivity Analysis: A Dynamic Excel Approach

    Science.gov (United States)

    Mangiero, George A.; Kraten, Michael

    2017-01-01

    Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
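The NPV sensitivity the paper explores with dynamic Excel formulas can be sketched numerically as well; the cash flows and discount rate below are hypothetical:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def npv_rate_sensitivity(rate, cashflows, h=1e-6):
    """Central-difference estimate of dNPV/d(rate)."""
    return (npv(rate + h, cashflows) - npv(rate - h, cashflows)) / (2 * h)

cfs = [-1000.0, 400.0, 400.0, 400.0]     # hypothetical project cash flows
base = npv(0.10, cfs)                    # NPV at a 10% discount rate
sens = npv_rate_sensitivity(0.10, cfs)   # negative: NPV falls as the rate rises
```

Repeating the derivative estimate for each input (rate, individual cash flows, horizon) produces exactly the kind of interdependent sensitivity picture the paper builds graphically in Excel.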

  15. Sensitivity Analysis for Multidisciplinary Systems (SAMS)

    Science.gov (United States)

    2016-12-01

    Approved for public release; distribution is unlimited. [The record contains only fragments of presentation content: a server/client code listing (Python with ZeroMQ), a Boeing example application, a 2011 Structures/Materials Conference reference, and a citation of Cross, D. M., "Local continuum sensitivity method for shape design derivatives using spatial gradient reconstruction" (dissertation).]

  16. Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion

    International Nuclear Information System (INIS)

    Garcia-Cabrejo, Oscar; Valocchi, Albert

    2014-01-01

    Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are applied to each output variable separately, leading to a large number of sensitivity indices that show a high degree of redundancy and make the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: the output decomposition approach [9] and the covariance decomposition approach [14], but they are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for an efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE
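For intuition about what the Sobol indices measure, the sketch below estimates first-order indices by plain pick-and-freeze Monte Carlo, the expensive sampling route whose cost PCE post-processing is designed to avoid. The test model is a made-up additive function and the sample size is illustrative:

```python
import random

def first_order_sobol(f, k, n=20000, seed=1):
    """Pick-and-freeze Monte Carlo estimate of the first-order Sobol
    indices S_i = V_i / Var(Y) for inputs uniform on [0, 1]^k, using the
    Saltelli-style estimator V_i ~ mean(yB * (yAB_i - yA))."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(k):
        # AB_i: matrix A with column i taken from B.
        yABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        Vi = sum(yB[j] * (yABi[j] - yA[j]) for j in range(n)) / n
        S.append(Vi / var)
    return S

# Toy additive model: x0 contributes ~99% of the output variance.
S = first_order_sobol(lambda x: x[0] + 0.1 * x[1], k=2)
```

The (k+2)·n model evaluations this requires are exactly why meta-model shortcuts such as PCE or SVR (record 3 below) matter for expensive simulators.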

  17. Systemization of burnup sensitivity analysis code (2) (Contract research)

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2008-08-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, both for plant economic efficiency through rationally high-performance cores and for reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished through the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Simply unifying the computational components is not sufficient, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or recombined as the occasion demands

  18. A high-sensitivity neutron counter and waste-drum counting with the high-sensitivity neutron instrument

    International Nuclear Information System (INIS)

    Hankins, D.E.; Thorngate, J.H.

    1993-04-01

    At Lawrence Livermore National Laboratory (LLNL), a highly sensitive neutron counter was developed that can detect and accurately measure the neutrons from small quantities of plutonium or from other low-level neutron sources. This neutron counter was originally designed to survey waste containers leaving the Plutonium Facility. However, it has proven to be useful in other research applications requiring a high-sensitivity neutron instrument

  19. Sensitivity Analysis of Temperature Control Parameters and Study of the Simultaneous Cooling Zone during Dam Construction in High-Altitude Regions

    Directory of Open Access Journals (Sweden)

    Zhenhong Wang

    2015-01-01

    Full Text Available There are unprecedented difficulties in building concrete gravity dams in the high-altitude province of Tibet, with problems caused by a lack of experience and technology, unique weather conditions, and the adoption of construction materials that are disadvantageous for temperature control and crack prevention. Based on an understanding of these problems and the need to build a gravity dam in Tibet, the 3D finite element method is used to study temperature control and crack prevention of the dam during construction. The calculation under the recommended temperature control measures and standards shows that the height and number of simultaneous cooling zones have the most obvious influence on concrete stress; it is therefore suggested to increase the height of the simultaneous cooling zone to decrease the stress caused by the temperature gradient between adjoining layers, so as to raise the safety level of the whole project. The research methods and ideas used in this project have significant value and can be taken as references in similar projects in high-altitude regions.

  20. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
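The forward sensitivity idea described above, augmenting the governing equations with equations for the parameter derivatives and integrating both together, can be sketched on a scalar ODE. The decay model below is an illustrative stand-in, not the paper's 1-D flow equations, and explicit Euler is used only for brevity:

```python
import math

def decay_with_sensitivity(p=2.0, t_end=1.0, n=10000):
    """Integrate dy/dt = -p*y, y(0) = 1, together with its forward
    sensitivity s = dy/dp. Differentiating the ODE with respect to p gives
    ds/dt = -p*s - y, s(0) = 0; both are advanced with the same Euler step."""
    dt = t_end / n
    y, s = 1.0, 0.0
    for _ in range(n):
        y, s = y + dt * (-p * y), s + dt * (-p * s - y)
    return y, s

y, s = decay_with_sensitivity()
# Analytic solution for comparison: y = exp(-p*t), s = dy/dp = -t*exp(-p*t)
y_exact = math.exp(-2.0)
s_exact = -math.exp(-2.0)
```

The same augmentation applies to the time step itself: treating dt as a parameter yields a sensitivity equation whose solution estimates the accumulated discretization error, which is the feature the abstract highlights.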

  1. Sensitive determination of thiols in wine samples by a stable isotope-coded derivatization reagent d0/d4-acridone-10-ethyl-N-maleimide coupled with high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry analysis.

    Science.gov (United States)

    Lv, Zhengxian; You, Jinmao; Lu, Shuaimin; Sun, Weidi; Ji, Zhongyin; Sun, Zhiwei; Song, Cuihua; Chen, Guang; Li, Guoliang; Hu, Na; Zhou, Wu; Suo, Yourui

    2017-03-31

    As key aroma compounds, varietal thiols are the crucial odorants responsible for the flavor of wines. Quantitative analysis of thiols can provide crucial information on the aroma profiles of different wine styles. In this study, a rapid and sensitive method for the simultaneous determination of six thiols in wine using d₀/d₄-acridone-10-ethyl-N-maleimide (d₀/d₄-AENM) as a stable isotope-coded derivatization reagent (SICD) by high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) has been developed. Quantification of thiols was performed using d₄-AENM labeled thiols as internal standards (IS), followed by stable isotope dilution HPLC-ESI-MS/MS analysis. The AENM derivatization combined with multiple reaction monitoring (MRM) not only allowed trace analysis of thiols due to the extremely high sensitivity, but also efficiently corrected for matrix effects during HPLC-MS/MS and for fluctuations in MS/MS signal intensity due to the instrument. The internal standard calibration curves obtained for the six thiols were linear over the range 25-10,000 pmol/L (R² ≥ 0.9961). Detection limits (LODs) for most analytes were below 6.3 pmol/L. The proposed method was successfully applied to the simultaneous determination of the six thiols in wine samples, with precisions ≤3.5% and recoveries ≥78.1%. In conclusion, the developed method is expected to be a promising tool for the detection of trace thiols in wine and in other complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

    The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) the treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  3. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
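As context for the Sobol indices discussed above, the sketch below estimates first-order indices by plain Monte Carlo with a pick-freeze estimator for a toy linear model Y = 2·X1 + X2 with uniform inputs. The paper itself obtains the indices by post-processing SVR coefficients; this snippet only illustrates what the indices measure, and all names are illustrative.

```python
# First-order Sobol indices via pick-freeze Monte Carlo:
# S_i = Cov(Y, Y_i) / Var(Y), where Y_i keeps X_i and redraws the rest.
import random

def model(x1, x2):
    return 2.0 * x1 + x2

def sobol_first_order(n=50000, seed=0):
    rng = random.Random(seed)
    ya, y1, y2 = [], [], []
    for _ in range(n):
        a = [rng.random(), rng.random()]   # base sample
        b = [rng.random(), rng.random()]   # independent resample
        ya.append(model(*a))
        y1.append(model(a[0], b[1]))       # X1 frozen, X2 redrawn
        y2.append(model(b[0], a[1]))       # X2 frozen, X1 redrawn
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov_ratio = lambda yi: (sum(p * q for p, q in zip(ya, yi)) / n - mean ** 2) / var
    return cov_ratio(y1), cov_ratio(y2)

s1, s2 = sobol_first_order()
# Analytical values: S1 = Var(2*X1)/Var(Y) = (4/12)/(5/12) = 0.8, S2 = 0.2
print(round(s1, 2), round(s2, 2))
```

Meta-model approaches such as the SVR of this paper exist precisely because this brute-force estimator needs many model evaluations per index.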

  4. Automated sensitivity analysis using the GRESS language

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies

  5. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    Science.gov (United States)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight sensor fusion is a highly active field of research. Over recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.

  6. Highly sensitive and selective hyphenated technique (molecularly imprinted polymer solid-phase microextraction-molecularly imprinted polymer sensor) for ultra trace analysis of aspartic acid enantiomers.

    Science.gov (United States)

    Prasad, Bhim Bali; Srivastava, Amrita; Tiwari, Mahavir Prasad

    2013-03-29

    The present work relates to the combination of molecularly imprinted solid-phase microextraction and a complementary molecularly imprinted polymer sensor. The molecularly imprinted polymer grafted on a titanium dioxide modified silica fiber was used for microextraction, while the same polymer immobilized on a multiwalled carbon nanotubes/titanium dioxide modified pencil graphite electrode served as the detection tool. In both cases, surface-initiated polymerization was found to be advantageous for obtaining a nanometer-thin imprinted film. The modified silica fiber exhibited high adsorption capacity and enantioselective diffusion of aspartic acid isomers into their respective molecular cavities. This combination enabled double preconcentration of d- and l-aspartic acid, which helped in sensing both isomers in real samples without cross-selectivity or matrix complications. Taking into account the 6×10⁴-fold dilution of serum and 2×10³-fold dilution of cerebrospinal fluid required by the proposed method, the limit of detection for l-aspartic acid is 0.031 ng mL⁻¹. Also, taking into account the 50-fold dilution required by the proposed method, the limit of detection for d-aspartic acid is 0.031 ng mL⁻¹ in cerebrospinal fluid. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  8. Achieving sensitive, high-resolution laser spectroscopy at CRIS

    Energy Technology Data Exchange (ETDEWEB)

    Groote, R. P. de [Instituut voor Kern- en Stralingsfysica, KU Leuven (Belgium); Lynch, K. M., E-mail: kara.marie.lynch@cern.ch [EP Department, CERN, ISOLDE (Switzerland); Wilkins, S. G. [The University of Manchester, School of Physics and Astronomy (United Kingdom); Collaboration: the CRIS collaboration

    2017-11-15

    The Collinear Resonance Ionization Spectroscopy (CRIS) experiment, located at the ISOLDE facility, has recently performed high-resolution laser spectroscopy, with linewidths down to 20 MHz. In this article, we present the modifications to the beam line and the newly-installed laser systems that have made sensitive, high-resolution measurements possible. Highlights of recent experimental campaigns are presented.

  9. Analysis of RET promoter CpG island methylation using methylation-specific PCR (MSP), pyrosequencing, and methylation-sensitive high-resolution melting (MS-HRM): impact on stage II colon cancer patient outcome.

    Science.gov (United States)

    Draht, Muriel X G; Smits, Kim M; Jooste, Valérie; Tournier, Benjamin; Vervoort, Martijn; Ramaekers, Chantal; Chapusot, Caroline; Weijenberg, Matty P; van Engeland, Manon; Melotte, Veerle

    2016-01-01

    Already since the 1990s, promoter CpG island methylation markers have been considered promising diagnostic, prognostic, and predictive cancer biomarkers. However, so far, only a limited number of DNA methylation markers have been introduced into clinical practice. One reason why the vast majority of methylation markers do not translate into clinical applications is lack of independent validation of methylation markers, often caused by differences in methylation analysis techniques. We recently described RET promoter CpG island methylation as a potential prognostic marker in stage II colorectal cancer (CRC) patients of two independent series. In the current study, we analyzed the RET promoter CpG island methylation of 241 stage II colon cancer patients by direct methylation-specific PCR (MSP), nested-MSP, pyrosequencing, and methylation-sensitive high-resolution melting (MS-HRM). All primers were designed as close as possible to the same genomic region. In order to investigate the effect of different DNA methylation assays on patient outcome, we assessed the clinical sensitivity and specificity as well as the association of RET methylation with overall survival for three and five years of follow-up. Using direct-MSP and nested-MSP, 12.0 % (25/209) and 29.6 % (71/240) of the patients showed RET promoter CpG island methylation. Methylation frequencies detected by pyrosequencing were related to the threshold for positivity that defined RET methylation. Methylation frequencies obtained by pyrosequencing (threshold for positivity at 20 %) and MS-HRM were 13.3 % (32/240) and 13.8 % (33/239), respectively. The pyrosequencing threshold for positivity of 20 % showed the best correlation with MS-HRM and direct-MSP results. Nested-MSP detected RET promoter CpG island methylation in deceased patients with a higher sensitivity (33.1 %) compared to direct-MSP (10.7 %), pyrosequencing (14.4 %), and MS-HRM (15.4 %). While RET methylation frequencies detected by nested

  10. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
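The bootstrap-based convergence criterion described above can be sketched as follows: resample a set of per-sample sensitivity contributions and declare convergence when the bootstrap confidence interval of the index estimate is narrow enough. The index used here (the mean of mock elementary effects) and all names are illustrative assumptions, not the paper's implementation.

```python
# Bootstrap convergence check for a sampled sensitivity index.
import random

def bootstrap_ci_width(values, n_boot=1000, alpha=0.05, seed=0):
    """Width of the bootstrap (1 - alpha) confidence interval of the mean."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(values) for _ in values]
        stats.append(sum(resample) / len(resample))
    stats.sort()
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return hi - lo

def converged(sensitivity_samples, tol=0.05):
    """Convergence when the CI of the index estimate is below a tolerance."""
    return bootstrap_ci_width(sensitivity_samples) < tol

rng = random.Random(1)
effects = [rng.gauss(0.5, 0.1) for _ in range(200)]  # mock per-sample effects
print(converged(effects))
```

The same resampling can be applied to rankings and screenings by recomputing them on each bootstrap replicate, which is the distinction the study draws between value, ranking, and screening convergence.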

  11. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
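GRESS itself is a FORTRAN precompiler, so as a modern stand-in the sketch below illustrates the underlying idea of computer calculus with forward-mode automatic differentiation via dual numbers: derivatives propagate alongside values through ordinary arithmetic, so "derivative-taking capability" is added to a model without hand-coded sensitivity equations. The Dual class and toy model are illustrative only.

```python
# Forward-mode automatic differentiation with dual numbers (value, derivative).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(x):
    # toy model: y = 3*x*x + 2*x, so dy/dx = 6*x + 2
    return 3 * x * x + 2 * x

x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
y = model(x)
print(y.val, y.der)  # 16.0 14.0
```

Adjoint (reverse-mode) differentiation, also supported by GRESS, computes the same sensitivities but scales better when one output depends on many inputs.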

  12. A global sensitivity analysis of crop virtual water content

    Science.gov (United States)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5×5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
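The cell-level sensitivity index described above (relative change of VWC over relative change of the input parameter) can be sketched generically. The toy VWC model (evapotranspiration over yield, mirroring the ratio in the abstract) and all parameter values below are illustrative assumptions.

```python
# One-at-a-time sensitivity index:
# SI = (relative change of output) / (relative change of one parameter).

def sensitivity_index(model, params, name, rel_step=0.01):
    ref = model(**params)
    perturbed = dict(params, **{name: params[name] * (1 + rel_step)})
    out = model(**perturbed)
    return ((out - ref) / ref) / rel_step

def vwc(et, yield_):
    # toy virtual water content: seasonal evapotranspiration over actual yield
    return et / yield_

params = {"et": 400.0, "yield_": 5.0}
print(round(sensitivity_index(vwc, params, "et"), 3))      # ≈ 1.0 (direct)
print(round(sensitivity_index(vwc, params, "yield_"), 3))  # ≈ -0.99 (inverse)
```

The signs match the abstract's terminology: a parameter whose increase raises VWC has a positive (direct) index, while one whose increase lowers VWC has a negative (inverse) index.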

  13. Evaluation of cell count and classification capabilities in body fluids using a fully automated Sysmex XN equipped with high-sensitive Analysis (hsA) mode and DI-60 hematology analyzer system.

    Science.gov (United States)

    Takemura, Hiroyuki; Ai, Tomohiko; Kimura, Konobu; Nagasaka, Kaori; Takahashi, Toshihiro; Tsuchiya, Koji; Yang, Haeun; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Tabe, Yoko; Ohsaka, Akimichi

    2018-01-01

    The XN series automated hematology analyzer has been equipped with a body fluid (BF) mode to count and differentiate leukocytes in BF samples including cerebrospinal fluid (CSF). However, its diagnostic accuracy is not reliable for CSF samples with low cell concentration at the border between normal and pathologic level. To overcome this limitation, a new flow cytometry-based technology, termed "high sensitive analysis (hsA) mode," has been developed. In addition, the XN series analyzer has been equipped with the automated digital cell imaging analyzer DI-60 to classify cell morphology including normal leukocytes differential and abnormal malignant cells detection. Using various BF samples, we evaluated the performance of the XN-hsA mode and DI-60 compared to manual microscopic examination. The reproducibility of the XN-hsA mode showed good results in samples with low cell densities (coefficient of variation; % CV: 7.8% for 6 cells/μL). The linearity of the XN-hsA mode was established up to 938 cells/μL. The cell number obtained using the XN-hsA mode correlated highly with the corresponding microscopic examination. Good correlation was also observed between the DI-60 analyses and manual microscopic classification for all leukocyte types, except monocytes. In conclusion, the combined use of cell counting with the XN-hsA mode and automated morphological analyses using the DI-60 mode is potentially useful for the automated analysis of BF cells.

  14. Retinal sensitivity and choroidal thickness in high myopia.

    Science.gov (United States)

    Zaben, Ahmad; Zapata, Miguel Á; Garcia-Arumi, Jose

    2015-03-01

    To estimate the association between choroidal thickness in the macular area and retinal sensitivity in eyes with high myopia. This investigation was a cross-sectional study of patients with high myopia, all of whom had their retinal sensitivity measured with macular integrity assessment microperimetry. The choroidal thicknesses in the macular area were then measured by optical coherence tomography, and statistical correlations between their functionality and the anatomical structure, as assessed by both types of measurements, were analyzed. Ninety-six eyes from 77 patients with high myopia were studied. The patients had a mean age ± standard deviation of 38.9 ± 13.2 years, with spherical equivalent values ranging from -6.00 diopter to -20.00 diopter (8.74 ± 2.73 diopter). The mean central choroidal thickness was 159.00 ± 50.57. The mean choroidal thickness was directly correlated with sensitivity (r = 0.306; P = 0.004) and visual acuity but indirectly correlated with the spherical equivalent values and patient age. The mean sensitivity was not significantly correlated with the macular foveal thickness (r = -0.174; P = 0.101) or with the overall macular thickness (r = 0.103; P = 0.334); furthermore, the mean sensitivity was significantly correlated with visual acuity (r = 0.431; P < 0.001) and the spherical equivalent values (r = -0.306; P = 0.003). Retinal sensitivity in highly myopic eyes is directly correlated with choroidal thickness and does not seem to be associated with retinal thickness. Thus, in patients with high myopia, accurate measurements of choroidal thickness may provide more accurate information about this pathologic condition because choroidal thickness correlates to a greater degree with the functional parameters, patient age, and spherical equivalent values.

  15. Analysis of Hydrological Sensitivity for Flood Risk Assessment

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Sharma

    2018-02-01

    Full Text Available Because of its annual regional flooding, the Brahmaputra River has played an important role in the Pilot Basin Study (PBS) undertaken to help the Indian government maximize Integrated Water Resource Management (IWRM). The selected Kulsi River—a part of the Brahmaputra sub-basin—experienced severe floods in 2007 and 2008. In this study, the Rainfall-Runoff-Inundation (RRI) hydrological model was used to simulate the recent historical flood in order to understand and improve the integrated flood risk management plan. The ultimate objective was to evaluate the sensitivity of the hydrologic simulation to different Digital Elevation Model (DEM) sources, coupled with DEM smoothing techniques, with a particular focus on the comparison of river discharge and flood inundation extent. The sensitivity analysis showed that, among the input parameters, the RRI model is highly sensitive to Manning’s roughness coefficient values for flood plains, followed by the source of the DEM, and then soil depth. After parameter optimization, the smoothing filter proved more influential on the simulated inundation extent than on the simulated discharge at the outlet. Finally, the calibrated and validated RRI model simulations agreed well with the observed discharge and the Moderate Resolution Imaging Spectroradiometer (MODIS)-detected flood extents.

  16. BH3105 type neutron dose equivalent meter of high sensitivity

    International Nuclear Information System (INIS)

    Ji Changsong; Zhang Enshan; Yang Jianfeng; Zhang Hong; Huang Jiling

    1995-10-01

    It is noted that designing a neutron dose meter of high sensitivity is almost impossible within the frame of the traditional design principle, the 'absorption net principle'. Based on a newly proposed principle for obtaining neutron dose equi-biological effect adjustment, the 'absorption stick principle', a brand-new neutron dose-equivalent meter with high neutron sensitivity, BH3105, has been developed. Its sensitivity reaches 10 cps/(μSv·h⁻¹), which is 18-40 times higher than that of foreign products of the same kind and 10⁴ times higher than that of the domestic FJ342 neutron rem-meter. BH3105 has a measurement range from 0.1 μSv/h to 1 Sv/h, which is 1 or 2 orders of magnitude wider than that of the others. It has the advanced properties of gamma-resistance, energy response, orientation, etc. (6 tabs., 5 figs.)

  17. A high sensitivity nanomaterial based SAW humidity sensor

    Energy Technology Data Exchange (ETDEWEB)

    Wu, T-T; Chou, T-H [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Chen, Y-Y [Department of Mechanical Engineering, Tatung University, Taipei 104, Taiwan (China)], E-mail: wutt@ndt.iam.ntu.edu.tw

    2008-04-21

    In this paper, a highly sensitive humidity sensor is reported. The humidity sensor is configured by a 128°YX-LiNbO₃ based surface acoustic wave (SAW) resonator whose operating frequency is 145 MHz. A dual delay line configuration is realized to eliminate external temperature fluctuations. Moreover, because nanostructured materials possess a high surface-to-volume ratio, large penetration depth and fast charge diffusion rate, camphor sulfonic acid doped polyaniline (PANI) nanofibres are synthesized by the interfacial polymerization method and further deposited on the SAW resonator as a selective coating to enhance sensitivity. The humidity sensor is used to measure various relative humidities in the range 5-90% at room temperature. Results show that the PANI nanofibre based SAW humidity sensor exhibits excellent sensitivity and short-term repeatability.

  18. Performance of terahertz metamaterials as high-sensitivity sensor

    Science.gov (United States)

    He, Yanan; Zhang, Bo; Shen, Jingling

    2017-09-01

    A high-sensitivity sensor based on the resonant transmission characteristics of terahertz (THz) metamaterials was investigated, with the proposal and fabrication of rectangular bar arrays of THz metamaterials exhibiting a period of 180 μm on a 25 μm thick flexible polyimide. Varying the size of the metamaterial structure revealed that the length of the rectangular unit modulated the resonant frequency, which was verified by both experiment and simulation. The sensing characteristics upon varying the surrounding media in the sample were tested by simulation and experiment. Changing the surrounding medium from air to alcohol or oil produced resonant frequency redshifts of 80 GHz or 150 GHz, respectively, which indicates that the sensor possessed a high sensitivity of 667 GHz per refractive index unit. Finally, the influence of the sample substrate thickness on the sensor sensitivity was investigated by simulation. These results may serve as a reference for future sensor designs.

  19. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variation in outputs brought about by changes in parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and present closed-form expressions for two commonly-used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities on Markovian models.

  20. Advanced Fuel Cycle Economic Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, the nuclear power related cost study from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  1. Sensitive Spectroscopic Analysis of Biomarkers in Exhaled Breath

    Science.gov (United States)

    Bicer, A.; Bounds, J.; Zhu, F.; Kolomenskii, A. A.; Kaya, N.; Aluauee, E.; Amani, M.; Schuessler, H. A.

    2018-06-01

    We have developed a novel optical setup which is based on a high finesse cavity and absorption laser spectroscopy in the near-IR spectral region. In pilot experiments, spectrally resolved absorption measurements of biomarkers in exhaled breath, such as methane and acetone, were carried out using cavity ring-down spectroscopy (CRDS). With a 172-cm-long cavity, an effective optical path of 132 km was achieved. The CRDS technique is well suited for such measurements due to its high sensitivity and good spectral resolution. Detection limits of 8 ppbv for methane and 2.1 ppbv for acetone with spectral sampling of 0.005 cm⁻¹ were achieved, which made it possible to analyze multicomponent gas mixtures and to observe absorption peaks of ¹²CH₄ and ¹³CH₄. Further improvements of the technique have the potential to realize diagnostics of health conditions based on a multicomponent analysis of breath samples.
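The CRDS read-out can be sketched numerically: the sample's extinction follows from the ring-down times with and without the absorber, α = (1/c)(1/τ − 1/τ₀), and the effective path is c·τ₀. The ring-down times below are invented for illustration; only the ~132 km effective path is chosen to match the abstract.

```python
# Cavity ring-down spectroscopy: extinction from ring-down time change.
C = 2.998e8            # speed of light, m/s

def absorption_coefficient(tau, tau0):
    """Extinction (1/m) from ring-down times with (tau) and without (tau0)."""
    return (1.0 / tau - 1.0 / tau0) / C

tau0 = 4.4e-4          # empty-cavity ring-down, s (c*tau0 ~ 132 km path)
tau = 4.0e-4           # ring-down with an absorbing breath sample, s
alpha = absorption_coefficient(tau, tau0)
print(f"effective path {C * tau0 / 1e3:.0f} km, alpha = {alpha:.2e} 1/m")
```

Because the measurement is a decay rate rather than an intensity, it is immune to laser power fluctuations, which underlies the ppbv-level detection limits quoted above.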

  2. High-speed high-sensitivity infrared spectroscopy using mid-infrared swept lasers (Conference Presentation)

    Science.gov (United States)

    Childs, David T. D.; Groom, Kristian M.; Hogg, Richard A.; Revin, Dmitry G.; Cockburn, John W.; Rehman, Ihtesham U.; Matcher, Stephen J.

    2016-03-01

    Infrared spectroscopy is a highly attractive read-out technology for compositional analysis of biomedical specimens because of its unique combination of high molecular sensitivity without the need for exogenous labels. Traditional techniques such as FTIR and Raman have suffered from comparatively low speed and sensitivity; however, recent innovations are challenging this situation. Direct mid-IR spectroscopy is being sped up by innovations such as MEMS-based FTIR instruments with very high mirror speeds and supercontinuum sources producing very high sample irradiation levels. Here we explore another possible method - external cavity quantum cascade lasers (EC-QCLs) with high cavity tuning speeds (mid-IR swept lasers). Swept lasers have been heavily developed in the near-infrared, where they are used for non-destructive low-coherence imaging (OCT). We adapt these concepts in two ways. Firstly, by combining mid-IR quantum cascade gain chips with external cavity designs adapted from OCT, we achieve spectral acquisition rates approaching 1 kHz and demonstrate potential to reach 100 kHz. Secondly, we show that mid-IR swept lasers share a fundamental sensitivity advantage with near-IR OCT swept lasers. This makes them potentially able to achieve the same spectral SNR as an FTIR instrument in a time N times shorter (N being the number of spectral points) under otherwise matched conditions. This effect is demonstrated using measurements of a PDMS sample. The combination of potentially very high spectral acquisition rates, fundamental SNR advantage and the use of low-cost detector systems could make mid-IR swept lasers a powerful technology for high-throughput biomedical spectroscopy.

  3. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice

  4. Analysis of Sensitivity Experiments - An Expanded Primer

    Science.gov (United States)

    2017-03-08

    conducted with this purpose in mind. Due diligence must be paid to the structure of the dosage levels and to the number of trials. The chosen data...analysis. System reliability is of paramount importance for protecting both the investment of funding and human life. Failing to accurately estimate

  5. Sensitivity analysis of hybrid thermoelastic techniques

    Science.gov (United States)

    W.A. Samad; J.M. Considine

    2017-01-01

    Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...

  6. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with 'direct' and 'adjoint' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  7. Sensitive high performance liquid chromatographic method for the ...

    African Journals Online (AJOL)

    A new simple, sensitive, cost-effective and reproducible high performance liquid chromatographic (HPLC) method for the determination of proguanil (PG) and its metabolites, cycloguanil (CG) and 4-chlorophenylbiguanide (4-CPB) in urine and plasma is described. The extraction procedure is a simple three-step process ...

  8. Methylation-Sensitive High Resolution Melting (MS-HRM).

    Science.gov (United States)

    Hussmann, Dianna; Hansen, Lise Lotte

    2018-01-01

    Methylation-Sensitive High Resolution Melting (MS-HRM) is an in-tube, PCR-based method to detect methylation levels at specific loci of interest. A unique primer design facilitates a high sensitivity of the assays enabling detection of down to 0.1-1% methylated alleles in an unmethylated background. Primers for MS-HRM assays are designed to be complementary to the methylated allele, and a specific annealing temperature enables these primers to anneal both to the methylated and the unmethylated alleles thereby increasing the sensitivity of the assays. Bisulfite treatment of the DNA prior to performing MS-HRM ensures a different base composition between methylated and unmethylated DNA, which is used to separate the resulting amplicons by high resolution melting. The high sensitivity of MS-HRM has proven useful for detecting cancer biomarkers in a noninvasive manner in urine from bladder cancer patients, in stool from colorectal cancer patients, and in buccal mucosa from breast cancer patients. MS-HRM is a fast method to diagnose imprinted diseases and to clinically validate results from whole-epigenome studies. The ability to detect few copies of methylated DNA makes MS-HRM a key player in the quest for establishing links between environmental exposure, epigenetic changes, and disease.

  9. Aluminum nano-cantilevers for high sensitivity mass sensors

    DEFF Research Database (Denmark)

    Davis, Zachary James; Boisen, Anja

    2005-01-01

    We have fabricated Al nano-cantilevers using a very simple one mask contact UV lithography technique with lateral dimensions under 500 nm and vertical dimensions of approximately 100 nm. These devices are demonstrated as highly sensitive mass sensors by measuring their dynamic properties. Further...

  10. High sensitivity probe absorption technique for time-of-flight ...

    Indian Academy of Sciences (India)

    We report on a phase-sensitive probe absorption technique with high sensitivity, capable of detecting a few hundred ultra-cold atoms in flight in an observation time of a few milliseconds. The large signal-to-noise ratio achieved is sufficient for reliable measurements on low intensity beams of cold atoms.

  11. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
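
The distinction reviewed above can be made concrete with a toy black-box model (invented for illustration) where one input enters only quadratically: a local derivative at the nominal point calls it irrelevant, while a global, range-wide view shows it contributes output variance.

```python
import random

def model(x1, x2):
    # invented black box: x2 enters only quadratically
    return x1 + x2 ** 2

# Local analysis: finite differences at the nominal point (0, 0)
h = 1e-6
d1 = (model(h, 0.0) - model(0.0, 0.0)) / h   # ~1: x1 matters locally
d2 = (model(0.0, h) - model(0.0, 0.0)) / h   # ~0: x2 looks irrelevant locally

# Global analysis: output variance over the whole input range,
# with x2 varying versus frozen at its nominal value
def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

random.seed(1)
full   = [model(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50_000)]
frozen = [model(random.uniform(-1, 1), 0.0) for _ in range(50_000)]
print(d1, d2, var(full), var(frozen))   # x2 contributes variance globally
```

Freezing x2 noticeably shrinks the output variance even though its local derivative vanishes, which is exactly the kind of disagreement between the two approaches that motivates comparing them.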

  12. Improved sensitivity of ochratoxin A analysis in coffee using high-performance liquid chromatography with hybrid triple quadrupole-linear ion trap mass spectrometry (LC-QqQLIT-MS/MS).

    Science.gov (United States)

    Kokina, Aija; Pugajeva, Iveta; Bartkevics, Vadims

    2016-01-01

    A novel and sensitive method utilising high-performance liquid chromatography coupled to triple quadrupole-linear ion trap mass spectrometry (LC-QqQLIT-MS/MS) was developed in order to analyse the content of ochratoxin A (OTA) in coffee samples. The introduction of the triple-stage MS scanning mode (MS(3)) has been shown to greatly increase sensitivity and selectivity by eliminating the high chromatographic baseline caused by interference of complex coffee matrices. The analysis included a sample preparation procedure involving extraction of OTA using a methanol-water mixture, clean-up by immunoaffinity columns, and detection using the MS(3) scanning mode of LC-QqQLIT-MS/MS. The proposed method offered a good linear correlation (r(2) > 0.998) and excellent precision; the limit of detection for coffee beans and espresso beverages was 0.010 and 0.003 µg kg(-1), respectively. The developed procedure was compared with traditional methods employing liquid chromatography coupled to fluorescent and tandem quadrupole detectors in conjunction with QuEChERS and solid-phase extraction. The proposed method was successfully applied to the determination of OTA in 15 samples of coffee beans and in 15 samples of espresso coffee beverages obtained from the Latvian market. OTA was found in 10 samples of coffee beans and in two samples of espresso in the ranges of 0.018-1.80 µg kg(-1) and 0.020-0.440 µg l(-1), respectively. No samples exceeded the maximum permitted level of OTA in the European Union (5.0 µg kg(-1)).

  13. High-resolution, high-sensitivity NMR of nano-litre anisotropic samples by coil spinning

    Energy Technology Data Exchange (ETDEWEB)

    Sakellariou, D [CEA Saclay, DSM, DRECAM, SCM, Lab Struct and Dynam Resonance Magnet, CNRS URA 331, F-91191 Gif Sur Yvette, (France); Le Goff, G; Jacquinot, J F [CEA Saclay, DSM, DRECAM, SPEC: Serv Phys Etat Condense, CNRS URA 2464, F-91191 Gif Sur Yvette, (France)

    2007-07-01

    Nuclear magnetic resonance (NMR) can probe the local structure and dynamic properties of liquids and solids, making it one of the most powerful and versatile analytical methods available today. However, its intrinsically low sensitivity precludes NMR analysis of very small samples - as frequently used when studying isotopically labelled biological molecules or advanced materials, or as preferred when conducting high-throughput screening of biological samples or 'lab-on-a-chip' studies. The sensitivity of NMR has been improved by using static micro-coils, alternative detection schemes and pre-polarization approaches. But these strategies cannot be easily used in NMR experiments involving the fast sample spinning essential for obtaining well-resolved spectra from non-liquid samples. Here we demonstrate that inductive coupling allows wireless transmission of radio-frequency pulses and the reception of NMR signals under fast spinning of both detector coil and sample. This enables NMR measurements characterized by an optimal filling factor, very high radio-frequency field amplitudes and enhanced sensitivity that increases with decreasing sample volume. Signals obtained for nano-litre-sized samples of organic powders and biological tissue increase by almost one order of magnitude (or, equivalently, are acquired two orders of magnitude faster), compared to standard NMR measurements. Our approach also offers optimal sensitivity when studying samples that need to be confined inside multiple safety barriers, such as radioactive materials. In principle, the co-rotation of a micrometer-sized detector coil with the sample and the use of inductive coupling (techniques that are at the heart of our method) should enable highly sensitive NMR measurements on any mass-limited sample that requires fast mechanical rotation to obtain well-resolved spectra. The method is easy to implement on a commercial NMR set-up and exhibits improved performance with miniaturization ...

  14. NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method

    International Nuclear Information System (INIS)

    Ogbomo, Henry; Hahn, Anke; Geiler, Janina; Michaelis, Martin; Doerr, Hans Wilhelm; Cinatl, Jindrich

    2006-01-01

    The measurement of natural killer (NK) cell toxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances which could be toxic or radioactive. The effective killing of tumor cells was achieved by low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for the measurement of NK cell activity with small blood samples such as those obtained from pediatric patients

  15. Instruction manual for ORNL tandem high abundance sensitivity mass spectrometer

    International Nuclear Information System (INIS)

    Smith, D.H.; McKown, H.S.; Christie, W.H.; Walker, R.L.; Carter, J.A.

    1976-06-01

    This manual describes the physical characteristics of the tandem mass spectrometer built by Oak Ridge National Laboratory for the International Atomic Energy Agency. Specific requirements met include ability to run small samples, high abundance sensitivity, good precision and accuracy, and adequate sample throughput. The instrument is capable of running uranium samples as small as 10^-12 g and has an abundance sensitivity in excess of 10^6. Precision and accuracy are enhanced by a special sweep control circuit. Sample throughput is 6 to 12 samples per day. Operating instructions are also given

  16. A highly sensitive and specific assay for vertebrate collagenase

    International Nuclear Information System (INIS)

    Sodek, J.; Hurum, S.; Feng, J.

    1981-01-01

    A highly sensitive and specific assay for vertebrate collagenase has been developed using a [14C]-labeled collagen substrate and a combination of SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) and fluorography to identify and quantitate the digestion products. The assay was sufficiently sensitive to permit the detection and quantitation of collagenase activity in 0.1 μl of gingival sulcal fluid, and in samples of cell culture medium without prior concentration. The assay has also been used to detect the presence of inhibitors of collagenolytic enzymes in various cell culture fluids. (author)

  17. Dispersion sensitivity analysis & consistency improvement of APFSDS

    Directory of Open Access Journals (Sweden)

    Sangeeta Sharma Panda

    2017-08-01

    In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. Results of the data analysis are used in design modification of existing ammunition. A number of designs were evaluated numerically before freezing five designs for further soundings. These designs were critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. Results are validated by free-flight trials of the finalised design.

  18. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
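
The key property above, that one extra (adjoint) simulation yields sensitivities for every parameter, can be sketched on a toy linear "simulation" A(p)u = b with response J = c·u. The 2x2 system and all numbers are invented stand-ins for the FDTD solver, chosen so the gradient can be checked by finite differences.

```python
# One forward solve plus ONE adjoint solve gives dJ/dp_i for all parameters.

def solve2(A, b):
    # direct 2x2 solve via Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

p = [3.0, 2.0]                                  # design parameters (diagonal of A)
b, c = [1.0, 0.0], [0.0, 1.0]

def assemble(p):
    return [[p[0], 1.0], [1.0, p[1]]]

A = assemble(p)
u = solve2(A, b)                                # forward simulation
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # transpose of A
lam = solve2(At, c)                             # the single adjoint simulation

# dA/dp_i has a lone 1 at (i, i), so dJ/dp_i = -lam_i * u_i
grad = [-lam[i] * u[i] for i in range(2)]

# check against central differences (one pair of solves PER parameter)
def J(p):
    u = solve2(assemble(p), b)
    return c[0] * u[0] + c[1] * u[1]

h = 1e-6
fd = [(J([p[0] + h, p[1]]) - J([p[0] - h, p[1]])) / (2 * h),
      (J([p[0], p[1] + h]) - J([p[0], p[1] - h])) / (2 * h)]
print(grad, fd)   # the two gradients agree
```

The finite-difference check is exactly the "accurate but computationally expensive" comparison mentioned in the abstract: its cost grows with the number of parameters, while the adjoint cost does not.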

  19. Robust and sensitive analysis of mouse knockout phenotypes.

    Directory of Open Access Journals (Sweden)

    Natasha A Karp

    A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-Test or a 2-way ANOVA, in these situations give flawed results and should not be used. We explore the use of mixed models using worked examples from Sanger Mouse Genome Project focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data and compare to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype and will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
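
The batch confound described above can be simulated in a few lines of stdlib Python (all numbers invented). When controls and knockouts land in different batches, the naive group difference absorbs the batch drift; comparing within batches and averaging recovers the genotype effect, which is the idea a mixed model with batch as a random effect generalizes (real analyses would use, e.g., R's lme4 or statsmodels).

```python
import random, statistics

random.seed(7)

def measure(genotype_effect, batch_offset):
    # each batch shifts every measurement by its own offset
    return random.gauss(10.0 + genotype_effect + batch_offset, 0.5)

true_effect = 1.0
batches = {1: 2.0, 2: -2.0}          # batch-to-batch drift

# Flawed design: all controls in batch 1, all knockouts in batch 2
controls  = [measure(0.0, batches[1]) for _ in range(40)]
knockouts = [measure(true_effect, batches[2]) for _ in range(40)]
naive = statistics.mean(knockouts) - statistics.mean(controls)

# Batch-aware comparison: both genotypes measured within every batch,
# effect averaged across batches
per_batch = []
for b, off in batches.items():
    c = [measure(0.0, off) for _ in range(20)]
    k = [measure(true_effect, off) for _ in range(20)]
    per_batch.append(statistics.mean(k) - statistics.mean(c))
adjusted = statistics.mean(per_batch)
print(naive, adjusted)   # naive is badly biased; adjusted is near 1.0
```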

  20. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) built-in graphic package that shows parameter sensitivities while the RESRAD code is operational
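
Direct perturbation (method 2 above) is the simplest of the three: nudge each input by a small fraction and record the relative change in the output. The sketch below uses a made-up dose model; the parameter names and pathway form are illustrative, not RESRAD's.

```python
def dose(params):
    # invented pathway stand-in: dose ~ concentration * ingestion / retardation
    return params["conc"] * params["ingest"] / params["retard"]

base = {"conc": 2.0, "ingest": 0.5, "retard": 4.0}
d0 = dose(base)

sens = {}
for name in base:
    p = dict(base)
    p[name] *= 1.01                              # +1% perturbation
    sens[name] = (dose(p) - d0) / d0 / 0.01      # relative sensitivity coefficient
print(sens)   # +1 for multiplicative inputs, about -0.99 for the divisor
```

A relative sensitivity of 1 means a 1% input change produces a 1% output change, which makes coefficients comparable across parameters with different units.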

  1. A sensitivity analysis approach to optical parameters of scintillation detectors

    International Nuclear Information System (INIS)

    Ghal-Eh, N.; Koohi-Fayegh, R.

    2008-01-01

    In this study, an extended version of the Monte Carlo light transport code, PHOTRACK, has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of the light collection process in scintillators.

  2. sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)


    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study ... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  3. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
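
A first-order Sobol' index of the kind used in such studies can be estimated with a pick-freeze Monte Carlo scheme: correlate pairs of model runs that share one input but redraw the others. The two-input additive model below is an invented stand-in, not VarroaPop itself.

```python
import random

def model(x1, x2):
    # invented stand-in for a colony response to two stressors
    return x1 + 0.5 * x2

random.seed(42)
N = 100_000
y, y1 = [], []
for _ in range(N):
    x1 = random.uniform(-1, 1)
    # share x1, redraw x2: the correlation isolates x1's variance share
    y.append(model(x1, random.uniform(-1, 1)))
    y1.append(model(x1, random.uniform(-1, 1)))

my, my1 = sum(y) / N, sum(y1) / N
cov = sum((a - my) * (b - my1) for a, b in zip(y, y1)) / N
var = sum((a - my) ** 2 for a in y) / N
s1 = cov / var
print(s1)   # analytic first-order index for x1 here is 0.8
```

For this model Var(x1) = 1/3 and Var(Y) = 1.25/3, so the exact index is 0.8; the estimator converges to it as N grows.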

  4. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
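
The NRMSE metric used above can be sketched in a few lines. NRMSE has several common normalizations (mean, plant capacity, observed range); the observed range is used here as one common choice, and all numbers are hypothetical rather than from the study.

```python
import math

def nrmse(forecast, actual):
    # RMSE normalized by the observed range, expressed in percent
    rmse = math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual))
                     / len(actual))
    return 100.0 * rmse / (max(actual) - min(actual))

actual   = [0.0, 10.0, 30.0, 45.0, 20.0]   # hypothetical plant output, kW
forecast = [2.0,  8.0, 33.0, 40.0, 24.0]
print(nrmse(forecast, actual))   # about 7.57 %
```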

  5. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.

  6. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as ...

  7. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out parametric analysis as well as a tolerance limit based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit ...

  8. COSMO-SkyMed Very High Resolution Data in support of Key Site Monitoring: A novel approach for characterization of sensitive areas and change direction based on VHR-SAR Coherent Multi-temporal Analysis

    International Nuclear Information System (INIS)

    Britti, F.; Cesarano, L.; Costantini, M.; Gentile, V.; Minati, F.; Pietranera, L.

    2013-01-01

    The COSMO-SkyMed Constellation, four VHR Earth Observation SAR satellites, can be an extremely useful source of information for monitoring programs, and in particular for monitoring of nuclear facilities safeguards, ranging from environmental analysis to human activity characterization. Thanks to its very high revisit coupled with the all weather capability and its dawn to dusk operations, the COSMO-SkyMed constellation is an ideal tool for improving already existing VHR (Very High Resolution) optical satellites monitoring by enhancing classical change detection activities. Thanks to its multi-mode acquisition capability with resolution up to one meter, the COSMO-SkyMed constellation can cover large areas in a very short time to monitor nuclear sites and surrounding areas, thereby providing additional information for the potential detection of undeclared nuclear activities. In particular, thanks to the interferometric capabilities of the SAR sensor, coherence analysis introduces additional information closely related to the changes occurred and occurring over the area of interest within the desired time interval (up to one day at best conditions). Indeed, thanks to the high sensitivity to variations of this added-value product, available only with SAR data, guaranteed by the wavelength used by COSMO-SkyMed sensors (3 cm), in-time analysis through coherence can be a strong indicator of human activity, particularly over areas characterized by a stable environment (i.e. coherent areas), such as deserts/arid zones or ice or snow-covered areas. The aim of this work is to provide a detailed description of how COSMO-SkyMed data and e-GEOS added-value products are able to improve intelligence analysis over critical sites (and their surrounding areas), allowing: -) enhanced change detection through both amplitude and coherence information, -) high frequency site monitoring, -) data integration with other sources of information (optical or on-ground measurements). 
e-GEOS, a ...

  9. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the strict solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
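
The idea of estimating the strict solution from numerical solutions at different resolutions can be sketched with Richardson extrapolation. The polygon example and the assumption of a second-order-accurate scheme are ours, chosen so the extrapolation error can be checked exactly.

```python
import math

def approx_pi(n):
    # half-perimeter of an inscribed regular n-gon; error ~ O(1/n^2)
    return n * math.sin(math.pi / n)

coarse, fine = approx_pi(64), approx_pi(128)
# for a second-order method, halving the step cuts the error by 4,
# so the combination below cancels the leading error term
strict = fine + (fine - coarse) / 3.0
print(abs(coarse - math.pi), abs(fine - math.pi), abs(strict - math.pi))
```

The extrapolated value is orders of magnitude closer to the exact answer than either computed solution, and the difference (fine - coarse)/3 doubles as a quantitative error estimate for the fine solution.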

  10. Are inflationary predictions sensitive to very high energy physics?

    International Nuclear Information System (INIS)

    Burgess, C.P.; Lemieux, F.; Holman, R.; Cline, J.M.

    2003-01-01

    It has been proposed that the successful inflationary description of density perturbations on cosmological scales is sensitive to the details of physics at extremely high (trans-Planckian) energies. We test this proposal by examining how inflationary predictions depend on higher-energy scales within a simple model where the higher-energy physics is well understood. We find the best of all possible worlds: inflationary predictions are robust against the vast majority of high-energy effects, but can be sensitive to some effects in certain circumstances, in a way which does not violate ordinary notions of decoupling. This implies both that the comparison of inflationary predictions with CMB data is meaningful, and that it is also worth searching for small deviations from the standard results in the hopes of learning about very high energies. (author)

  11. Design of highly sensitive multichannel bimetallic photonic crystal fiber biosensor

    Science.gov (United States)

    Hameed, Mohamed Farhat O.; Alrayk, Yassmin K. A.; Shaalan, Abdelhamid A.; El Deeb, Walid S.; Obayya, Salah S. A.

    2016-10-01

    A design of a highly sensitive multichannel biosensor based on photonic crystal fiber is proposed and analyzed. The suggested design has a silver layer as a plasmonic material coated by a gold layer to protect silver oxidation. The reported sensor is based on detection using the quasi transverse electric (TE) and quasi transverse magnetic (TM) modes, which offers the possibility of multichannel/multianalyte sensing. The numerical results are obtained using a finite element method with perfect matched layer boundary conditions. The sensor geometrical parameters are optimized to achieve high sensitivity for the two polarized modes. High refractive-index sensitivities of about 4750 nm/RIU (refractive index unit) and 4300 nm/RIU, with corresponding resolutions of 2.1×10^-5 RIU and 2.33×10^-5 RIU, can be obtained for the quasi TM and quasi TE modes of the proposed sensor, respectively. Further, the reported design can be used as a self-calibration biosensor within an unknown analyte refractive index ranging from 1.33 to 1.35 with high linearity and high accuracy. Moreover, the suggested biosensor has advantages in terms of compactness and better integration of microfluidics setup, waveguide, and metallic layers into a single structure.
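
The quoted resolutions follow directly from dividing the minimum detectable wavelength shift by the sensitivity. Assuming a 0.1 nm spectral resolution for the interrogation setup (our assumption; the value is conventional in SPR-sensor papers) reproduces the abstract's numbers:

```python
# R = dlambda_min / S
dlambda_min = 0.1                                   # nm, assumed resolution
sens = {"quasi-TM": 4750.0, "quasi-TE": 4300.0}     # nm/RIU, from the abstract
res = {mode: dlambda_min / S for mode, S in sens.items()}
for mode, r in res.items():
    print(mode, r)   # RIU; matches the quoted 2.1e-5 and 2.33e-5
```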

  12. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    Science.gov (United States)

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
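
A metabolome-wide significance threshold at FDR < 0.05, as used above, is typically enforced with the Benjamini-Hochberg step-up procedure. A stdlib sketch with hypothetical p-values (not the study's data):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Indices of hypotheses rejected at FDR level q (BH step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank                 # largest rank passing the step-up test
    return sorted(order[:k])

# hypothetical p-values for a handful of metabolite associations
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74]
print(benjamini_hochberg(p))   # -> [0, 1]
```

Note the step-up rule rejects everything up to the largest rank whose p-value clears its threshold, which is less conservative than Bonferroni when many associations are tested.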

  13. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    Science.gov (United States)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

  14. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents risk and sensitivity analysis of external event impacts on safe operation in general, and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and assessment of the external hazards. Real statistical data are used, as well as initial external event simulation. The preliminary screening criteria are applied. The analysis of external event impact on the NPP safe operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Such events as aircraft crash, extreme rains and winds, forest fire, and flying parts of the turbine are analysed. The models are developed and probabilities are calculated. As an example for sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty features raised by the external event and its model. Even in cases where the external events analysis shows rather limited danger, the sensitivity analysis can determine the causes with the highest influence. These possible variations can be significant in future for safety-level and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the events' occurrence and propagation can be sufficiently uncertain. (author)

  15. Introducing wet aerosols into the static high sensitivity ICP (SHIP)

    Energy Technology Data Exchange (ETDEWEB)

    Scheffer, Andy; Engelhard, Carsten; Sperling, Michael; Buscher, Wolfgang [University of Muenster, Institute of Inorganic and Analytical Chemistry, Muenster (Germany)

    2007-08-15

    A demountable design of the static high sensitivity ICP (SHIP) for optical emission spectrometry is presented, and its use as an excitation source with the introduction of wet aerosols was investigated. Aerosols were produced by standard pneumatic sample introduction systems, namely a cross flow nebulizer, Meinhard nebulizer and PFA low flow nebulizer, which have been applied in conjunction with a double pass and a cyclonic spray chamber. The analytical capabilities of these sample introduction systems in combination with the SHIP system were evaluated with respect to the achieved sensitivity. It was found that a nebulizer tailored for low argon flow rates (0.3-0.5 L min{sup -1}) is best suited for the low flow plasma (SHIP). An optimization of all gas flow rates of the SHIP system with the PFA low flow nebulizer was carried out in a two-dimensional way with the signal to background ratio (SBR) and the robustness as optimization target parameters. Optimum conditions for a torch model with 1-mm injector tube were 0.25 and 0.36 L min{sup -1} for the plasma gas and the nebulizer gas, respectively. A torch model with a 2-mm injector tube was optimized to 0.4 L min{sup -1} for the plasma gas and 0.44 L min{sup -1} for the nebulizer gas. In both cases the SHIP system saves approximately 95% of the argon consumed by conventional inductively coupled plasma systems. The limits of detection were found to be in the low microgram per litre range and below for many elements, which was quite comparable to those of the conventional setup. Furthermore, the short-term stability and the wash out behaviour of the SHIP were investigated. Direct comparison with the conventional setup indicated that no remarkable memory effects were caused by the closed design of the torch. The analysis of a NIST SRM 1643e (Trace Elements in Water) with the SHIP yielded recoveries of 97-103% for 13 elements, measured simultaneously. (orig.)

  16. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
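The worst case calibration described in this abstract has a compact form for matched binary outcomes. A minimal sketch (not the authors' code), assuming a Rosenbaum-type bound gamma on the within-pair odds of receiving treatment:

```python
from scipy.stats import binom

def worst_case_pvalue(n_discordant, n_treated_events, gamma):
    """Upper bound on the one-sided McNemar p-value when the within-pair
    odds of treatment assignment are bounded by gamma (gamma = 1 recovers
    the ordinary randomization test)."""
    p_upper = gamma / (1.0 + gamma)            # worst case per-pair bias
    # P(T >= observed) under Binomial(n_discordant, p_upper)
    return binom.sf(n_treated_events - 1, n_discordant, p_upper)

# toy data: 100 discordant pairs, 65 with the event on the treated side
for g in (1.0, 1.5, 2.0):
    print(g, worst_case_pvalue(100, 65, g))
```

Raising gamma shows how quickly significance erodes if every pair is assumed to carry the worst case bias; the paper's average-case calibration relaxes exactly that assumption.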

  17. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

Full Text Available As a new formulation in structural analysis, the Integrated Force Method has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A set of stochastic sensitivity analysis formulations of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program, since the models of stochastic finite element and stochastic design sensitivity analysis are almost identical.
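The perturbation-versus-Monte-Carlo verification described above can be illustrated generically. The sketch below uses a hypothetical two-parameter response function, not the IFM equations; it propagates variance to first order and checks the result against direct simulation:

```python
import numpy as np

# Hypothetical structural response (illustrative only, not the IFM
# formulation): k is a stiffness-like and p a load-like random parameter.
def response(x):
    k, p = x
    return p / k + 0.1 * p**2          # mildly nonlinear toy model

mean = np.array([2.0, 1.0])            # nominal parameter values
cov = np.diag([0.02**2, 0.05**2])      # small independent uncertainties

# first-order perturbation: Var(f) ~= g^T C g, with g the gradient at the mean
eps = 1e-6
g = np.array([(response(mean + eps * np.eye(2)[i])
               - response(mean - eps * np.eye(2)[i])) / (2 * eps)
              for i in range(2)])
var_pert = g @ cov @ g

# direct Monte Carlo check of the perturbation estimate
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, cov, 200_000)
var_mc = response(samples.T).var()
print(var_pert, var_mc)
```

For small uncertainties the two estimates agree closely, which is the kind of substantiation the abstract refers to.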

  18. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
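The reference stochastic technique mentioned here, Monte Carlo simulation followed by a linear regression, can be sketched on a toy surrogate (all coefficients below are hypothetical, not EVEREST model data):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical surrogate with three uncertain parameters of different
# influence (an illustrative stand-in for a disposal-system model).
x = rng.uniform(0.0, 1.0, size=(n, 3))
y = 5.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2] + rng.normal(0, 0.1, n)

# linear regression of the Monte Carlo outputs on the sampled inputs
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0][1:]

# standardized regression coefficients rank the parameters by importance
src = beta * x.std(axis=0) / y.std()
print(src)
```

The standardized regression coefficients (SRCs) rank the parameters by their contribution to output variance, which is how such Monte Carlo plus regression analyses identify sensitive parameters.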

  19. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
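As a rough illustration of why smoothing methods outperform linear regression when input-output relationships are nonlinear, the sketch below (a LOESS smooth via statsmodels, on a hypothetical test function, not the authors' procedure) scores each input by the share of output variance captured by its smooth:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-1, 1, size=(n, 2))
# strong non-monotonic effect of x0, weak linear effect of x1
y = np.sin(np.pi * x[:, 0])**2 + 0.1 * x[:, 1] + rng.normal(0, 0.05, n)

def smoothed_variance_fraction(xi, y, frac=0.1):
    """Share of Var(y) captured by a LOESS smooth of y on a single input --
    a nonparametric analogue of a first-order sensitivity index."""
    fit = lowess(y, xi, frac=frac, return_sorted=False)
    return fit.var() / y.var()

s0 = smoothed_variance_fraction(x[:, 0], y)
s1 = smoothed_variance_fraction(x[:, 1], y)
print(s0, s1)  # x0 dominates even though its linear correlation with y is ~0
```

A linear or rank regression would score x0 near zero here; the smooth-based measure recovers its dominant influence.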

  20. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  1. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First, equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  2. Sensitivity Study on Analysis of Reactor Containment Response to LOCA

    International Nuclear Information System (INIS)

    Chung, Ku Young; Sung, Key Yong

    2010-01-01

    As a reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to get the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to give regulative insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, a LBLOCA in Kori 3 and 4 nuclear power plant (NPP) is analyzed by CONTEMPT-LT computer code

  3. Sensitivity Study on Analysis of Reactor Containment Response to LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ku Young; Sung, Key Yong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2010-10-15

    As a reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to get the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to give regulative insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, a LBLOCA in Kori 3 and 4 nuclear power plant (NPP) is analyzed by CONTEMPT-LT computer code

  4. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
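The partial variances and first-order sensitivity indices of Sobol's decomposition are commonly estimated with a pick-freeze sampling scheme. A minimal sketch on an additive toy model with known indices (this is the generic estimator, not the authors' algorithm for stochastic simulators):

```python
import numpy as np

def sobol_first_order(model, d, n, rng):
    """Monte Carlo estimate of first-order Sobol' indices via the
    pick-freeze (Saltelli-style) scheme, for d independent U(0,1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # freeze all but input i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

def model(X):                                   # additive toy model with
    return 2.0 * X[:, 0] + 1.0 * X[:, 1]        # known indices (0.8, 0.2)

rng = np.random.default_rng(0)
S = sobol_first_order(model, 2, 100_000, rng)
print(S)
```

The paper's contribution is to extend this kind of variance decomposition so that stochastic reaction channels, not only kinetic parameters, appear as sources in the decomposition.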

  5. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on the Latin Hypercube Sampling (LHS of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods are illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables are contributing significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
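A PRCC computation of the kind used here can be sketched directly: rank-transform all variables, regress the other inputs out of both the input of interest and the output, and correlate the residuals (toy model below, not the SWMM setup):

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        others = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        # residuals after removing the other (ranked) inputs
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[i] = pearsonr(res_x, res_y)[0]
    return out

rng = np.random.default_rng(3)
X = rng.uniform(size=(3000, 3))
# monotonic effects of x0 (strong) and x1 (moderate); x2 is inert
y = X[:, 0]**3 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 3000)
print(prcc(X, y))
```

As the abstract notes, PRCC only detects monotonic associations; the mutual-information measure is the complementary tool for non-monotonic ones.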

  6. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  7. An evaluation of a single-step extraction chromatography separation method for Sm-Nd isotope analysis of micro-samples of silicate rocks by high-sensitivity thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Li Chaofeng; Li Xianhua; Li Qiuli; Guo Jinghui; Li Xianghui; Liu Tao

    2011-01-01

Graphical abstract: Distribution curve of all eluting fractions for BCR-2 (1-2-3.5-7 mg) on an LN column using HCl and HF as eluting reagents. Highlights: → This analytical protocol affords simple and rapid Sm and Nd isotope analysis of minor rock samples. → The single-step separation method exhibits a satisfactory separation effect for complex silicate samples. → Corrected 143Nd/144Nd data show excellent accuracy even when the 140Ce16O+/144Nd16O+ ratio reached 0.03. - Abstract: A single-step separation scheme is presented for the Sm-Nd radiogenic isotope system on very small samples (1-3 mg) of silicate rock. This method is based on Eichrom LN Spec chromatographic material and affords a straightforward separation of Sm-Nd from a complex matrix with good purity and satisfactory blank levels, suitable for thermal ionization mass spectrometry (TIMS). This technique, characterized by high efficiency (single-step Sm-Nd separation) and high sensitivity (TIMS on the NdO+ ion beam), is able to process samples rapidly (3-4 h) with low procedure blanks. 143Nd/144Nd ratios and Sm-Nd concentrations are presented for eleven international silicate rock reference materials, spanning a wide range of Sm-Nd contents and bulk compositions. The analytical results show good agreement with recommended values, within ±0.004% for the 143Nd/144Nd isotopic ratio and ±2% for Sm-Nd quantification at the 95% confidence level. It is noted that the uncertainty of this method is about 3 times larger than the typical precision achievable with two-stage full separation followed by state-of-the-art conventional TIMS using Nd+ ion beams, which requires much larger amounts of Nd. Hence, our single-step separation followed by the NdO+ ion beam technique is preferred for the analysis of microsamples.

  8. Layer-by-Layer-Assembled AuNPs-Decorated First-Generation Poly(amidoamine) Dendrimer with Reduced Graphene Oxide Core as Highly Sensitive Biosensing Platform with Controllable 3D Nanoarchitecture for Rapid Voltammetric Analysis of Ultratrace DNA Hybridization.

    Science.gov (United States)

    Jayakumar, Kumarasamy; Camarada, María Belén; Dharuman, Venkataraman; Rajesh, Rajendiran; Venkatesan, Rengarajan; Ju, Huangxian; Maniraj, Mahalingam; Rai, Abhishek; Barman, Sudipta Roy; Wen, Yangping

    2018-06-12

The structure and electrochemical properties of layer-by-layer-assembled gold nanoparticles (AuNPs)-decorated first-generation (G1) poly(amidoamine) dendrimer (PD) with reduced graphene oxide (rGO) core as a highly sensitive and label-free biosensing platform with a controllable three-dimensional (3D) nanoarchitecture for the rapid voltammetric analysis of DNA hybridization at ultratrace levels were characterized. Mercaptopropionic acid (MPA) was self-assembled onto the Au substrate; GG1PD, formed by covalent functionalization between the amino terminals of G1PD and the carboxyl terminals of rGO, was covalently linked onto the MPA; and finally AuNPs were decorated onto GG1PD by strong physicochemical interaction between the AuNPs and the -OH of rGO in GG1PD, which was characterized through different techniques and confirmed by computational calculation. This controllable 3D thin-film electrode was optimized and evaluated using [Fe(CN)6]3-/4- as the redox probe and employed to covalently immobilize thiol-functionalized single-stranded DNA as the biorecognition element to form the DNA nanobiosensor, which achieved fast, ultrasensitive, and highly selective differential pulse voltammetric analysis of DNA hybridization in a linear range from 1 × 10-6 to 1 × 10-13 g m-1, with a low detection limit of 9.07 × 10-14 g m-1. This work will open a new pathway for the controllable 3D nanoarchitecture of layer-by-layer-assembled metal nanoparticles-functionalized lower-generation PD with two-dimensional layered nanomaterials as cores, which can be employed as ultrasensitive and label-free nanobiodevices for the fast diagnosis of specific genome diseases in the field of biomedicine.

  9. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.

  10. Sensitivity analysis for modules for various biosphere types

    International Nuclear Information System (INIS)

    Karlsson, Sara; Bergstroem, U.; Rosen, K.

    2000-09-01

This study presents the results of a sensitivity analysis for the modules developed earlier for calculation of ecosystem-specific dose conversion factors (EDFs). The report also includes a comparison between the probabilistically calculated mean values of the EDFs and values obtained in deterministic calculations. An overview of the distribution of radionuclides between different environmental compartments in the models is also presented. The radionuclides included in the study were 36 Cl, 59 Ni, 93 Mo, 129 I, 135 Cs, 237 Np and 239 Pu, selected to represent various behaviour in the biosphere; some are of particular importance from the dose point of view. The deterministic and probabilistic EDFs showed good agreement for most nuclides and modules. Exceptions occurred when very skewed distributions were used for parameters of importance to the results. Only a minor amount of the released radionuclides was present in the model compartments for all modules, except for the agricultural land module. The differences between the radionuclides were not pronounced, which indicates that nuclide-specific parameters were of minor importance for the retention of radionuclides over the simulated time period of 10 000 years in those modules. The results from the agricultural land module showed a different pattern. Large amounts of the radionuclides were present in the solid fraction of the saturated soil zone. The high retention within this compartment makes the zone a potential source for future exposure. Differences between the nuclides due to element-specific Kd values could be seen. The amount of radionuclides present in the upper soil layer, which is the most critical zone for exposure of humans, was less than 1% for all studied radionuclides. The sensitivity analysis showed that the physical/chemical parameters were the most important in most modules, in contrast to the dominance of biological parameters in the uncertainty analysis. The only exception was the well module, where

  11. Fast, rugged and sensitive ultra high pressure liquid chromatography tandem mass spectrometry method for analysis of cyanotoxins in raw water and drinking water--First findings of anatoxins, cylindrospermopsins and microcystin variants in Swedish source waters and infiltration ponds.

    Science.gov (United States)

    Pekar, Heidi; Westerberg, Erik; Bruno, Oscar; Lääne, Ants; Persson, Kenneth M; Sundström, L Fredrik; Thim, Anna-Maria

    2016-01-15

    Freshwater blooms of cyanobacteria (blue-green algae) in source waters are generally composed of several different strains with the capability to produce a variety of toxins. The major exposure routes for humans are direct contact with recreational waters and ingestion of drinking water not efficiently treated. The ultra high pressure liquid chromatography tandem mass spectrometry based analytical method presented here allows simultaneous analysis of 22 cyanotoxins from different toxin groups, including anatoxins, cylindrospermopsins, nodularin and microcystins in raw water and drinking water. The use of reference standards enables correct identification of toxins as well as precision of the quantification and due to matrix effects, recovery correction is required. The multi-toxin group method presented here, does not compromise sensitivity, despite the large number of analytes. The limit of quantification was set to 0.1 μg/L for 75% of the cyanotoxins in drinking water and 0.5 μg/L for all cyanotoxins in raw water, which is compliant with the WHO guidance value for microcystin-LR. The matrix effects experienced during analysis were reasonable for most analytes, considering the large volume injected into the mass spectrometer. The time of analysis, including lysing of cell bound toxins, is less than three hours. Furthermore, the method was tested in Swedish source waters and infiltration ponds resulting in evidence of presence of anatoxin, homo-anatoxin, cylindrospermopsin and several variants of microcystins for the first time in Sweden, proving its usefulness. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Development of High Sensitivity Nuclear Emulsion and Fine Grained Emulsion

    Science.gov (United States)

    Kawahara, H.; Asada, T.; Naka, T.; Naganawa, N.; Kuwabara, K.; Nakamura, M.

    2014-08-01

Nuclear emulsion is a particle detector with high spatial resolution and angular resolution. It has become useful for large-statistics experiments thanks to the development of automatic scanning systems. In 2010, a facility for emulsion production was introduced and R&D of nuclear emulsion began at Nagoya University. In this paper, we present results of the development of high sensitivity emulsion and fine grained emulsion for dark matter search experiments. Improvement of sensitivity is achieved by raising the density of silver halide crystals and doping well-adjusted amounts of chemicals. Production of fine grained emulsion was difficult because of unexpected crystal condensation. By mixing polyvinyl alcohol (PVA) into the gelatin binder, we succeeded in making a stable fine grained emulsion.

  13. Development of High Sensitivity Nuclear Emulsion and Fine Grained Emulsion

    International Nuclear Information System (INIS)

    Kawahara, H.; Asada, T.; Naka, T.; Naganawa, N.; Kuwabara, K.; Nakamura, M.

    2014-01-01

Nuclear emulsion is a particle detector with high spatial resolution and angular resolution. It has become useful for large-statistics experiments thanks to the development of automatic scanning systems. In 2010, a facility for emulsion production was introduced and R and D of nuclear emulsion began at Nagoya University. In this paper, we present results of the development of high sensitivity emulsion and fine grained emulsion for dark matter search experiments. Improvement of sensitivity is achieved by raising the density of silver halide crystals and doping well-adjusted amounts of chemicals. Production of fine grained emulsion was difficult because of unexpected crystal condensation. By mixing polyvinyl alcohol (PVA) into the gelatin binder, we succeeded in making a stable fine grained emulsion.

  14. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate allergen sensitization characteristics according to gender. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tended to sensitize the female group more frequently than the male group.
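A cluster analysis of binary sensitization data like the one described can be sketched with hierarchical clustering on Jaccard distances between allergen columns. The data below are synthetic, with a built-in co-occurrence structure; nothing here reflects the study's actual allergens:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)
# synthetic 0/1 sensitization matrix: 200 patients x 6 allergens, built so
# that allergens 0-2 co-occur, allergens 3-4 co-occur, allergen 5 is independent
base = rng.random((200, 2)) < 0.4
data = np.column_stack([base[:, 0]] * 3 + [base[:, 1]] * 2
                       + [rng.random(200) < 0.3])
data = data ^ (rng.random((200, 6)) < 0.05)       # 5% measurement noise

# cluster the allergens (columns) by their co-sensitization pattern
Z = linkage(pdist(data.T, metric="jaccard"), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

Cutting the dendrogram at three clusters recovers the two co-occurring groups and the independent allergen, which is the same logic by which the study's 39 IgE items were grouped into 8 clusters.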

  15. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
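The FAST mechanism described here, assigning each parameter a characteristic frequency and reading its variance contribution off the output spectrum, can be sketched for the classical baseline case (integer frequencies, independent parameters); this is not the paper's generalized method:

```python
import numpy as np

def fast_first_order(model, freqs, n, M=4):
    """Classical FAST: each input gets an integer characteristic frequency;
    the output-spectrum power at that frequency and its first M harmonics
    estimates the input's first-order variance contribution."""
    freqs = np.asarray(freqs)
    s = 2 * np.pi * np.arange(n) / n              # search variable, one period
    # triangle-wave search curve: samples each input uniformly on (0, 1)
    X = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(X)
    spec = np.abs(np.fft.rfft(y))**2              # one-sided power spectrum
    total = spec[1:].sum()                        # total variance (up to scale)
    return np.array([spec[[p * w for p in range(1, M + 1)]].sum() / total
                     for w in freqs])

def model(X):                                     # additive toy model with
    return X[:, 0] + 0.5 * X[:, 1]                # true indices ~ (0.8, 0.2)

S = fast_first_order(model, [11, 35], 4096)
print(S)
```

The two limitations the abstract addresses apply to exactly this baseline: integer frequencies whose harmonics can collide (aliasing), and the assumption of independent inputs.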

  16. A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors

    Directory of Open Access Journals (Sweden)

    Xi Yu

    2014-01-01

Full Text Available Life cycle assessment (LCA) has been widely used in the design phase over the last two decades to reduce a product's environmental impacts across the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. In order to improve the quality of ecodesign, there is a growing need for an approach that can relate changes in the design parameters to a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors, which have a significant influence on the product's environmental impacts, can be identified by analyzing the relationship between the environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.
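The parameter-screening step of such an approach can be sketched with normalized sensitivity coefficients (elasticities) of a hypothetical impact model; the function and coefficients below are illustrative, not LCA inventory data:

```python
# hypothetical PCB-style impact model: kg CO2e as a function of three design
# parameters (names and coefficients are illustrative, not real LCA data)
def co2e(area_cm2, copper_g, energy_kwh):
    return 0.04 * area_cm2 + 2.1 * copper_g + 0.6 * energy_kwh

nominal = {"area_cm2": 150.0, "copper_g": 12.0, "energy_kwh": 30.0}

def elasticities(f, params, h=1e-4):
    """Normalized sensitivities: % change in output per % change in a parameter."""
    base = f(**params)
    return {k: (f(**dict(params, **{k: v * (1 + h)})) - base) / base / h
            for k, v in params.items()}

print(elasticities(co2e, nominal))   # copper dominates this toy inventory
```

Ranking the elasticities tells a designer which parameter to tackle first, which is the role the paper assigns to its key environmental performance factors.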

  17. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

    Rod-type nuclear fuel was mainly developed in the past, but recent study has been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but if the interval is greater than 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no single trend common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject of analysis changes.

  18. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)]

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent study has been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but if the interval is greater than 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no single trend common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject of analysis changes.

  19. Recent trends in high spin sensitivity magnetic resonance

    Science.gov (United States)

    Blank, Aharon; Twig, Ygal; Ishay, Yakir

    2017-07-01

    new ideas, show how these limiting factors can be mitigated to significantly improve the sensitivity of induction detection. Finally, we outline some directions for the possible applications of high-sensitivity induction detection in the field of electron spin resonance.

  20. Antibody Desensitization Therapy in Highly Sensitized Lung Transplant Candidates

    Science.gov (United States)

    Snyder, L. D.; Gray, A. L.; Reynolds, J. M.; Arepally, G. M.; Bedoya, A.; Hartwig, M. G.; Davis, R. D.; Lopes, K. E.; Wegner, W. E.; Chen, D. F.; Palmer, S. M.

    2015-01-01

    As HLA antibody detection technology has evolved, detailed HLA antibody information is now available on prospective transplant recipients. Determining single-antigen antibody specificity allows for a calculated panel reactive antibody (cPRA) value, providing an estimate of the effective donor pool. For broadly sensitized lung transplant candidates (cPRA ≥ 80%), our center adopted a pretransplant multimodal desensitization protocol in an effort to decrease the cPRA and expand the donor pool. This desensitization protocol included plasmapheresis, solumedrol, bortezomib and rituximab given in combination over 19 days, followed by intravenous immunoglobulin. Eight of 18 candidates completed therapy, with the primary reasons for early discontinuation being transplant (by avoiding unacceptable antigens) or thrombocytopenia. In a mixed-model analysis, there were no significant changes in PRA or cPRA over time with the protocol. A sub-analysis of the median fluorescence intensity (MFI) change indicated a small decline that was significant in antibodies with MFI 5000–10 000. Nine of 18 candidates subsequently had a transplant. Posttransplant survival in these nine recipients was comparable to that of other pretransplant-sensitized recipients who did not receive therapy. In summary, an aggressive multimodal desensitization protocol does not significantly reduce pretransplant HLA antibodies in a broadly sensitized lung transplant candidate cohort. PMID:24666831

  1. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.
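    A minimal Monte Carlo sketch of the variance-based Sobol' indices this abstract builds on, using a Saltelli-style A/B sampling scheme. This is independent of the authors' joint GLM/GAM metamodeling step, and the test model is a toy assumption.

```python
import numpy as np

def sobol_indices(f, d, n=50_000, seed=0):
    """Monte Carlo estimates of first-order (S) and total (ST) Sobol'
    indices for a model f with d independent U(0,1) inputs, using two
    base sample matrices A, B and the 'pick-and-freeze' matrices AB_i."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # freeze all inputs but the i-th
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var     # first-order estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total estimator
    return S, ST

def model(X):
    # additive toy model: first input carries almost all the variance
    return X[:, 0] + 0.1 * X[:, 1]

S, ST = sobol_indices(model, d=2)
```

    For an additive model like this one the first-order and total indices coincide; a gap between S[i] and ST[i] would signal interaction effects.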

  2. High-sensitivity bend angle measurements using optical fiber gratings.

    Science.gov (United States)

    Rauf, Abdul; Zhao, Jianlin; Jiang, Biqiang

    2013-07-20

    We present a high-sensitivity and more flexible bend measurement method, based on the coupling of the core mode to the cladding modes at the bending region, in concatenation with an optical fiber grating serving as a band reflector. The characteristics of a bend sensing arm composed of a bending region and an optical fiber grating are examined for different configurations, including single fiber Bragg grating (FBG), chirped FBG (CFBG), and double FBGs. The bend loss curves for coated, stripped, and etched sections of fiber in the bending region with FBG, CFBG, and double FBG are obtained experimentally. The effect of the separation between the bending region and the optical fiber grating on loss is measured. The loss responses for the single FBG and CFBG configurations are compared to assess their effectiveness for practical applications. It is demonstrated that the sensitivity of the double FBG scheme is twice that of the single FBG and CFBG configurations, and hence it acts as a sensitivity multiplier. The bend loss response for different fiber diameters, obtained through etching in 40% hydrofluoric acid, is measured in the double FBG scheme; this resulted in a significant increase in sensitivity and a reduction of the dead zone.

  3. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients, using sparse matrix technology for chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
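    The direct method described above, integrating the model equation together with its first-order sensitivity coefficient equation, can be sketched on a single first-order reaction; this is a toy stand-in for the package's FORTRAN machinery, using SciPy rather than a Gear integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model: dy/dt = f(y; k) = -k*y. Sensitivity s = dy/dk obeys the
# coupled auxiliary equation ds/dt = (df/dy)*s + df/dk = -k*s - y.
k = 0.5

def rhs(t, ys):
    y, s = ys
    return [-k * y, -k * s - y]

# integrate model and sensitivity together from y(0)=1, s(0)=0
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
y_end, s_end = sol.y[0, -1], sol.y[1, -1]
# analytic solution for comparison: y = exp(-k t), dy/dk = -t exp(-k t)
```

    For a full mechanism the scalar Jacobian df/dy becomes the sparse Jacobian matrix the abstract mentions, and one auxiliary equation is integrated per rate constant.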

  4. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k_eff sensitivity made it possible to establish a priority list of nuclides for which uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)

  5. High-sensitivity troponin assays for the early rule-out or diagnosis of acute myocardial infarction in people with acute chest pain: a systematic review and cost-effectiveness analysis

    NARCIS (Netherlands)

    M. Westwood (Marie); T. van Asselt (Thea); B. Ramaekers (Bram); P. Whiting (Penny); P. Tokala (Praveen); M.A. Joore (Manuela); N. Armstrong (Nigel); J. Ross (Janine); J.L. Severens (Hans); J. Kleijnen (Jos)

    2015-01-01

    textabstractBackground: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of

  6. High-sensitivity troponin assays for the early rule-out or diagnosis of acute myocardial infarction in people with acute chest pain : a systematic review and cost-effectiveness analysis

    NARCIS (Netherlands)

    Westwood, Marie; van Asselt, Thea; Ramaekers, Bram; Whiting, Penny; Thokala, Praveen; Joore, Manuela; Armstrong, Nigel; Ross, Janine; Severens, Johan; Kleijnen, Jos

    BACKGROUND: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of unnecessary

  7. Towards highly sensitive strain sensing based on nanostructured materials

    International Nuclear Information System (INIS)

    Dao, Dzung Viet; Nakamura, Koichi; Sugiyama, Susumu; Bui, Tung Thanh; Dau, Van Thanh; Yamada, Takeo; Hata, Kenji

    2010-01-01

    This paper presents our recent theoretical and experimental study of piezo-effects in nanostructured materials for highly sensitive, high resolution mechanical sensors. The piezo-effects presented here include the piezoresistive effect in a silicon nanowire (SiNW) and single wall carbon nanotube (SWCNT) thin film, as well as the piezo-optic effect in a Si photonic crystal (PhC) nanocavity. Firstly, the electronic energy band structure of the silicon nanostructure is discussed and simulated by using the First-Principles Calculations method. The result showed a remarkably different energy band structure compared with that of bulk silicon. This difference in the electronic state will result in different physical, chemical, and therefore, sensing properties of silicon nanostructures. The piezoresistive effects of SiNW and SWCNT thin film were investigated experimentally. We found that, when the width of (110) p-type SiNW decreases from 500 to 35 nm, the piezoresistive effect increases by more than 60%. The longitudinal piezoresistive coefficient of SWCNT thin film was measured to be twice that of bulk p-type silicon. Finally, theoretical investigations of the piezo-optic effect in a PhC nanocavity based on Finite Difference Time Domain (FDTD) showed extremely high resolution strain sensing. These nanostructures were fabricated based on top-down nanofabrication technology. The achievements of this work are significant for highly sensitive, high resolution and miniaturized mechanical sensors.

  8. Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2005-01-01

    This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning.
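    The adjoint idea underlying the ASAP can be shown on the smallest possible example: for a steady linear model, a single adjoint solve yields the response sensitivity to every source term at once, instead of one forward solve per parameter. The matrices and response below are arbitrary illustrations, not from the paper.

```python
import numpy as np

# Forward model: A x = b.  Response: R = c^T x.
# Adjoint system: A^T lam = c, after which dR/db_i = lam_i for all i.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])            # response = first component of x

lam = np.linalg.solve(A.T, c)       # one adjoint solve
dR_db = lam                         # full gradient w.r.t. the source term b

# brute-force check: perturb b directly and re-solve the forward model
x = np.linalg.solve(A, b)
R = c @ x
eps = 1e-6
x_p = np.linalg.solve(A, b + np.array([eps, 0.0]))
fd = (c @ x_p - R) / eps            # finite-difference dR/db_0
```

    The payoff of the adjoint route grows with the number of parameters: the forward-perturbation check needs one extra solve per parameter, the adjoint needs one solve total.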

  9. The identification of model effective dimensions using global sensitivity analysis

    International Nuclear Information System (INIS)

    Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang

    2011-01-01

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
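    The truncation-sense effective dimension mentioned above can be computed directly once subset (closed) Sobol' indices are available. The sketch below assumes an additive model, for which the closed index of a subset is simply the sum of first-order indices, and uses invented index values.

```python
# Effective dimension in the truncation sense: the smallest number of
# leading (importance-ordered) inputs whose closed Sobol' index reaches
# a threshold such as 0.99 of the total variance.
def truncation_dimension(first_order, threshold=0.99):
    order = sorted(range(len(first_order)), key=lambda i: -first_order[i])
    total = 0.0
    for d_t, i in enumerate(order, start=1):
        total += first_order[i]
        if total >= threshold:
            return d_t
    return len(first_order)

S = [0.70, 0.25, 0.04, 0.008, 0.002]   # assumed first-order indices, sum = 1
d_t = truncation_dimension(S)          # three inputs already cover 99%
```

    A low effective dimension of this kind is exactly what predicts good QMC efficiency in the abstract's classification: the integrand behaves like a function of only d_t variables.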

  10. The identification of model effective dimensions using global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, Sergei, E-mail: s.kucherenko@ic.ac.u [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Feil, Balazs [Department of Process Engineering, University of Pannonia, Veszprem (Hungary); Shah, Nilay [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Mauntz, Wolfgang [Lehrstuhl fuer Anlagensteuerungstechnik, Fachbereich Chemietechnik, Universitaet Dortmund (Germany)

    2011-04-15

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.

  11. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik

    2009-01-01

    satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....

  12. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
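    The PRCC computation described above (rank-transform everything, regress out the other inputs, then correlate the residuals) can be implemented in a few lines. The test model is an assumption for illustration, not IMM data.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial Rank Correlation Coefficients: for each input, correlate the
    rank-transformed residuals of that input (after linearly regressing out
    the other inputs) with the equally adjusted ranks of the output."""
    Xr = np.column_stack([rankdata(col) for col in X.T])
    yr = rankdata(y)
    n, d = Xr.shape
    out = np.empty(d)
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        res_x = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(1)
X = rng.random((5_000, 3))
# nonlinear but monotone model; the third input is irrelevant
y = 10 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 5_000)
coeffs = prcc(X, y)
```

    Because only ranks are used, the monotone nonlinearity in the second input does not break the estimate, which is why PRCC suits nonlinear simulation models like the IMM.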

  13. Sensitivity analysis of network DEA illustrated in branch banking

    OpenAIRE

    N. Avkiran

    2010-01-01

    Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...

  14. Considering Respiratory Tract Infections and Antimicrobial Sensitivity: An Exploratory Analysis

    Directory of Open Access Journals (Sweden)

    Amin, R.

    2009-01-01

    Full Text Available This study was conducted to observe the sensitivity and resistance status of antibiotics for respiratory tract infection (RTI). Throat swab culture and sensitivity reports of 383 patients revealed sensitivity profiles with amoxycillin (7.9%), penicillin (33.7%), ampicillin (36.6%), co-trimoxazole (46.5%), azithromycin (53.5%), erythromycin (57.4%), cephalexin (69.3%), gentamycin (78.2%), ciprofloxacin (80.2%), cephradine (81.2%), ceftazidime (93.1%) and ceftriaxone (93.1%). Sensitivity to cefuroxime was reported in 93.1% of cases. Resistance was found with amoxycillin (90.1%), ampicillin (64.1%), penicillin (61.4%), co-trimoxazole (43.6%), erythromycin (39.6%), and azithromycin (34.7%). Cefuroxime demonstrates a higher level of sensitivity than the other antibiotics, which supports its consideration for patients with upper RTI.

  15. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    Science.gov (United States)

    2012-01-01

    OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study

  16. Sensitivity Analysis for the CLIC Damping Ring Inductive Adder

    CERN Document Server

    Holma, Janne

    2012-01-01

    The CLIC study is exploring the scheme for an electron-positron collider with high luminosity and a nominal centre-of-mass energy of 3 TeV. The CLIC pre-damping rings and damping rings will produce, through synchrotron radiation, ultra-low emittance beam with high bunch charge, necessary for the luminosity performance of the collider. To limit the beam emittance blow-up due to oscillations, the pulse generators for the damping ring kickers must provide extremely flat, high-voltage pulses. The specifications for the extraction kickers of the CLIC damping rings are particularly demanding: the flattop of the output pulse must be 160 ns duration, 12.5 kV and 250 A, with a combined ripple and droop of not more than ±0.02 %. An inductive adder allows the use of different modulation techniques and is therefore a very promising approach to meeting the specifications. PSpice has been utilised to carry out a sensitivity analysis of the predicted output pulse to the value of both individual and groups of circuit compon...

  17. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.

    1992-01-01

    The work done on this project was focused mainly on LAMPF experiment E969, known as the MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μν_e] ∼ 10^-13, is over two orders of magnitude better than previously reported values. The work done on MEGA during this period was divided between that done at Valparaiso University and that done at LAMPF. In addition, some contributions were made to a proposal to the LAMPF PAC to perform a precision measurement of the Michel ρ parameter, described below.

  18. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    1994-01-01

    The work done on this project focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μν_e] ∼ 10^-13, will be over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μν_e to test the predictions of the V-A theory of weak interactions. In this experiment the uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the presently reported value. The detectors are operational, and data taking has begun.

  19. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.

    1993-01-01

    The work done on this project was focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μν_e] ∼ 10^-13, is over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μν_e to test the V-A theory of weak interactions. The uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the presently reported value.

  20. High sensitive quench detection method using an integrated test wire

    International Nuclear Information System (INIS)

    Fevrier, A.; Tavergnier, J.P.; Nithart, H.; Kiblaire, M.; Duchateau, J.L.

    1981-01-01

    A high-sensitivity quench detection method which works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference between the voltages at the superconducting winding terminals and at the terminals of a secondary winding strongly coupled to the primary. The secondary winding can consist of a ''zero-current strand'' of the superconducting cable not connected to one of the winding terminals, or an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field.

  1. Development of miniature γ dose rate monitor with high sensitivity

    International Nuclear Information System (INIS)

    Shi Huilu; Tuo Xianguo; Xi Dashun; Tang Rong; Mu Keliang; Yang Jianbo

    2009-01-01

    This paper introduces a miniature γ dose rate monitor with high sensitivity whose design is based on a single-chip microcomputer. It can continuously monitor the γ dose rate and send the monitoring data to a host over either wired or wireless communications, according to the actual conditions. It has two power supply options, AC power and battery, which can be chosen to suit the circumstances. The design idea, the implementation of the hardware and software, and the system structure of the monitor are described in detail in this paper. The experimental results show that the measurable range is 0.1 mR/h-200 mR/h, the γ sensitivity is 90 cps/mR/h, the dead time is below 200 μs, and the stability error is below ±10%. (authors)
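    Converting a measured count rate from a counter like this one into a dose rate involves the standard non-paralyzable dead-time correction n = m/(1 - m·τ). The sketch below uses the abstract's sensitivity (90 cps per mR/h) and its upper-bound dead time of 200 μs as assumed constants; the firmware's actual conversion is not described in the record.

```python
# Assumed instrument constants from the abstract's quoted figures.
SENSITIVITY_CPS_PER_MR_H = 90.0   # counts per second per (mR/h)
DEAD_TIME_S = 200e-6              # upper bound on dead time, seconds

def dose_rate_mr_per_h(measured_cps):
    """Correct the measured rate for dead-time losses (non-paralyzable
    model), then convert counts per second to mR/h via the sensitivity."""
    true_cps = measured_cps / (1.0 - measured_cps * DEAD_TIME_S)
    return true_cps / SENSITIVITY_CPS_PER_MR_H

rate = dose_rate_mr_per_h(450.0)   # 450 cps measured -> ~5.49 mR/h
```

    At 450 cps the dead-time correction is already a 9% effect, which is why the sub-200 μs dead time matters near the top of the 200 mR/h range.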

  2. Polymer-Particle Pressure-Sensitive Paint with High Photostability

    Directory of Open Access Journals (Sweden)

    Yu Matsuda

    2016-04-01

    Full Text Available We propose a novel fast-responding and paintable pressure-sensitive paint (PSP) based on polymer particles, i.e. polymer-particle (pp-)PSP. As a fast-responding PSP, polymer-ceramic (PC-)PSP is widely studied. Since PC-PSP generally contains titanium(IV) oxide (TiO2) particles, a large reduction in the luminescent intensity occurs due to the photocatalytic action of TiO2. We propose the use of polymer particles instead of TiO2 particles to prevent this reduction in the luminescent intensity. Here, we fabricate pp-PSP based on polystyrene particles with a diameter of 1 μm, and investigate the pressure and temperature sensitivities, the response time, and the photostability. The performance of pp-PSP is compared with that of PC-PSP, indicating high photostability with the other characteristics comparable to PC-PSP.

  3. Field test investigation of high sensitivity fiber optic seismic geophone

    Science.gov (United States)

    Wang, Meng; Min, Li; Zhang, Xiaolei; Zhang, Faxiang; Sun, Zhihui; Li, Shujuan; Wang, Chang; Zhao, Zhong; Hao, Guanghu

    2017-10-01

    Seismic reflection, whose measured signal is artificial seismic waves, is the most effective and most widely used method in geophysical prospecting, applied to exploration for oil, gas and coal. When a seismic wave travelling through the Earth encounters an interface between two materials with different acoustic impedances, some of the wave energy reflects off the interface and some refracts through it. At its most basic, the seismic reflection technique consists of generating seismic waves and measuring the time taken for the waves to travel from the source, reflect off an interface and be detected by an array of geophones at the surface. Compared to traditional geophones such as electric, magnetic, mechanical and gas geophones, optical fiber geophones have many advantages: they achieve sensing and signal transmission simultaneously. With the development of fiber grating sensor technology, fiber Bragg gratings (FBGs) are being applied in seismic exploration and draw more and more attention for their advantages of immunity to electromagnetic interference, high sensitivity and insensitivity to meteorological conditions. In this paper, we designed a high sensitivity geophone and tested its sensitivity, based on the theory of FBG sensing. The frequency response range is from 10 Hz to 100 Hz and the acceleration sensitivity of the fiber optic seismic geophone is over 1000 pm/g. A sixteen-element fiber optic seismic geophone array system is presented and a field test is performed in the Shengli oilfield of China. The field test shows that: (1) the fiber optic seismic geophone has a higher sensitivity than the traditional geophone between 1-100 Hz; (2) the low-frequency reflection wave continuity of the fiber Bragg grating geophone is better.

  4. A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager.

    Science.gov (United States)

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2011-10-01

    Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity, leading to poor low-light imaging. Certain applications, including our work on animal-mountable systems for imaging in awake and unrestrained rodents, require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μm CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm² at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm². Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternately, sensitivity could be increased to detect about 0.8 nW/cm² while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt.
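    The SNR and binning figures above follow the usual decibel conventions; a short sketch, assuming a voltage-domain (20·log10) SNR definition and a shot-noise-limited √N gain from binning N pixels, neither of which the abstract states explicitly:

```python
import math

def snr_db(signal, noise):
    """Voltage-domain SNR in decibels (assumed convention)."""
    return 20.0 * math.log10(signal / noise)

def binned_snr_db(base_snr_db, n_binned_pixels):
    # Binning N pixels sums the signal N-fold while uncorrelated noise
    # grows as sqrt(N), so SNR improves by sqrt(N) -> +10*log10(N) dB.
    return base_snr_db + 10.0 * math.log10(n_binned_pixels)

base = snr_db(158.5, 1.0)              # a ~158.5:1 ratio is ~44 dB
with_binning = binned_snr_db(base, 16)  # 4x4 binning: +12 dB, ideally
```

    In practice the binning gain can instead be spent on frame rate or on minimum detectable signal, which is the trade-off the abstract reports (675 fps, or 0.8 nW/cm² at 70 fps).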

  5. Highly sensitive detection of urinary cadmium to assess personal exposure

    Energy Technology Data Exchange (ETDEWEB)

    Argun, Avni A.; Banks, Ashley M.; Merlen, Gwendolynne; Tempelman, Linda A. [Giner, Inc., 89 Rumford Ave., Newton, MA 02466, United States]; Becker, Michael F.; Schuelke, Thomas [Fraunhofer USA – CCL, 1449 Engineering Research Ct., East Lansing, MI 48824, United States]; Dweik, Badawi M., E-mail: bdweik@ginerinc.com [Giner, Inc., 89 Rumford Ave., Newton, MA 02466, United States]

    2013-04-22

    Highlights: ► An electrochemical sensor capable of detecting cadmium at parts-per-billion levels in urine. ► A novel fabrication method for Boron-Doped Diamond (BDD) ultramicroelectrode (UME) arrays. ► Unique combination of BDD UME arrays and a differential pulse voltammetry algorithm. ► High sensitivity, high reproducibility, and very low noise levels. ► Opportunity for portable operation to assess on-site personal exposure. -- Abstract: A series of Boron-Doped Diamond (BDD) ultramicroelectrode arrays were fabricated and investigated for their performance as electrochemical sensors to detect trace-level metals such as cadmium. The steady-state diffusion behavior of these sensors was validated using cyclic voltammetry, followed by electrochemical detection of cadmium in water and in human urine to demonstrate high sensitivity (>200 μA ppb⁻¹ cm⁻²) and low background current (<4 nA). When an array of ultramicroelectrodes was positioned with optimal spacing, these BDD sensors showed a sigmoidal diffusion behavior. They also demonstrated high accuracy with linear dose dependence for quantification of cadmium in a certified reference river water sample from the U.S. National Institute of Standards and Technology (NIST) as well as in a human urine sample spiked with 0.25–1 ppb cadmium.

  6. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  7. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM and a sensitivity analysis was carried out. Then optimization for the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  8. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
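The generic procedure described above, propagating parameter uncertainty by Monte Carlo and then correlating each parameter with the output, can be sketched as follows. This is not the PATHWAY code: the toy model is invented, and a plain Pearson correlation stands in for the partial correlation coefficient to keep the example short.

```python
# Sketch of Monte Carlo uncertainty propagation plus correlation-based
# sensitivity ranking. The "model" and its parameter ranges are invented;
# Pearson correlation is used as a simplified stand-in for the partial
# correlation coefficient named in the abstract.
import random
import statistics

def model(k_transfer, k_decay, deposition):
    # toy food-chain model: concentration rises with transfer and deposition,
    # falls with decay
    return deposition * k_transfer / (1.0 + k_decay)

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
samples = [(random.uniform(0.5, 1.5),    # k_transfer
            random.uniform(0.01, 0.1),   # k_decay
            random.uniform(10, 30))      # deposition
           for _ in range(5000)]
outputs = [model(*s) for s in samples]   # propagated uncertainty in output

for i, name in enumerate(["k_transfer", "k_decay", "deposition"]):
    r = pearson([s[i] for s in samples], outputs)
    print(f"{name}: r = {r:+.2f}")
```

Random values produced for the propagation step are reused for the correlation step, mirroring the pairing of uncertainty and sensitivity analysis described in the abstract.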

  9. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture show that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  10. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  11. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
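The Sobol' first-order index named above, the fraction of output variance explained by one input alone, has a standard pick-and-freeze Monte Carlo estimator. The sketch below applies it to an invented toy function with known analytic indices; it is a generic illustration of the method, not the VarroaPop analysis itself.

```python
# Minimal first-order Sobol' index estimator (pick-and-freeze form) on a toy
# additive function. Generic sketch of the method only; the function and
# sample sizes are invented for illustration.
import random

def f(x):
    return x[0] + 2.0 * x[1]      # x[2] is inert, so its index is 0

def sobol_first_order(func, dim, n, rng):
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [func(a) for a in A]
    yB = [func(b) for b in B]
    mean_A, mean_B = sum(yA) / n, sum(yB) / n
    var_A = sum((y - mean_A) ** 2 for y in yA) / n
    S = []
    for i in range(dim):
        # C_i is B with column i taken from A: "freeze" every input but x_i
        yC = [func(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mean_A * mean_B
        S.append(cov / var_A)
    return S

S = sobol_first_order(f, dim=3, n=20000, rng=random.Random(0))
# analytic values for this f are S1 = 0.2, S2 = 0.8, S3 = 0.0
print([round(s, 2) for s in S])
```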

  12. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

  13. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows to derive variance estimates of results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both variability of equivalence scales and sampling variance leads to confidence intervals which are wide.
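The sensitivity described above, how poverty estimates move with the choice of equivalence scale, can be illustrated with a tiny example. All numbers and the single-parameter scale family below are invented for the sketch; the paper's own estimator and variance calculations are not reproduced.

```python
# Illustrative sketch: how the poverty headcount shifts with the choice of
# equivalence scale. Household data and the power-scale family are invented.
def equivalised(income, adults, children, theta):
    # parametric scale: household size raised to the power theta
    return income / (adults + children) ** theta

def poverty_rate(households, theta):
    inc = sorted(equivalised(y, a, c, theta) for y, a, c in households)
    median = inc[len(inc) // 2]
    line = 0.6 * median          # 60%-of-median poverty line
    return sum(1 for y in inc if y < line) / len(inc)

households = [(12000, 1, 0), (30000, 2, 2), (18000, 1, 1),
              (45000, 2, 1), (9000, 1, 0), (26000, 2, 3),
              (52000, 2, 0), (15000, 1, 2)]  # (income, adults, children)

for theta in (0.5, 0.75, 1.0):   # sweep the scale parameter
    print(theta, poverty_rate(households, theta))
```

Even in this toy data set the headcount jumps from 0 to 25% across the scale family, which is exactly the kind of variability the paper's framework quantifies.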

  14. Sensitivity analysis of water consumption in an office building

    Science.gov (United States)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with sensitivity analysis of real water consumption in an office building. During a long-term real study, a reduction of pressure in its water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time step durations. Correlations between maximal coefficients of water demand variation during working time and provided pressure were suggested. The influence of provided pressure in the water connection on mean coefficients of water demand variation was pointed out, both for working hours of all days together and separately for days with identical working hours.
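In common waterworks practice a "coefficient of water demand variation" is a peaking ratio of demand to its mean over the period considered. The sketch below assumes that definition (the paper may define it differently) and uses invented hourly readings:

```python
# Sketch: maximal and minimal coefficients of demand variation as peaking
# ratios against the mean. Definition assumed, data invented.
def variation_coefficients(hourly_demand_l):
    mean = sum(hourly_demand_l) / len(hourly_demand_l)
    return max(hourly_demand_l) / mean, min(hourly_demand_l) / mean

demand = [120, 180, 240, 210, 160, 140, 260, 190]  # litres/hour (invented)
k_max, k_min = variation_coefficients(demand)
print(round(k_max, 2), round(k_min, 2))  # -> 1.39 0.64
```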

  15. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, using replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of variables on their efficiency and its “sustainability”.

  16. Operationalization of the Russian Version of Highly Sensitive Person Scale

    Directory of Open Access Journals (Sweden)

    Регина Вячеславовна Ершова

    2018-12-01

    Full Text Available The aim of the present study was to operationalize a Russian version of the Highly Sensitive Person Scale (HSPS). The empirical data were collected in two ways: actively, through oral advertising and inviting those who wished to take part in the study (snowball technique), and passively (placement of ads about taking part in the research in the social networks VKontakte and Facebook). As a result, 350 university students (117 men, 233 women, average age 18.2 ± 1.7) came to a research laboratory and filled out the HSPS questionnaire, and another 510 respondents (380 women, 130 men, average age 22.6 ± 7.9) filled out the HSPS online. The results of the study did not confirm the one-dimensional model of the construct proposed by Aron & Aron (1997), nor the three-factor solution most commonly used in English-language studies. The hierarchical cluster and confirmatory analyses used in the operationalization procedure allowed us to conclude that the variance of the Russian version of HSPS is best described by a two-factor model comprising two separate subscales: Ease of Excitation (EOE) and Low Threshold of Sensitivity (LTS). Sensory Processing Sensitivity may be defined as an increased susceptibility to external and internal stimuli, realized through negative emotional responses and deep susceptibility (distress) to excessive stimulation.

  17. A wide-bandwidth and high-sensitivity robust microgyroscope

    International Nuclear Information System (INIS)

    Sahin, Korhan; Sahin, Emre; Akin, Tayfun; Alper, Said Emre

    2009-01-01

    This paper reports a microgyroscope design concept with the help of a 2 degrees of freedom (DoF) sense mode to achieve a wide bandwidth without sacrificing mechanical and electronic sensitivity and to obtain robust operation against variations under ambient conditions. The design concept is demonstrated with a tuning fork microgyroscope fabricated with an in-house silicon-on-glass micromachining process. When the fabricated gyroscope is operated with a relatively wide bandwidth of 1 kHz, measurements show a relatively high raw mechanical sensitivity of 131 µV/(°/s). The variation in the amplified mechanical sensitivity (scale factor) of the gyroscope is measured to be less than 0.38% for large ambient pressure variations such as from 40 to 500 mTorr. The bias instability and angle random walk of the gyroscope are measured to be 131°/h and 1.15°/√h, respectively

  18. Development of an underwater high sensitivity Cherenkov detector: Sea Urchin

    International Nuclear Information System (INIS)

    Camerini, U.; McGibney, D.; Roberts, A.

    1982-01-01

    The need for a high gain, high sensitivity Cherenkov light sensor to be used in a deep underwater muon and neutrino detector (DUMAND) array has led to the design of the Sea Urchin detector. In this design a spherical photocathode PMT is optically coupled through a glass hemisphere to a large number of glass spines, each of which is filled with a wavelength-shifting (WLS) solution of a high quantum efficiency phosphor. The Cherenkov radiation is absorbed in the spine, isotropically re-radiated at a longer wavelength, and a fraction of the fluorescent light is internally reflected in the spine and guided to the photomultiplier concentrically located in the glass hemisphere. Experiments measuring the optical characteristics of the spines and computer programs simulating light transformation and detection cross sections are described. Overall optical gains in the range 5-10 are achieved. The WLS solution is inexpensive, and may have other applications. (orig.)

  19. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG is created herein with the moisture separator assembly and the tube bundle assembly. The seismic analysis is performed with the RCS piping and the reactor pressure vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, parameter sensitivities of the seismic analysis results are studied, such as the effect of another SG, supports, anti-vibration bars (AVBs), and so on. Our results show that seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus of research and design for future new-type NPP SGs. (authors)

  20. High sensitivity of quick view capsule endoscopy for detection of small bowel Crohn's disease

    DEFF Research Database (Denmark)

    Halling, Morten Lee; Nathan, Torben; Kjeldsen, Jens

    2014-01-01

    Capsule endoscopy (CE) has a high sensitivity for diagnosing small bowel Crohn's disease, but video analysis is time consuming. The quick view (qv) function is an effective tool to reduce time consumption. The aim of this study was to determine the rate of missed small bowel ulcerations with qv-C...

  1. Highly Sensitive Flexible Magnetic Sensor Based on Anisotropic Magnetoresistance Effect.

    Science.gov (United States)

    Wang, Zhiguang; Wang, Xinjun; Li, Menghui; Gao, Yuan; Hu, Zhongqiang; Nan, Tianxiang; Liang, Xianfeng; Chen, Huaihao; Yang, Jia; Cash, Syd; Sun, Nian-Xiang

    2016-11-01

    A highly sensitive flexible magnetic sensor based on the anisotropic magnetoresistance effect is fabricated. A limit of detection of 150 nT is observed and excellent deformation stability is achieved after wrapping of the flexible sensor, with bending radii down to 5 mm. The flexible AMR sensor is used to read a magnetic pattern with a thickness of 10 μm that is formed by ferrite magnetic inks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. High sensitivity on-line monitor for radioactive effluent

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Toshimi [Tohoku Electric Power Co. Ltd., Sendai (Japan); Ishizuka, Akira; Abe, Eisuke; Inoue, Yasuhiko; Fujii, Masaaki; Kitaguchi, Hiroshi; Doi, Akira

    1983-04-01

    A new approach for a highly sensitive effluent monitor is presented. The free flow type monitor, which consists of a straightener, nozzle, monitoring section and γ-ray detector, is demonstrated to be effective in providing long-term stability. 160 start-and-stop cycles of effluent discharge were repeated in a 120-h testing period. Results showed no background increase for the free flow type monitor. From the background count rate, the lowest detection limit was determined to be 2.2 × 10⁻² Bq/ml for a 300 s measurement time.

  3. High-Sensitivity Measurement of Density by Magnetic Levitation.

    Science.gov (United States)

    Nemiroski, Alex; Kumar, A A; Soh, Siowling; Harburg, Daniel V; Yu, Hai-Dong; Whitesides, George M

    2016-03-01

    This paper presents methods that use Magnetic Levitation (MagLev) to measure very small differences in density of solid diamagnetic objects suspended in a paramagnetic medium. Previous work in this field has shown that, while it is a convenient method, standard MagLev (i.e., where the direction of magnetization and gravitational force are parallel) cannot resolve very small differences in density between small (millimeter-scale) objects because (i) objects close in density prevent each other from reaching an equilibrium height due to hard contact and excluded volume, and (ii) using weaker magnets or reducing the magnetic susceptibility of the medium destabilizes the magnetic trap. The present work investigates the use of weak magnetic gradients parallel to the faces of the magnets as a means of increasing the sensitivity of MagLev without destabilization. Configuring the MagLev device in a rotated state (i.e., where the direction of magnetization and gravitational force are perpendicular) relative to the standard configuration enables simple measurements along the axes with the highest sensitivity to changes in density. Manipulating the distance of separation between the magnets or the lengths of the magnets (along the axis of measurement) enables the sensitivity to be tuned. These modifications enable an improvement in the resolution up to 100-fold over the standard configuration, and measurements with resolution down to 10⁻⁶ g/cm³. Three examples of characterizing the small differences in density among samples of materials having ostensibly indistinguishable densities (Nylon spheres, PMMA spheres, and drug spheres) demonstrate the applicability of rotated MagLev to measuring the density of small (0.1-1 mm) objects with high sensitivity. This capability will be useful in materials science, separations, and quality control of manufactured objects.
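In the linear regime of MagLev densitometry, levitation height varies linearly with density, so two standards of known density calibrate the device. A minimal sketch of that calibration idea, with invented numbers (the paper's actual device parameters are not reproduced):

```python
# Sketch: density from levitation height by linear interpolation between two
# calibration standards. Assumes the linear MagLev regime; numbers invented.
def density_from_height(h_mm, cal1, cal2):
    """Interpolate density (g/cm3) from two (height_mm, density) standards."""
    (h1, rho1), (h2, rho2) = cal1, cal2
    return rho1 + (h_mm - h1) * (rho2 - rho1) / (h2 - h1)

# calibration beads: 1.050 g/cm3 levitates at 20 mm, 1.130 g/cm3 at 5 mm
rho = density_from_height(12.5, (20.0, 1.050), (5.0, 1.130))
print(round(rho, 3))  # -> 1.09
```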

  4. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems

  5. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
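GRESS instruments existing FORTRAN source so that derivatives of results with respect to data are propagated alongside the values themselves. The same core idea can be sketched in miniature, without any GRESS specifics, as forward-mode automatic differentiation with dual numbers:

```python
# Sketch of forward-mode automatic differentiation with dual numbers: each
# value carries its derivative, and arithmetic propagates both. This
# illustrates the derivative-propagation idea only, not GRESS internals.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(x):
    # f(x) = 3x^2 + 2x, so f'(x) = 6x + 2
    return 3 * x * x + 2 * x

x = Dual(4.0, 1.0)        # seed dx/dx = 1
y = model(x)
print(y.val, y.der)       # -> 56.0 26.0
```

An instrumented code can then report normalized first derivatives of any result with respect to any input, which is exactly the sensitivity information the abstract describes.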

  6. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
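Step (2) above, parameter screening, is commonly done with Morris elementary effects. The sketch below assumes that choice (the report does not specify its screening method here) and uses an invented toy model:

```python
# Sketch of Morris elementary-effects screening: average absolute one-at-a-
# time finite differences rank parameters by influence. Toy model and
# settings are invented for illustration.
import random

def toy_model(x):
    return 5.0 * x[0] + 0.1 * x[1] + x[0] * x[2]

def morris_screen(func, dim, n_traj, delta=0.25, seed=0):
    rng = random.Random(seed)
    mu_star = [0.0] * dim          # mean absolute elementary effect
    for _ in range(n_traj):
        # base point chosen so the +delta step stays inside [0, 1]
        x = [rng.random() * (1 - delta) for _ in range(dim)]
        base = func(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs((func(xp) - base) / delta)
    return [m / n_traj for m in mu_star]

scores = morris_screen(toy_model, dim=3, n_traj=50)
print([round(s, 2) for s in scores])   # x0 dominates, x1 is negligible
```

Parameters with small mu-star scores can be frozen before the more expensive quantitative analysis of step (3).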

  7. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analysis on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both concrete composition and clay mineralogical assemblies, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of concrete phases, (2) the type of clayey materials and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites).
Both

  8. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  9. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. With the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical, repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  10. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. With the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical, repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  11. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed
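Tools such as GRESS and EXAP automate derivative calculation at the source-code level; the same principle underlies modern automatic differentiation. A minimal forward-mode sketch using dual numbers (purely illustrative, not the ORNL implementation) shows how a derivative can be propagated alongside the value through ordinary arithmetic:

```python
class Dual:
    """Dual number (value, derivative) for forward-mode automatic differentiation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u * v)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Derivative of f at x, exact to floating point, no finite differencing."""
    return f(Dual(x, 1.0)).der

# sensitivity of a model response y = 3*x^2 + 2*x to its input x:
# analytically dy/dx = 6x + 2, so 14 at x = 2
f = lambda x: 3 * x * x + 2 * x
print(derivative(f, 2.0))   # prints 14.0
```

Source-transformation compilers like GRESS apply the same chain-rule bookkeeping to entire FORTRAN codes automatically.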

  12. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  13. Sensitization trajectories in childhood revealed by using a cluster analysis

    DEFF Research Database (Denmark)

    Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik

    2017-01-01

    BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent...

  14. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long runtimes and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
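The variance-based step can be sketched with the pick-and-freeze (Saltelli) estimator in plain NumPy. The hydrological model is replaced here by a hypothetical linear stand-in whose first-order Sobol indices are known analytically (a_i² / Σ a_j² for i.i.d. inputs), so the estimates can be checked:

```python
import numpy as np

def model(x):
    # hypothetical stand-in for the hydrological model; with i.i.d.
    # inputs, the analytic first-order indices are a_i^2 / sum(a_j^2)
    a = np.array([3.0, 1.0, 0.5])
    return x @ a

rng = np.random.default_rng(42)
n, d = 200_000, 3
A = rng.uniform(-1.0, 1.0, (n, d))   # two independent sample matrices
B = rng.uniform(-1.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

S1 = np.empty(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]               # "freeze" all inputs except input i
    yAB = model(AB)
    # Saltelli (2010) estimator of the first-order Sobol index
    S1[i] = np.mean(yB * (yAB - yA)) / var_y

print(S1.round(3))   # analytic values: 0.878, 0.098, 0.024
```

With a real model, each extra index costs one additional matrix of model runs, which is exactly the expense the RSM meta-model in the abstract is designed to avoid.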

  15. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for the motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of the small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. This significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.

  16. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case that the failure repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO) which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis
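The emulator idea can be sketched in plain NumPy: fit a Gaussian-process (RBF-kernel) interpolant to a small number of runs of an expensive function, then do the Monte Carlo work on the cheap surrogate. The toy "availability" function and the kernel length-scale are assumptions for illustration, not the BACCO implementation:

```python
import numpy as np

def rbf(X1, X2, ls=0.25):
    # squared-exponential kernel between two sets of points
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def expensive_model(x):
    # hypothetical stand-in for the availability computation
    return np.sin(3.0 * x[:, 0]) + 0.3 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, (100, 2))   # only 100 "real" model runs
y_train = expensive_model(X_train)

# GP regression: solve K alpha = y, with a small nugget for stability
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def emulator(X):
    return rbf(X, X_train) @ alpha

# tens of thousands of emulator evaluations now cost almost nothing,
# e.g. for a crude Monte Carlo estimate of the output variance
X_mc = rng.uniform(0.0, 1.0, (50_000, 2))
print("emulated output variance:", emulator(X_mc).var().round(3))
```

Sobol-type indices can then be estimated from emulator runs alone, which is why the approach needs far fewer true model evaluations than sampling-based sensitivity analysis.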

  17. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters are often happened in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples suggest the effectiveness and validation of our analytic method in the analysis of general models. A practical application of the method is also proposed to the uncertainty and sensitivity analysis of a deterministic HIV model.
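For a linear model the effect of input correlation on output uncertainty has a closed form, Var(aX + bY) = a²σ_X² + b²σ_Y² + 2abρσ_Xσ_Y, so the importance of the correlation term can be seen directly. A quick Monte Carlo check (with arbitrary illustrative coefficients) confirms it:

```python
import numpy as np

a, b = 2.0, 1.0
sx, sy, rho = 1.0, 0.5, 0.8        # input std devs and correlation

# analytic output variance without and with the correlation term
var_indep = a**2 * sx**2 + b**2 * sy**2
var_corr = var_indep + 2 * a * b * rho * sx * sy

# Monte Carlo check: sample correlated normals via a Cholesky factor
rng = np.random.default_rng(7)
cov = np.array([[sx**2,        rho * sx * sy],
                [rho * sx * sy, sy**2       ]])
L = np.linalg.cholesky(cov)
xy = rng.standard_normal((500_000, 2)) @ L.T   # rows have covariance cov
y = a * xy[:, 0] + b * xy[:, 1]

print(var_indep, var_corr, round(y.var(), 2))  # 4.25, 5.85, ~5.85
```

Treating these inputs as independent would understate the output variance by the 2abρσ_Xσ_Y term, which is the kind of error the analytic method in the abstract is designed to expose.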

  18. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying relative parameter influence in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.

  19. Sensitivity Analysis Applied in Design of Low Energy Office Building

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik

    2008-01-01

    satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...

  20. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind

    2007-01-01

    satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...

  1. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  2. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
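Why the missingness mechanism matters can be seen in a small simulation: under MCAR the complete-case mean is unbiased, but when the probability of being missing depends on the unobserved value itself (the non-ignorable case probed by the sensitivity analysis above), it is not. The logistic missingness rule below is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(10.0, 2.0, 100_000)       # complete data, true mean = 10

# MCAR: every value equally likely (30%) to be missing
mcar_observed = y[rng.random(y.size) > 0.3]

# Non-ignorable: larger values more likely to be missing,
# i.e. missingness depends on y itself
p_miss = 1.0 / (1.0 + np.exp(-(y - 10.0)))   # logistic in y
nmar_observed = y[rng.random(y.size) > p_miss]

print(round(mcar_observed.mean(), 2))    # close to the true mean of 10
print(round(nmar_observed.mean(), 2))    # pulled well below 10
```

No diagnostic computed from the observed data alone can distinguish the two mechanisms, which is why perturbation-based sensitivity analysis of the missingness assumption is needed.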

  3. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

    Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037

  4. Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...

    African Journals Online (AJOL)

    This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...

  5. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
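The contrast drawn above can be made concrete for a simple product model f = xy: the GUM law of propagation of uncertainties, u_c² = Σ (∂f/∂x_i)² u_i², agrees with a Monte Carlo evaluation when the relative uncertainties are small. The numerical values are illustrative:

```python
import numpy as np

# measurand f = x * y with independent input estimates and uncertainties
x0, ux = 5.0, 0.1
y0, uy = 2.0, 0.05

# GUM law of propagation: sensitivity coefficients are the partials,
# df/dx = y0 and df/dy = x0 at the best estimates
uc = np.sqrt((y0 * ux) ** 2 + (x0 * uy) ** 2)

# Monte Carlo check of the combined standard uncertainty
rng = np.random.default_rng(0)
n = 1_000_000
f = rng.normal(x0, ux, n) * rng.normal(y0, uy, n)

print(round(float(uc), 4), round(float(f.std()), 4))  # both ~0.320
```

For this nearly linear case the two routes coincide, as the abstract notes; the added value of variance-based sensitivity analysis appears only when the model is markedly nonlinear around the best estimates.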

  6. Sensitivity analysis of the Ohio phosphorus risk index

    Science.gov (United States)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  7. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...

  8. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    Science.gov (United States)

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  9. Weighting-Based Sensitivity Analysis in Causal Mediation Studies

    Science.gov (United States)

    Hong, Guanglei; Qin, Xu; Yang, Fan

    2018-01-01

    Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…

  10. Sensitivity analysis of railpad parameters on vertical railway track dynamics

    NARCIS (Netherlands)

    Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.

    2016-01-01

    This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad

  11. Methods for global sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  12. Sensitivity analysis on ultimate strength of aluminium stiffened panels

    DEFF Research Database (Denmark)

    Rigo, P.; Sarghiuta, R.; Estefen, S.

    2003-01-01

    This paper presents the results of an extensive sensitivity analysis carried out by the Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees on ul...

  13. Sensitivity and specificity of coherence and phase synchronization analysis

    International Nuclear Information System (INIS)

    Winterhalder, Matthias; Schelter, Bjoern; Kurths, Juergen; Schulze-Bonhage, Andreas; Timmer, Jens

    2006-01-01

    In this Letter, we show that coherence and phase synchronization analysis are sensitive but not specific in detecting the correct class of underlying dynamics. We propose procedures to increase specificity and demonstrate the power of the approach by application to paradigmatic dynamic model systems

  14. Sensitivity Analysis of Structures by Virtual Distortion Method

    DEFF Research Database (Denmark)

    Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard

    1991-01-01

    are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of the sensitivity derivatives. This method was originally applied to structural remodelling and collapse analysis, see...

  15. Design tradeoff studies and sensitivity analysis. Appendix B

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-25

    The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)

  16. A simple, tunable, and highly sensitive radio-frequency sensor.

    Science.gov (United States)

    Cui, Yan; Sun, Jiwei; He, Yuxi; Wang, Zheng; Wang, Pingshan

    2013-08-05

    We report a radio frequency (RF) sensor that exploits tunable attenuators and phase shifters to achieve high sensitivity and broadband frequency tunability. Three frequency bands are combined to enable sensor operations from ∼20 MHz to ∼38 GHz. The effective quality factor (Q_eff) of the sensor is as high as ∼3.8 × 10^6 with 200 μl of water samples. We also demonstrate the measurement of 2-propanol-water-solution permittivity at the 0.01 mole concentration level from ∼1 GHz to ∼10 GHz. Methanol-water solution and de-ionized water are used to calibrate the RF sensor for the quantitative measurements.

  17. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy)]; Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States)]; Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands)]; Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States)]; Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany)]; Messi, R. [Physica Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy)]; Moricciani, D. [INFN-Sez. 'Roma Tor Vergata', Rome (Italy)]; Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States)]; Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States)]; and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  19. Wide bandwidth transimpedance amplifier for extremely high sensitivity continuous measurements.

    Science.gov (United States)

    Ferrari, Giorgio; Sampietro, Marco

    2007-09-01

    This article presents a wide-bandwidth transimpedance amplifier based on the series connection of an integrator and a differentiator stage, with an additional feedback loop that discharges the standing current from the device under test (DUT). Compared with switched-discharge configurations, this ensures an unlimited measuring time while maintaining large signal amplification over the full bandwidth. The amplifier shows a flat response from 0.6 Hz to 1.4 MHz, the capability to operate with leakage currents from the DUT as high as tens of nanoamperes, and a rail-to-rail dynamic range for sinusoidal current signals independent of the DUT leakage current. A monitor output of the stationary current is also available to track slow experimental drifts. The circuit is ideal for noise spectral and impedance measurements of nanodevices and biomolecules in the presence of a physiological medium, and in all cases where high-sensitivity current measurements are required, such as scanning probe microscopy systems.
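    The integrator-differentiator cascade described above can be sanity-checked with a few lines of algebra: an ideal integrator (transfer 1/sC_F) followed by an ideal differentiator (transfer sR_DC_D) has a transimpedance of R_DC_D/C_F at every frequency, which is why the topology gives a flat band. A minimal numeric sketch (the component values are invented for illustration, not taken from the article):

```python
import numpy as np

# Hypothetical component values (not from the paper).
Cf = 1e-12    # integrator feedback capacitance, 1 pF
Rd = 100e3    # differentiator feedback resistor
Cd = 1e-9     # differentiator input capacitance

f = np.logspace(1, 6, 200)               # 10 Hz .. 1 MHz
s = 2j * np.pi * f
H = (1.0 / (s * Cf)) * (s * Rd * Cd)     # ideal integrator x differentiator

gain = np.abs(H)
# The s in the numerator cancels the s in the denominator: flat gain Rd*Cd/Cf.
print("mid-band transimpedance:", Rd * Cd / Cf, "V/A")
```

In the real circuit the band edges (0.6 Hz and 1.4 MHz in the paper) come from the finite gain-bandwidth of the stages and the discharge loop, which the ideal model above deliberately omits.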

  20. Design of a charge sensitive preamplifier on high resistivity silicon

    International Nuclear Information System (INIS)

    Radeka, V.; Rehak, P.; Rescia, S.; Gatti, E.; Longoni, A.; Sampietro, M.; Holl, P.; Strueder, L.; Kemmer, J.

    1987-01-01

    A low noise, fast charge sensitive preamplifier was designed on high resistivity, detector grade silicon. It is built at the surface of a fully depleted region of n-type silicon. This allows the preamplifier to be placed very close to a detector anode. The preamplifier uses the classical input cascode configuration with a capacitor and a high value resistor in the feedback loop. The output stage of the preamplifier can drive a load up to 20 pF. The power dissipation of the preamplifier is 13 mW. The amplifying elements are "Single Sided Gate JFETs" developed especially for this application. Preamplifiers connected to a low capacitance anode of a drift type detector should achieve a rise time of 20 ns and an equivalent noise charge (ENC), after suitable shaping, of less than 50 electrons. This performance translates to a position resolution better than 3 μm for silicon drift detectors. 6 refs., 9 figs

  1. High-Sensitivity Troponin: A Clinical Blood Biomarker for Staging Cardiomyopathy in Fabry Disease

    OpenAIRE

    2016-01-01

    Background High-sensitivity troponin (hs-TNT), a biomarker of myocardial damage, might be useful for assessing fibrosis in Fabry cardiomyopathy. We performed a prospective analysis of hs-TNT as a biomarker for myocardial changes in Fabry patients and a retrospective longitudinal follow-up study to assess longitudinal hs-TNT changes relative to fibrosis and cardiomyopathy progression. Methods and Results For the prospective analysis, hs-TNT from 75 consecutive patients with genetically confirm...

  2. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and affect the outcome only through its effect on the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are each bounded in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
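    The Anderson-Rubin idea underlying this approach can be illustrated in a few lines: invert a test of "Y - beta0*D is unrelated to Z" over a grid of beta0, keeping every beta0 that is not rejected. The sketch below uses simulated data and a textbook AR statistic; it is not the authors' code and omits their sensitivity-parameter extension:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 5000, 1.5

# Toy data: Z is a valid instrument, U an unmeasured confounder
# affecting both treatment D and outcome Y.
Z = rng.normal(size=n)
U = rng.normal(size=n)
D = 0.8 * Z + U + rng.normal(size=n)
Y = beta_true * D + U + rng.normal(size=n)

def ar_stat(beta0):
    """Anderson-Rubin statistic: is Z correlated with Y - beta0*D?
    Approximately chi^2 with 1 d.f. under the null."""
    e = Y - beta0 * D
    return (Z @ e) ** 2 / ((Z @ Z) * np.var(e))

# The AR 95% confidence set = all beta0 not rejected at the 5% level.
grid = np.linspace(0.5, 2.5, 401)
mask = np.array([ar_stat(b) < 3.84 for b in grid])   # chi^2_1 95% quantile
ci = grid[mask]
print("AR 95% interval ~ [", ci.min(), ",", ci.max(), "]")
```

Because the statistic only requires the reduced-form residual, the resulting interval stays valid however weak the instrument is, which is the property the paper's sensitivity analysis builds on.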

  3. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili, E-mail: wangnsrl@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029 (China); Zhang, Kai [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Zhu, Peiping; Wu, Ziyu, E-mail: wuzy@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China and Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered by its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel, fast and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison of their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.
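    The error-propagation style of analysis can be illustrated on the phase-stepping side alone. For an M-step stepping curve with N total counts, visibility V and Poisson noise, first-order propagation gives the textbook result sigma_phi = sqrt(2/N)/V, which a Monte Carlo check reproduces. This is a generic sketch, not the authors' derivation for the RP method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase-stepping model (illustrative parameters): intensity at step k of M is
# I_k = (N/M) * (1 + V*cos(2*pi*k/M + phi)), with Poisson photon noise.
M, N, V, phi = 8, 1e5, 0.3, 0.7
k = np.arange(M)
mean_curve = (N / M) * (1 + V * np.cos(2 * np.pi * k / M + phi))

def retrieve_phi(counts):
    # Phase of the first Fourier component of the stepping curve.
    return -np.angle(np.sum(counts * np.exp(2j * np.pi * k / M)))

# Monte Carlo propagation of the Poisson noise through the retrieval
est = np.array([retrieve_phi(rng.poisson(mean_curve)) for _ in range(4000)])
sigma_mc = est.std()

# First-order error propagation predicts sigma_phi = sqrt(2/N) / V
sigma_prop = np.sqrt(2.0 / N) / V
print("MC:", sigma_mc, " propagation:", sigma_prop)
```

The 1/V dependence is why the paper's conclusions hinge on the fringe visibility: at low V the same photon budget buys far less phase sensitivity.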

  4. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili; Zhang, Kai; Zhu, Peiping; Wu, Ziyu

    2015-01-01

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered by its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel, fast and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison of their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations

  5. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large-break loss-of-coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods - Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power, etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data points of one period of the signal) when calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which serves as a figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity rather than for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of each parameter variation to the results: they show when the input parameters are influential and how large that influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
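    The figure of merit described above is compact to state in code: form the discrepancy between the two signals, symmetrize (mirror) both records to suppress the edge effect, take FFTs, and ratio the summed spectral amplitudes. The sketch below is a minimal reading of the FFTBM average amplitude with invented signals, not the BEMUSE implementation:

```python
import numpy as np

def average_amplitude(exp_sig, calc_sig, mirror=True):
    """FFTBM figure of merit: AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.
    With mirror=True (FFTBM-SM) each record is concatenated with its
    reverse, so the first and last points match and the spurious 'edge'
    contribution to the spectrum disappears."""
    d = np.asarray(calc_sig) - np.asarray(exp_sig)
    e = np.asarray(exp_sig)
    if mirror:
        d = np.concatenate([d, d[::-1]])
        e = np.concatenate([e, e[::-1]])
    return np.abs(np.fft.rfft(d)).sum() / np.abs(np.fft.rfft(e)).sum()

# Toy traces (not BEMUSE data): a reference run and a sensitivity run
# that deviates mildly in decay rate and frequency.
t = np.linspace(0, 10, 512)
exp_sig = np.exp(-0.30 * t) * np.cos(2.00 * t) + 2.0
calc_sig = np.exp(-0.32 * t) * np.cos(2.05 * t) + 2.0

aa = average_amplitude(exp_sig, calc_sig)
print("average amplitude:", aa)   # larger AA => more influential variation
```

Ranking the sensitivity runs by this AA value is what lets the analyst order the varied input parameters by influence.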

  6. High sensitivity pyrogen testing in water and dialysis solutions.

    Science.gov (United States)

    Daneshian, Mardas; Wendel, Albrecht; Hartung, Thomas; von Aulock, Sonja

    2008-07-20

    The dialysis patient is confronted with hundreds of litres of dialysis solution per week, which pass the natural protective barriers of the body and are brought into contact with the tissue directly in the case of peritoneal dialysis or indirectly in the case of renal dialysis (hemodialysis). The components can be tested for living organisms or for dead pyrogenic (fever-inducing) contaminations. The former are usually detected by cultivation and the latter by the endotoxin-specific Limulus Amoebocyte Lysate assay (LAL). However, the LAL assay does not reflect the response of the human immune system to the wide variety of possible pyrogenic contaminations in dialysis fluids. Furthermore, the test is limited in its sensitivity to detect the extremely low concentrations of pyrogens which, in their sum, result in chronic pathologies in dialysis patients. The In vitro Pyrogen Test (IPT) employs human whole blood to detect the spectrum of pyrogens to which humans respond by measuring the release of the endogenous fever mediator interleukin-1beta. Spike recovery checks exclude interference. The test has been validated in an international study for pyrogen detection in injectable solutions. In this study we adapted the IPT to the testing of dialysis solutions. Preincubation of 50 ml spiked samples with albumin-coated microspheres enhanced the sensitivity of the assay to detect contaminations down to 0.1 pg/ml LPS or 0.001 EU/ml in water or saline and allowed pyrogen detection in dialysis concentrates or final working solutions. This method offers high-sensitivity detection of human-relevant pyrogens in dialysis solutions and components.

  7. Sensitivity/uncertainty analysis of a borehole scenario comparing Latin Hypercube Sampling and deterministic sensitivity approaches

    International Nuclear Information System (INIS)

    Harper, W.V.; Gupta, S.K.

    1983-10-01

    A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-parameter capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
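    A Latin Hypercube design for an eight-parameter model like the one described can be generated directly with scipy: each parameter range is cut into n strata and every stratum is sampled exactly once per column. The bounds below are the classic borehole test-function ranges, used here purely for illustration; they are not necessarily those of the report:

```python
import numpy as np
from scipy.stats import qmc

# 100-run Latin Hypercube design in 8 dimensions.
sampler = qmc.LatinHypercube(d=8, seed=42)
unit = sampler.random(n=100)                 # points in [0, 1)^8

# Illustrative parameter bounds (standard borehole test-function ranges).
lo = np.array([0.05, 100.0, 63070.0, 990.0, 63.1, 700.0, 1120.0, 9855.0])
hi = np.array([0.15, 50000.0, 115600.0, 1110.0, 116.0, 820.0, 1680.0, 12045.0])
X = qmc.scale(unit, lo, hi)                  # map to physical ranges

# Each column is stratified: one sample in each of the 100 strata.
print(X.shape)
```

The stratification is what lets a small number of code runs cover every parameter's marginal distribution, which is why LHS "should work well for codes with a moderate number of parameters."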

  8. Sensitivity and uncertainty analysis of NET/ITER shielding blankets

    International Nuclear Information System (INIS)

    Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.

    1990-09-01

    Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs

  9. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

    Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k_eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k_eff discrepancies between libraries were decomposed to specify the source of the differences in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)

  10. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. It is based on a measure of importance that gives the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. Particular emphasis is given to the development of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology that is flexible, accurate, and informative, and that can be achieved at reasonable computational cost
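    Sobol' first-order and total-effect indices can be estimated with plain Monte Carlo from paired sample matrices. The sketch below applies the standard Saltelli/Jansen estimators to the Ishigami function, a common analytic benchmark rather than one of the models analyzed in the paper; its analytic indices are S = (0.314, 0.442, 0) and ST = (0.558, 0.442, 0.244), so x3 is a pure-interaction parameter that only the total-effect index catches:

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))

rng = np.random.default_rng(3)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))     # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # A with column i taken from B
    fABi = ishigami(ABi)
    S.append(np.mean(fB * (fABi - fA)) / var)       # first-order (Saltelli)
    ST.append(0.5 * np.mean((fA - fABi)**2) / var)  # total effect (Jansen)

print("S  =", np.round(S, 3))
print("ST =", np.round(ST, 3))
```

The gap ST_i - S_i measures exactly the "synergetic terms" the abstract mentions: for the Ishigami function it is zero for x2 but large for x1 and x3.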

  11. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  12. Mass Spectrometry-based Assay for High Throughput and High Sensitivity Biomarker Verification

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Xuejiang; Tang, Keqi

    2017-06-14

    Searching for disease-specific biomarkers has become a major undertaking in the biomedical research field, as the effective diagnosis, prognosis and treatment of many complex human diseases are largely determined by the availability and the quality of the biomarkers. A successful biomarker, as an indicator of a specific biological or pathological process, is usually selected from a large group of candidates by a strict verification and validation process. To be clinically useful, the validated biomarkers must be detectable and quantifiable by the selected testing techniques in their related tissues or body fluids. Because blood is easily accessible, protein biomarkers would ideally be identified in blood plasma or serum. However, most disease-related protein biomarkers in blood exist at very low concentrations (<1 ng/mL) and are "masked" by many nonsignificant species at concentrations orders of magnitude higher. The extreme requirements on measurement sensitivity, dynamic range and specificity make method development extremely challenging. Current clinical protein biomarker measurement relies primarily on antibody-based immunoassays, such as ELISA. Although the technique is sensitive and highly specific, the development of a high-quality protein antibody is both expensive and time consuming. The limited capability of assay multiplexing also makes the measurement extremely low throughput, rendering it impractical when hundreds to thousands of potential biomarkers need to be quantitatively measured across multiple samples. Mass spectrometry (MS)-based assays have recently been shown to be a viable alternative for high-throughput, quantitative candidate protein biomarker verification. Among them, the triple quadrupole MS-based assay is the most promising. When coupled with liquid chromatography (LC) separation and an electrospray ionization (ESI) source, a triple quadrupole mass spectrometer operating in a special selected reaction monitoring (SRM) mode

  13. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
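    The core move of a prior sensitivity analysis, refitting the same model under several default-style priors and comparing the posteriors, is easy to see in a conjugate toy model (a deliberate simplification; BSEM itself requires MCMC and many parameters):

```python
import numpy as np

# Posterior mean of a normal mean theta under N(0, tau^2) priors with
# known data variance sigma^2. With few observations the prior choice
# matters; with many observations the posteriors agree.
rng = np.random.default_rng(7)
sigma, theta_true = 1.0, 1.0

def posterior_mean(y, tau):
    prec = len(y) / sigma**2 + 1.0 / tau**2      # posterior precision
    return (y.sum() / sigma**2) / prec

spreads = {}
for n in (10, 1000):
    y = rng.normal(theta_true, sigma, n)
    means = [posterior_mean(y, tau) for tau in (0.1, 1.0, 10.0)]
    spreads[n] = max(means) - min(means)         # prior sensitivity measure
    print(n, [round(m, 3) for m in means])
```

The shrinking spread with sample size mirrors the paper's simulation finding that the default priors "may perform very differently, especially with small samples."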

  14. Luminescent Lanthanide Reporters for High-Sensitivity Novel Bioassays

    Energy Technology Data Exchange (ETDEWEB)

    Anstey, Mitchell R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Fruetel, Julia A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Foster, Michael E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Hayden, Carl C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Buckley, Heather L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Arnold, John [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2013-09-01

    Biological imaging and assay technologies rely on fluorescent organic dyes as reporters for a number of interesting targets and processes. However, limitations of organic dyes such as small Stokes shifts, spectral overlap of emission signals with native biological fluorescence background, and photobleaching have all inhibited the development of highly sensitive assays. To overcome the limitations of organic dyes for bioassays, we propose to develop lanthanide-based luminescent dyes and demonstrate them for molecular reporting applications. This relatively new family of dyes was selected for its attractive spectral and chemical properties. Luminescence is imparted by the lanthanide atom and allows for relatively simple chemical structures that can be tailored to the application. The photophysical properties offer unique features such as narrow and non-overlapping emission bands, long luminescent lifetimes, and long-wavelength emission, which enable significant sensitivity improvements over organic dyes through spectral and temporal gating of the luminescent signal. Growth in this field has been hindered by the advanced synthetic chemistry techniques required and the need for access to experts in biological assay development. Our strategy for the development of a new lanthanide-based fluorescent reporter system is based on chelation of the lanthanide metal center using absorbing chromophores. The first approach involves "Click" chemistry to develop 3-fold symmetric chelators, and the other involves the use of a new class of tetrapyrrole ligands called corroles. This two-pronged approach is geared towards the optimization of chromophores to enhance light output.

  15. New application of superconductors: High sensitivity cryogenic light detectors

    Energy Technology Data Exchange (ETDEWEB)

    Cardani, L., E-mail: laura.cardani@roma1.infn.it [Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); Physics Department, Princeton University, Washington Road, 08544 Princeton, NJ (United States); Bellini, F.; Casali, N. [Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); INFN – Sezione di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); Castellano, M.G. [Istituto di Fotonica e Nanotecnologie – CNR, Via Cineto Romano 42, 00156 Roma (Italy); Colantoni, I.; Coppolecchia, A. [Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); Cosmelli, C.; Cruciani, A. [Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); INFN – Sezione di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); D'Addabbo, A. [INFN – Laboratori Nazionali del Gran Sasso, Assergi (L'Aquila) 67010 (Italy); Di Domizio, S. [INFN – Sezione di Genova, Via Dodecaneso 33, 16146 Genova (Italy); Dipartimento di Fisica, Università degli Studi di Genova, Via Dodecaneso 33, 16146 Genova (Italy); Martinez, M. [Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); INFN – Sezione di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); Laboratorio de Fisica Nuclear y Astroparticulas, Universidad de Zaragoza, Zaragoza 50009 (Spain); Tomei, C. [INFN – Sezione di Roma, Piazzale Aldo Moro 2, 00185 Roma (Italy); and others

    2017-02-11

    In this paper we describe the current status of the CALDER project, which is developing ultra-sensitive light detectors based on superconductors for cryogenic applications. When an AC current is applied to a superconductor, the Cooper pairs oscillate and acquire kinetic inductance, which can be measured by inserting the superconductor in an LC circuit with a high merit factor. Interactions in the superconductor can break the Cooper pairs, causing sizable variations in the kinetic inductance and, thus, in the response of the LC circuit. Continuous monitoring of the amplitude and frequency modulation allows the incident energy to be reconstructed with excellent sensitivity. This concept is at the basis of Kinetic Inductance Detectors (KIDs), which are characterized by a natural aptitude for multiplexed read-out (several sensors can be tuned to different resonant frequencies and coupled to the same line), a resolution of a few eV, stable behavior over a wide temperature range, and ease of fabrication. We present the results obtained by the CALDER collaboration with 2×2 cm{sup 2} substrates sampled by 1 or 4 aluminum KIDs. We show that the performance of the first prototypes is already competitive with that of other commonly used light detectors, and we discuss the strategies for further improvement.
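    The detection principle can be put in numbers: a pair-breaking event increases the kinetic inductance L_k and pulls the LC resonance f0 = 1/(2*pi*sqrt((L_g + L_k)C)) downward, by df/f0 = -(alpha/2) dL_k/L_k with alpha = L_k/(L_g + L_k). The component values below are invented for illustration and are not CALDER's:

```python
import numpy as np

# Illustrative lumped-element KID parameters (not from the paper).
L_g = 1.0e-9      # geometric inductance [H]
L_k = 0.5e-9      # kinetic inductance  [H]
C = 0.4e-12       # capacitance         [F]

def f0(L_k):
    return 1.0 / (2 * np.pi * np.sqrt((L_g + L_k) * C))

# Small kinetic-inductance change from a pair-breaking event
dLk = 1e-4 * L_k
alpha = L_k / (L_g + L_k)
exact = (f0(L_k + dLk) - f0(L_k)) / f0(L_k)      # exact fractional shift
approx = -0.5 * alpha * dLk / L_k                # first-order formula
print(f0(L_k) / 1e9, "GHz;", exact, "vs", approx)
```

Because each resonator answers only at its own f0, many such sensors can share one feedline, which is the multiplexing advantage the abstract points to.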

  16. New application of superconductors: High sensitivity cryogenic light detectors

    International Nuclear Information System (INIS)

    Cardani, L.; Bellini, F.; Casali, N.; Castellano, M.G.; Colantoni, I.; Coppolecchia, A.; Cosmelli, C.; Cruciani, A.; D'Addabbo, A.; Di Domizio, S.; Martinez, M.; Tomei, C.

    2017-01-01

    In this paper we describe the current status of the CALDER project, which is developing ultra-sensitive light detectors based on superconductors for cryogenic applications. When an AC current is applied to a superconductor, the Cooper pairs oscillate and acquire kinetic inductance, which can be measured by inserting the superconductor in an LC circuit with a high merit factor. Interactions in the superconductor can break the Cooper pairs, causing sizable variations in the kinetic inductance and, thus, in the response of the LC circuit. Continuous monitoring of the amplitude and frequency modulation allows the incident energy to be reconstructed with excellent sensitivity. This concept is at the basis of Kinetic Inductance Detectors (KIDs), which are characterized by a natural aptitude for multiplexed read-out (several sensors can be tuned to different resonant frequencies and coupled to the same line), a resolution of a few eV, stable behavior over a wide temperature range, and ease of fabrication. We present the results obtained by the CALDER collaboration with 2×2 cm² substrates sampled by 1 or 4 aluminum KIDs. We show that the performance of the first prototypes is already competitive with that of other commonly used light detectors, and we discuss the strategies for further improvement.

  17. Anisotropic analysis for seismic sensitivity of groundwater monitoring wells

    Science.gov (United States)

    Pan, Y.; Hsu, K.

    2011-12-01

    Taiwan is located at the boundary of the Eurasian Plate and the Philippine Sea Plate. Plate movement causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes in groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The change in groundwater may appear as oscillations or as step changes. The former are caused by seismic waves; the latter are caused by the volumetric strain and reflect the strain status. Since setting up a groundwater monitoring well is easier and cheaper than installing a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of the seismic sensitivity of a groundwater monitoring well and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyze the anisotropy of the seismic sensitivity, and GIS is used to map the sensitive area of the existing groundwater monitoring well.

  18. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well-established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement to calculate the very large number of partial derivatives needed to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system is described and its use for improving performance assessments and predictive simulations is discussed. 8 refs., 1 fig
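    The economy of the adjoint method, one extra linear solve giving the sensitivity of a response to every parameter at once, can be shown on a small linear model. This illustrates the general technique, not ADGEN itself: for A(p)u = b and response J = c^T u, a single adjoint solve A^T lam = c yields dJ/dp_k = -lam^T (dA/dp_k) u for all k.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
base = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned model matrix
b = rng.normal(size=n)
c = rng.normal(size=n)

def A(p):
    # p scales the diagonal, so dA/dp_k = base[k,k] * e_k e_k^T
    M = base.copy()
    M[np.arange(n), np.arange(n)] *= p
    return M

p = np.ones(n)
u = np.linalg.solve(A(p), b)         # one forward solve
lam = np.linalg.solve(A(p).T, c)     # one adjoint solve
grad_adj = -lam * np.diag(base) * u  # dJ/dp_k for every k at once

# Cross-check against forward finite differences (n extra solves)
eps = 1e-6
grad_fd = np.empty(n)
for k in range(n):
    pk = p.copy(); pk[k] += eps
    grad_fd[k] = (c @ np.linalg.solve(A(pk), b) - c @ u) / eps

print(np.max(np.abs(grad_adj - grad_fd)))
```

The finite-difference check costs one solve per parameter; the adjoint result costs one solve total, which is the efficiency argument for codes with thousands of grid-block parameters.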

  19. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

    This paper presents a general sensitivity equation method (SEM) for time-dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed-form solution obtained by the method of manufactured solutions. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional von Kármán street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)
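The grid-convergence verification mentioned above boils down to checking the observed order of accuracy, p = ln(e_coarse/e_fine)/ln(r) for a refinement ratio r. A minimal sketch on a model ODE (not the paper's flow solver; the scheme and numbers are illustrative):

```python
import math

# Sketch of a grid-convergence check in the spirit of the verification above
# (illustrative; the paper verifies a flow solver, here a model ODE x' = -x).
def euler_error(n_steps, t_end=1.0):
    """Error of explicit Euler for x' = -x, x(0) = 1, measured at t = t_end."""
    h = t_end / n_steps
    x = 1.0
    for _ in range(n_steps):
        x += h * (-x)
    return abs(x - math.exp(-t_end))

e_coarse = euler_error(100)
e_fine = euler_error(200)     # grid refined by r = 2
observed_order = math.log(e_coarse / e_fine) / math.log(2.0)
print(observed_order)         # close to 1 for a first-order scheme
```

With a manufactured solution the same recipe applies: pick an exact solution, derive the matching source term, and confirm that the observed order matches the theoretical one as the grid is refined.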

  20. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community

  1. High-Sensitivity AGN Polarimetry at Sub-Millimeter Wavelengths

    Directory of Open Access Journals (Sweden)

    Ivan Martí-Vidal

    2017-10-01

    Full Text Available The innermost regions of radio loud Active Galactic Nuclei (AGN jets are heavily affected by synchrotron self-absorption, due to the strong magnetic fields and high particle densities in these extreme zones. The only way to overcome this absorption is to observe at sub-millimeter wavelengths, although polarimetric observations at such frequencies have so far been limited by sensitivity and calibration accuracy. However, new generation instruments such as the Atacama Large mm/sub-mm Array (ALMA overcome these limitations and are starting to deliver revolutionary results in the observational studies of AGN polarimetry. Here we present an overview of our state-of-the-art interferometric mm/sub-mm polarization observations of AGN jets with ALMA (in particular, the gravitationally-lensed sources PKS 1830−211 and B0218+359, which allow us to probe the magneto-ionic conditions at the regions closest to the central black holes.

  2. High resolution, position sensitive detector for energetic particle beams

    International Nuclear Information System (INIS)

    Marsh, E.P.; Strathman, M.D.; Reed, D.A.; Odom, R.W.; Morse, D.H.; Pontau, A.E.

    1993-01-01

    The performance and design of an imaging, position-sensitive particle beam detector are presented. The detector is minimally invasive, operates over a wide dynamic range (>10^10), and exhibits high spatial resolution. The secondary electrons produced when a particle beam passes through a thin foil are imaged using stigmatic ion optics onto a two-dimensional imaging detector. Due to the low scattering cross section of the 6 nm carbon foil, the detector is a minimal perturbation on the primary beam. A prototype detector with an image resolution of approximately 5 μm for a field of view of 1 mm has been reported. A higher resolution detector for imaging small beams (<50 μm) with an image resolution of better than 0.5 μm has since been developed and its design is presented. (orig.)

  3. High resolution, position sensitive detector for energetic particle beams

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, E P [Charles Evans and Associates, Redwood City, CA (United States); Strathman, M D [Charles Evans and Associates, Redwood City, CA (United States); Reed, D A [Charles Evans and Associates, Redwood City, CA (United States); Odom, R W [Charles Evans and Associates, Redwood City, CA (United States); Morse, D H [Sandia National Labs., Livermore, CA (United States); Pontau, A E [Sandia National Labs., Livermore, CA (United States)

    1993-05-01

    The performance and design of an imaging, position-sensitive particle beam detector are presented. The detector is minimally invasive, operates over a wide dynamic range (>10^10), and exhibits high spatial resolution. The secondary electrons produced when a particle beam passes through a thin foil are imaged using stigmatic ion optics onto a two-dimensional imaging detector. Due to the low scattering cross section of the 6 nm carbon foil, the detector is a minimal perturbation on the primary beam. A prototype detector with an image resolution of approximately 5 μm for a field of view of 1 mm has been reported. A higher resolution detector for imaging small beams (<50 μm) with an image resolution of better than 0.5 μm has since been developed and its design is presented. (orig.)

  4. Highly Sensitive Filter Paper Substrate for SERS Trace Explosives Detection

    Directory of Open Access Journals (Sweden)

    Pedro M. Fierro-Mercado

    2012-01-01

    Full Text Available We report on a novel and extremely low-cost surface-enhanced Raman spectroscopy (SERS substrate fabricated depositing gold nanoparticles on common lab filter paper using thermal inkjet technology. The paper-based substrate combines all advantages of other plasmonic structures fabricated by more elaborate techniques with the dynamic flexibility given by the inherent nature of the paper for an efficient sample collection, robustness, and stability. We describe the fabrication, characterization, and SERS activity of our substrate using 2,4,6-trinitrotoluene, 2,4-dinitrotoluene, and 1,3,5-trinitrobenzene as analytes. The paper-based SERS substrates presented a high sensitivity and excellent reproducibility for analytes employed, demonstrating a direct application in forensic science and homeland security.

  5. High efficiency solid-state sensitized heterojunction photovoltaic device

    KAUST Repository

    Wang, Mingkui

    2010-06-01

    The high molar extinction coefficient heteroleptic ruthenium dye, NaRu(4,4′-bis(5-(hexylthio)thiophen-2-yl)-2,2′-bipyridine)(4-carboxylic acid-4′-carboxylate-2,2′-bipyridine)(NCS)₂, exhibits a certified 5% electric power conversion efficiency at AM 1.5 solar irradiation (100 mW cm⁻²) in a solid-state dye-sensitized solar cell using 2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-MeOTAD) as the organic hole-transporting material. This demonstration elucidates a class of photovoltaic devices with potential for low-cost power generation. © 2010 Elsevier Ltd. All rights reserved.

  6. High efficiency solid-state sensitized heterojunction photovoltaic device

    KAUST Repository

    Wang, Mingkui; Liu, Jingyuan; Cevey-Ha, Ngoc-Le; Moon, Soo-Jin; Liska, Paul; Humphry-Baker, Robin; Moser, Jacques-E.; Grätzel, Carole; Wang, Peng; Zakeeruddin, Shaik M.

    2010-01-01

    The high molar extinction coefficient heteroleptic ruthenium dye, NaRu(4,4′-bis(5-(hexylthio)thiophen-2-yl)-2,2′-bipyridine)(4-carboxylic acid-4′-carboxylate-2,2′-bipyridine)(NCS)₂, exhibits a certified 5% electric power conversion efficiency at AM 1.5 solar irradiation (100 mW cm⁻²) in a solid-state dye-sensitized solar cell using 2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-MeOTAD) as the organic hole-transporting material. This demonstration elucidates a class of photovoltaic devices with potential for low-cost power generation. © 2010 Elsevier Ltd. All rights reserved.

  7. Effective dose calculation in CT using high sensitivity TLDs

    International Nuclear Information System (INIS)

    Brady, Z.; Johnston, P.N.

    2010-01-01

    Full text: To determine the effective dose for common paediatric CT examinations using thermoluminescence dosimetry (TLD) measurements. High sensitivity TLD chips (LiF:Mg,Cu,P, TLD-100H, Thermo Fisher Scientific, Waltham, MA) were calibrated on a linac at an energy of 6 MV. A calibration was also performed on a superficial X-ray unit at a kilovoltage energy to validate the megavoltage calibration for the purpose of measuring doses in the diagnostic energy range. The dose variation across large organs was assessed and a methodology for TLD placement in a 10-year-old anthropomorphic phantom developed. Effective dose was calculated from the TLD-measured absorbed doses for typical CT examinations after correcting for the TLD energy response and taking into account differences in the mass energy absorption coefficients for different tissues and organs. Results: Using the new tissue weighting factors recommended in ICRP Publication 103, the effective dose for a CT brain examination on a 10-year-old was 1.6 millisieverts (mSv), 4.9 mSv for a CT chest examination and 4.7 mSv for a CT abdomen/pelvis examination. These values are lower for the CT brain examination, higher for the CT chest examination and approximately the same for the CT abdomen/pelvis examination when compared with effective doses calculated using ICRP Publication 60 tissue weighting factors. Conclusions: High sensitivity TLDs calibrated with a radiotherapy linac are useful for measuring dose in the diagnostic energy range and overcome limitations of output reproducibility and uniformity associated with traditional TLD calibration on CT scanners or beam quality matched diagnostic X-ray units.
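The effective dose reported above is the tissue-weighted sum E = Σ_T w_T H_T defined by the ICRP. A minimal sketch, with made-up organ doses and a subset of the ICRP Publication 103 weighting factors (not the study's data):

```python
# Sketch of the effective-dose combination rule E = sum_T w_T * H_T used above.
# The organ doses (mSv) are made-up illustrative numbers, not the study's data;
# the weighting factors are a subset of the ICRP Publication 103 set.
w_icrp103 = {
    "lung": 0.12, "stomach": 0.12, "colon": 0.12, "red_bone_marrow": 0.12,
    "breast": 0.12, "gonads": 0.08, "thyroid": 0.04, "liver": 0.04,
    "bladder": 0.04, "oesophagus": 0.04,
}

organ_dose_mSv = {        # hypothetical measured equivalent doses
    "lung": 8.0, "stomach": 3.0, "colon": 1.0, "red_bone_marrow": 4.0,
    "breast": 7.0, "gonads": 0.1, "thyroid": 5.0, "liver": 3.0,
    "bladder": 0.5, "oesophagus": 4.0,
}

effective_dose = sum(w_icrp103[t] * organ_dose_mSv[t] for t in w_icrp103)
print(round(effective_dose, 3))
```

The ICRP 60 versus ICRP 103 differences noted in the abstract come entirely from the weighting factors w_T, so repeating the sum with the older factors reproduces the comparison the authors make.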

  8. Sorption of redox-sensitive elements: critical analysis

    International Nuclear Information System (INIS)

    Strickert, R.G.

    1980-12-01

    The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc (IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.

  9. Sorption of redox-sensitive elements: critical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strickert, R.G.

    1980-12-01

    The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc (IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.

  10. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
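The two-step partition proposed above can be sketched by Monte Carlo on a toy model that is linear in its parameters, y = a·x1 + b·x2 (the distributions and numbers are invented for illustration): step 1 freezes the parameters at nominal values, step 2 adds their uncertainty.

```python
import numpy as np

# Sketch of the two-step partition described above (illustrative model, not
# the paper's): y = a*x1 + b*x2, with regressive variables x1, x2 and
# model parameters a, b.
rng = np.random.default_rng(0)
n = 200_000

x1 = rng.normal(0.0, 1.0, n)      # regressive variables
x2 = rng.normal(0.0, 2.0, n)

# Step 1: parameters frozen at nominal values -> regressive-only variance
a0, b0 = 3.0, 1.0
var_step1 = np.var(a0 * x1 + b0 * x2)     # analytic: 9*1 + 1*4 = 13

# Step 2: parameter uncertainty included
a = rng.normal(a0, 0.5, n)
b = rng.normal(b0, 0.5, n)
var_step2 = np.var(a * x1 + b * x2)       # analytic: 13 + 0.25*1 + 0.25*4 = 14.25

print(var_step1, var_step2)
```

Because this toy model is linear in the parameters, the step-2 increase (0.25·Var(x1) + 0.25·Var(x2)) is available analytically, which is exactly the situation the abstract flags as admitting closed-form solutions.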

  11. Development of the "Highly Sensitive Dog" questionnaire to evaluate the personality dimension "Sensory Processing Sensitivity" in dogs.

    Directory of Open Access Journals (Sweden)

    Maya Braem

    Full Text Available In humans, the personality dimension 'sensory processing sensitivity' (SPS), also referred to as "high sensitivity", involves deeper processing of sensory information, which can be associated with physiological and behavioral overarousal. However, whether this dimension also exists in other species has not yet been studied. SPS can influence how people perceive the environment and how it affects them, so a similar dimension in animals would be highly relevant with respect to animal welfare. We therefore explored whether SPS translates to dogs, one of the primary model species in personality research. A 32-item questionnaire to assess the "highly sensitive dog score" (HSD-s) was developed based on the "highly sensitive person" (HSP) questionnaire. A large-scale, international online survey was conducted, including the HSD questionnaire, as well as questions on fearfulness, neuroticism, "demographic" factors (e.g. dog sex, age, weight; age at adoption, etc.), "human" factors (e.g. owner age, sex, profession, communication style, etc.), and the HSP questionnaire. Data were analyzed using linear mixed effect models with forward stepwise selection to test prediction of HSD-s by the above-mentioned factors, with country of residence and dog breed treated as random effects. A total of 3647 questionnaires were fully completed. HSD, fearfulness, neuroticism and HSP scores showed good internal consistencies, and HSD-s only moderately correlated with fearfulness and neuroticism scores, paralleling previous findings in humans. Intra- (N = 447) and inter-rater (N = 120) reliabilities were good. Demographic and human factors, including HSP score, explained only a small amount of the variance of HSD-s. A PCA analysis identified three subtraits of SPS, comparable to human findings. Overall, the measured personality dimension in dogs showed good internal consistency, partial independence from fearfulness and neuroticism, and good intra- and inter-rater reliability.
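Internal consistencies like those reported above are conventionally measured with Cronbach's alpha, alpha = k/(k-1)·(1 - Σ item variances / variance of the total score). A minimal sketch on simulated item scores (not the survey's data):

```python
import numpy as np

# Sketch of Cronbach's alpha, the usual internal-consistency statistic behind
# reports like the one above (toy simulated data, not the survey's).
def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x item-scores matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=500)                          # shared latent trait
items = trait[:, None] + rng.normal(scale=0.8, size=(500, 6))
print(round(cronbach_alpha(items), 2))                # high: items share one trait
```

Alpha approaches 1 as the items become interchangeable measurements of the same latent trait, which is why it is read as "good internal consistency" in questionnaire studies.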

  12. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
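The PSA-versus-iPSA distinction can be sketched on a toy switch, the logistic equation x' = r(t)·x·(1-x) (an illustration in the spirit of the paper's examples, not its actual models): a persistent perturbation of r acts over the whole horizon, while impulse perturbations applied before versus after the switch reveal when r matters.

```python
# Sketch of persistent (PSA-like) vs impulse (iPSA-like) parameter
# perturbations on a toy logistic switch x' = r(t)*x*(1-x).
def avg_response(r_of_t, x0=0.01, t_end=10.0, dt=1e-3):
    """Time-averaged x over [0, t_end] under a time-dependent rate r(t),
    integrated with explicit Euler."""
    x, acc = x0, 0.0
    for i in range(int(t_end / dt)):
        x += dt * r_of_t(i * dt) * x * (1.0 - x)
        acc += x * dt
    return acc / t_end

r0, dr = 1.0, 0.1
base = avg_response(lambda t: r0)
psa_like = avg_response(lambda t: r0 + dr)                          # persistent
ipsa_early = avg_response(lambda t: r0 + (dr if t < 1.0 else 0.0))  # impulse at t=0
ipsa_late = avg_response(lambda t: r0 + (dr if t >= 9.0 else 0.0))  # impulse at t=9

effect_early = ipsa_early - base   # perturbing r *before* the switch matters
effect_late = ipsa_late - base     # perturbing it after saturation barely does
print(effect_early, effect_late, psa_like - base)
```

The early impulse advances the switch and visibly shifts the time-averaged response, while the late impulse, applied after x has saturated, has almost no effect; the persistent perturbation integrates both. That timing information is exactly what the integrated PSA coefficient discards.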

  13. High-throughput, Highly Sensitive Analyses of Bacterial Morphogenesis Using Ultra Performance Liquid Chromatography*

    Science.gov (United States)

    Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey

    2015-01-01

    The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination. PMID:26468288

  14. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Full Text Available Assembly precision optimization of complex product has a huge benefit in improving the quality of our products. Due to the impact of a variety of deviation source coupling phenomena, the goal of assembly precision optimization is difficult to be confirmed accurately. In order to achieve optimization of assembly precision accurately and rapidly, sensitivity analysis of deviation source is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation and deviation source dimension variation. Second, according to assembly constraint relations, assembly sequences and locating, deviation transmission paths are established by locating the joints between the adjacent parts, and establishing each part’s datum reference frame. Third, assembly multidimensional vector loops are created using deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, assembly deviation source sensitivity is calculated by using a first-order Taylor expansion and matrix transformation method. Finally, taking assembly precision optimization of wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
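The deviation source sensitivity defined above, the ratio of assembly-dimension variation to deviation-source-dimension variation, is the first-order Taylor coefficient of the assembly response. A minimal sketch on a hypothetical two-part gap stack-up (not the paper's wing flap rocker):

```python
# Sketch of the deviation-source sensitivity defined above: the ratio of
# assembly-dimension variation to source-dimension variation, estimated as
# the first-order Taylor coefficient (toy gap stack-up, not the paper's case).
def gap(housing, part_a, part_b):
    """Hypothetical assembly response: clearance gap of a two-part stack."""
    return housing - (part_a + part_b)

nominal = {"housing": 50.0, "part_a": 20.0, "part_b": 29.5}

def sensitivity(source, h=1e-6):
    """dGap/dSource by central difference (first-order Taylor coefficient)."""
    hi = dict(nominal); hi[source] += h
    lo = dict(nominal); lo[source] -= h
    return (gap(**hi) - gap(**lo)) / (2 * h)

sens = {s: sensitivity(s) for s in nominal}
print(sens)   # approximately housing: +1, part_a: -1, part_b: -1
```

For a real assembly the response comes from the vector-loop scalar equations rather than a closed-form function, but the sensitivity coefficients are extracted the same way and then ranked to find the dominant deviation sources.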

  15. High-sensitivity C-reactive protein does not improve the differential diagnosis of HNF1A-MODY and familial young-onset type 2 diabetes: A grey zone analysis.

    Science.gov (United States)

    Bellanné-Chantelot, C; Coste, J; Ciangura, C; Fonfrède, M; Saint-Martin, C; Bouché, C; Sonnet, E; Valéro, R; Lévy, D-J; Dubois-Laforgue, D; Timsit, J

    2016-02-01

    Low plasma levels of high-sensitivity C-reactive protein (hs-CRP) have been suggested to differentiate hepatocyte nuclear factor 1 alpha-maturity-onset diabetes of the young (HNF1A-MODY) from type 2 diabetes (T2D). Yet, differential diagnosis of HNF1A-MODY and familial young-onset type 2 diabetes (F-YT2D) remains a difficult challenge. Thus, this study assessed the added value of hs-CRP to distinguish between the two conditions. This prospective multicentre study included 143 HNF1A-MODY patients, 310 patients with a clinical history suggestive of HNF1A-MODY but not confirmed genetically (F-YT2D), and 215 patients with T2D. The ability of models including clinical characteristics and hs-CRP to predict HNF1A-MODY was analyzed using the area under the receiver operating characteristic (AUROC) curve, and a grey zone approach was used to evaluate these models in clinical practice. Median hs-CRP values were lower in HNF1A-MODY (0.25 mg/L) than in F-YT2D (1.14 mg/L) and T2D (1.70 mg/L) patients. Clinical parameters were sufficient to differentiate HNF1A-MODY from classical T2D (AUROC: 0.99). AUROC analyses to distinguish HNF1A-MODY from F-YT2D were 0.82 for clinical features and 0.87 after including hs-CRP. In the grey zone analysis of HNF1A-MODY versus F-YT2D, 65% of patients were classified in between these categories - in the zone of diagnostic uncertainty - even after adding hs-CRP to clinical parameters. hs-CRP does not improve the differential diagnosis of HNF1A-MODY and F-YT2D. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. Best estimate analysis of LOFT L2-5 with CATHARE: uncertainty and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    JOUCLA, Jerome; PROBST, Pierre [Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); FOUET, Fabrice [APTUS, Versailles (France)

    2008-07-01

    The revision of 10 CFR 50.46 in 1988 made possible the use of best-estimate codes. They may be used in safety demonstration and licensing, provided that uncertainties are added to the relevant output parameters before comparing them with the acceptance criteria. In the safety analysis of the large break loss of coolant accident, it was agreed that the 95th percentile estimated with a high degree of confidence should be lower than the acceptance criteria. It appeared necessary to IRSN, technical support of the French Safety Authority, to get more insight into these strategies, which are being developed not only in thermal-hydraulics but also in other fields such as neutronics. To estimate the 95th percentile with a high confidence level, we propose to use rank statistics or bootstrap. Toward the objective of assessing uncertainty, it is useful to determine and classify the main input parameters. We suggest approximating the code by a surrogate model, the Kriging model, which will be used to perform a sensitivity analysis with the Sobol' methodology. This paper presents the application of two new methodologies for the uncertainty and sensitivity analysis of the maximum peak cladding temperature of the LOFT L2-5 test with the CATHARE code. (authors)
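The rank-statistics route mentioned above is usually sized with Wilks' formula: using the maximum of n independent code runs as a one-sided bound, the smallest n covering the 95th percentile with 95% confidence satisfies 1 - 0.95^n >= 0.95, giving the classic n = 59. A minimal sketch:

```python
# Sketch of the order-statistics (Wilks) sizing behind the 95%/95% strategy
# described above: smallest n with 1 - percentile**n >= confidence, using the
# sample maximum of n runs as the one-sided upper bound.
def wilks_first_order(percentile=0.95, confidence=0.95):
    n = 1
    while 1.0 - percentile ** n < confidence:
        n += 1
    return n

print(wilks_first_order())  # 59
```

Higher-order variants (taking the second-largest run, etc.) and the bootstrap alternative mentioned in the abstract trade more runs for a tighter, less conservative bound.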

  17. Laser-engraved carbon nanotube paper for instilling high sensitivity, high stretchability, and high linearity in strain sensors

    KAUST Repository

    Xin, Yangyang

    2017-06-29

    There is an increasing demand for strain sensors with high sensitivity and high stretchability for new applications such as robotics or wearable electronics. However, for the available technologies, the sensitivity of the sensors varies widely. These sensors are also highly nonlinear, making reliable measurement challenging. Here we introduce a new family of sensors composed of a laser-engraved carbon nanotube paper embedded in an elastomer. A roll-to-roll pressing of these sensors activates a pre-defined fragmentation process, which results in a well-controlled, fragmented microstructure. Such sensors are reproducible and durable and can attain ultrahigh sensitivity and high stretchability (with a gauge factor of over 4.2 × 10^4 at 150% strain). Moreover, they can attain high linearity from 0% to 15% and from 22% to 150% strain. They are good candidates for stretchable electronic applications that require high sensitivity and linearity at large strains.
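The gauge factor quoted above is defined as the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A minimal sketch with hypothetical readings (not the paper's measurements):

```python
# Sketch of the gauge-factor definition behind the figure quoted above:
# GF = (dR/R0) / strain, the relative resistance change per unit strain.
def gauge_factor(r0_ohm, r_ohm, strain):
    return (r_ohm - r0_ohm) / r0_ohm / strain

# hypothetical readings, not the paper's data:
gf = gauge_factor(r0_ohm=100.0, r_ohm=400.0, strain=0.5)  # at 50% strain
print(gf)  # 6.0
```

High linearity, the other property the abstract emphasizes, simply means GF stays constant over a strain interval, so a single calibration factor suffices across that range.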

  18. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360

    Energy Technology Data Exchange (ETDEWEB)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
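The cross-section uncertainty propagation described above is, at first order, the "sandwich rule" var(R) = Sᵀ C S, with S the sensitivity profile and C the covariance matrix. A minimal sketch with invented 3-group numbers (not real nuclear data or SENSIT output):

```python
import numpy as np

# Sketch of the first-order uncertainty propagation performed by codes like
# SENSIT: var(R) = S^T C S, with S the sensitivity profile and C the
# cross-section covariance matrix (toy 3-group numbers, not real data).
S = np.array([0.2, -0.5, 0.1])               # relative sensitivity by group
C = np.array([[0.04, 0.01, 0.00],            # relative covariance matrix
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.01]])

var_R = S @ C @ S                            # relative variance of the response
std_R = np.sqrt(var_R)                       # estimated relative std deviation
print(var_R, std_R)
```

The off-diagonal covariance terms matter: correlated group uncertainties can either inflate or partially cancel the response variance depending on the signs of the sensitivities, which is why a full covariance matrix rather than per-group variances is used.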

  19. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    International Nuclear Information System (INIS)

    Buzzard, Gregery T.

    2012-01-01

    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
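The conversion described above makes Sobol' indices directly readable from the gPC coefficients: with an orthonormal basis, the variance is the sum of squared coefficients over the non-constant multi-indices, and each index groups the coefficients by which variables appear. A minimal sketch with invented coefficients (not from a real sparse-grid run):

```python
# Sketch of reading Sobol' indices off a gPC expansion, as described above:
# with an orthonormal basis, variance = sum of squared coefficients over
# non-constant terms (toy 2-variable coefficients, not from a real solver).
gpc = {            # multi-index (degree in x1, degree in x2) -> coefficient
    (0, 0): 3.0,   # mean term, excluded from the variance
    (1, 0): 2.0,
    (2, 0): 1.0,
    (0, 1): 0.5,
    (1, 1): 0.5,
}

total_var = sum(c ** 2 for idx, c in gpc.items() if any(idx))
S1 = sum(c ** 2 for (d1, d2), c in gpc.items() if d1 and not d2) / total_var
S2 = sum(c ** 2 for (d1, d2), c in gpc.items() if d2 and not d1) / total_var
S12 = sum(c ** 2 for (d1, d2), c in gpc.items() if d1 and d2) / total_var
print(S1, S2, S12)   # first-order and interaction indices; they sum to 1 here
```

The derivative-based coefficients mentioned in the abstract come from the same expansion by differentiating the polynomial basis, so no extra model evaluations are needed beyond the sparse-grid samples.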

  20. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  1. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  2. Global sensitivity analysis of multiscale properties of porous materials

    Science.gov (United States)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters, and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and the effective sorption rate. Our analysis is formulated in terms of a solute diffusing through a fluid-filled pore space while sorbing to the solid matrix, yet it is sufficiently general to be applied to other multiscale porous-media phenomena that are amenable to homogenization.

  3. Sensitivity analysis overlaps of friction elements in cartridge seals

    Directory of Open Access Journals (Sweden)

    Žmindák Milan

    2018-01-01

    Full Text Available Cartridge seals are self-contained units consisting of a shaft sleeve, seals, and a gland plate. Mechanical seals have numerous applications; the most common is in bearing production for the automobile industry. This paper deals with the sensitivity analysis of the overlaps of friction elements in a cartridge seal and their influence on the sealing friction torque and compressive force. Furthermore, it describes materials used to manufacture seals, approaches commonly used to model hyperelastic materials by FEM, and gives a short introduction to wheel bearings. The practical part presents one approach for measuring friction torque, whose results were used to specify the methodology and assess the precision of FEM calculations performed in ANSYS WORKBENCH. This part also contains the sensitivity analysis of the overlaps of the friction elements.

  4. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  5. Influence analysis to assess sensitivity of the dropout process

    OpenAIRE

    Molenberghs, Geert; Verbeke, Geert; Thijs, Herbert; Lesaffre, Emmanuel; Kenward, Michael

    2001-01-01

    Diggle and Kenward (Appl. Statist. 43 (1994) 49) proposed a selection model for continuous longitudinal data subject to possible non-random dropout. It has provoked a large debate about the role of such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions upon which models of this type invariably rest. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. One of their examples is a set of da...

  6. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    Science.gov (United States)

    2015-04-01

    ... determined. From the results of the study, UN is safe to store under normal operating conditions. SUBJECT TERMS: urea, nitrate, sensitivity, thermal. ... (HNO3). Due to its simple composition, ease of manufacture, and higher detonation parameters than ammonium nitrate, it has become one of the ... an H50 value of 10.054 ± 0.620 inches. Conclusions: from the results of the thermal analysis study, it can be concluded that urea nitrate is ...

  7. Applications of the TSUNAMI sensitivity and uncertainty analysis methodology

    International Nuclear Information System (INIS)

    Rearden, Bradley T.; Hopper, Calvin M.; Elam, Karla R.; Goluoglu, Sedat; Parks, Cecil V.

    2003-01-01

    The TSUNAMI sensitivity and uncertainty analysis tools under development for the SCALE code system have recently been applied in four criticality safety studies. TSUNAMI is used to identify applicable benchmark experiments for criticality code validation, assist in the design of new critical experiments for a particular need, reevaluate previously computed computational biases, and assess the validation coverage and propose a penalty for noncoverage for a specific application. (author)

  8. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  9. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received the greatest share of attention because they provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring one to two orders of magnitude less computation than the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
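    The Monte Carlo "pick-and-freeze" estimator that such metamodel methods are benchmarked against can be sketched in numpy; the Ishigami function used here is a standard analytical SA benchmark and an illustrative choice, not necessarily one of the paper's test cases:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard analytical SA benchmark with known Sobol' indices.
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
            + b * x[:, 2]**4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample blocks
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick and freeze" column i
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / var)

print(np.round(S, 3))  # analytic first-order indices: 0.314, 0.442, 0.0
```

    The cost is (d + 2) * n model evaluations, which is exactly the expense an analytical metamodel expression avoids.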

  10. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and handling incomplete data in effectiveness studies is now widely emphasized. However, most publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.

  11. B1-sensitivity analysis of quantitative magnetization transfer imaging.

    Science.gov (United States)

    Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce

    2018-01-01

    To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA, and -40% to > 100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double angle and nominal flip angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that it depends substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  12. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    Science.gov (United States)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.
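    A normalized one-at-a-time sensitivity coefficient of the kind used to produce such a parameter ranking can be sketched generically; the toy light-use-efficiency NPP function below is purely illustrative and is not the BIOME-BGC formulation:

```python
import math

def relative_sensitivity(model, params, name, delta=0.01):
    # S = (dY/dp) * (p / Y): central finite difference with a relative
    # perturbation `delta` of the single parameter `name`.
    p0 = params[name]
    up, dn = dict(params), dict(params)
    up[name] = p0 * (1 + delta)
    dn[name] = p0 * (1 - delta)
    dy_dp = (model(up) - model(dn)) / (2 * p0 * delta)
    return dy_dp * p0 / model(params)

# Purely illustrative stand-in for an NPP response (NOT BIOME-BGC):
def toy_npp(p):
    apar = p["par"] * (1 - math.exp(-p["k"] * p["lai"]))  # absorbed light
    return p["lue"] * apar

params = {"par": 100.0, "k": 0.5, "lai": 4.0, "lue": 0.8}
for name in ("k", "lai", "lue", "par"):
    print(name, round(relative_sensitivity(toy_npp, params, name), 3))
```

    Because the coefficient is dimensionless, parameters with very different units (here k, LAI, LUE) can be ranked on a common scale, which is what makes statements like "NPP is highly sensitive to k and SLA" comparable.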

  13. SENSITIVITY ANALYSIS OF BIOME-BGC MODEL FOR DRY TROPICAL FORESTS OF VINDHYAN HIGHLANDS, INDIA

    OpenAIRE

    M. Kumar; A. S. Raghubanshi

    2012-01-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to...

  14. Probability and sensitivity analysis of machine foundation and soil interaction

    Directory of Open Access Journals (Sweden)

    Králik J., jr.

    2009-06-01

    Full Text Available This paper deals with the sensitivity and probabilistic analysis of the reliability of a machine foundation depending on the variability of the soil stiffness, structure geometry, and compressor operation. The requirements for the design of foundations under rotating machines have increased with the development of calculation methods and computer tools. During the structural design process, an engineer has to consider problems of soil-foundation and foundation-machine interaction from the point of view of the safety, reliability, and durability of the structure. The advantages and disadvantages of deterministic and probabilistic analysis of the machine foundation resistance are discussed. The sensitivity of the machine foundation to uncertainties in the soil properties due to the long-term rotating movement of the machine is not negligible for design engineers. The effectiveness of the probabilistic design methodology is presented on the example of a compressor and turbine foundation from SIEMENS AG. The Latin Hypercube Sampling (LHS) simulation method was used for the analysis of the compressor foundation reliability in the program ANSYS. The 200 simulations for five load cases were calculated in real time on a PC. The probabilistic analysis gives more comprehensive information about the soil-foundation-machine interaction than the deterministic analysis.

  15. Highly sensitive and multiplexed platforms for allergy diagnostics

    Science.gov (United States)

    Monroe, Margo R.

    Allergy is a disorder of the immune system caused by an immune response to otherwise harmless environmental allergens. Currently 20% of the US population is allergic and 90% of pediatric patients and 60% of adult patients with asthma have allergies. These percentages have increased by 18.5% in the past decade, with similar trends predicted for the future. Here we design sensitive, multiplexed platforms to detect allergen-specific IgE using the Interferometric Reflectance Imaging Sensor (IRIS) for various clinical settings. A microarray platform for allergy diagnosis allows for testing of specific IgE sensitivity to a multitude of allergens, while requiring only small volumes of patient blood sample. However, conventional fluorescent microarray technology is limited by i) the variation of probe immobilization, which hinders the ability to make the quantitative, assertive, and statistically relevant conclusions necessary in immunodiagnostics, and ii) the use of fluorophore labels, which is not suitable for some clinical applications due to the tendency of fluorophores to stick to blood particulates and to require daily calibration. The calibrated fluorescence enhancement (CaFE) method integrates the low magnification modality of IRIS with enhanced fluorescence sensing in order to directly correlate immobilized probe (major allergen) density to allergen-specific IgE in patient serum. However, this platform only operates on processed serum samples, which is not ideal for point-of-care testing. Thus, the high magnification modality of IRIS was adapted as an alternative allergy diagnostic platform to automatically discriminate and size single nanoparticles bound to specific IgE in unprocessed, characterized human blood and serum samples. These features make IRIS an ideal candidate for clinical and diagnostic applications, such as point-of-care (POC) testing. The high magnification (nanoparticle counting) modality in conjunction with low magnification of IRIS in a combined instrument

  16. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
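    Of the three methods, the sensitivity index is the simplest to reproduce: sweep one input over its plausible range with the others held at nominal values and form SI = (ymax − ymin)/ymax. The corrosion-rate expression below is a made-up monotone stand-in, not one of the nine published models:

```python
def sensitivity_index(model, nominal, name, low, high, n=101):
    # Hoffman-Gardner style sensitivity index: sweep one input over
    # [low, high] with the others fixed; SI = (ymax - ymin) / ymax.
    ys = []
    for k in range(n):
        p = dict(nominal)
        p[name] = low + (high - low) * k / (n - 1)
        ys.append(model(p))
    return (max(ys) - min(ys)) / max(ys)

# Made-up monotone stand-in (NOT one of the nine published models):
# rate falls with concrete resistivity rho, rises with chloride content Cl.
def toy_corrosion_rate(p):
    return 1000.0 * p["Cl"] / p["rho"]

nominal = {"Cl": 0.5, "rho": 100.0}
print(round(sensitivity_index(toy_corrosion_rate, nominal, "rho", 50.0, 250.0), 3))  # 0.8
print(round(sensitivity_index(toy_corrosion_rate, nominal, "Cl", 0.1, 1.0), 3))      # 0.9
```

    An SI near 1 means the output is almost entirely controlled by that input over its range; comparing SIs across inputs gives the kind of ranking reported above for corrosion duration, resistivity, and chloride content.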

  17. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
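    The combined analysis in the adaptive test builds on the classical Mantel-Haenszel statistic over stratified 2×2 tables; a minimal version of that ingredient (without the adaptive subgroup step or the sensitivity-analysis correction implemented in sensitivity2x2xk) might look like:

```python
def mantel_haenszel_chi2(tables):
    # CMH chi-square (1 df, no continuity correction) for stratified 2x2
    # tables [[a, b], [c, d]]: rows = exposed/unexposed, cols = event/none.
    a_sum = e_sum = v_sum = 0.0
    for (a, b), (c, d) in tables:
        n1, n0 = a + b, c + d                  # row totals
        m1, m0 = a + c, b + d                  # column totals
        n = n1 + n0
        a_sum += a
        e_sum += n1 * m1 / n                   # E[a] under the null
        v_sum += n1 * n0 * m1 * m0 / (n * n * (n - 1))  # Var[a] under the null
    return (a_sum - e_sum) ** 2 / v_sum

# Two strata, each with an elevated event rate among the exposed:
tables = [[[20, 10], [10, 20]],
          [[15, 5], [5, 15]]]
print(round(mantel_haenszel_chi2(tables), 3))  # 15.68
```

    The adaptive procedure runs this combined statistic alongside a subgroup-only version and corrects for the two looks using their joint distribution, which is cheap because the two statistics are highly correlated.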

  18. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, to evaluate how sensitive the WAVEWATCH III model is to each of them, to determine which parameters should be considered for further discussion, and to rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan, the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  19. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over-modeling, in site characterization planning to avoid over-collection of data, and in performance assessments to determine the uncertainties in the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to reduce the uncertainty of the final results, is also discussed. 7 references, 2 figures
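    The principle of propagating derivatives through arbitrary code can be sketched in miniature with forward-mode automatic differentiation via dual numbers; this illustrates the idea only, not ADGEN's actual source-transformation machinery for FORTRAN:

```python
class Dual:
    # Minimal forward-mode AD value: carries f and df/dp together.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def model(k, t):
    # Any plain arithmetic code runs unchanged on Dual inputs.
    return 3.0 * k * k * t + 2.0 * k + t

k = Dual(2.0, 1.0)              # seed: differentiate with respect to k
out = model(k, Dual(5.0))
print(out.val, out.dot)         # 69.0 62.0  (d/dk = 6*k*t + 2 at k=2, t=5)
```

    One pass through the code yields the output and its exact derivative with respect to the seeded parameter, with no finite-difference step-size tuning.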

  20. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over modelling," in site characterization planning to avoid "over collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed

  1. Sensitivity analysis: Interaction of DOE SNF and packaging materials

    International Nuclear Information System (INIS)

    Anderson, P.A.; Kirkham, R.J.; Shaber, E.L.

    1999-01-01

    A sensitivity analysis was conducted to evaluate the technical issues pertaining to possible destructive interactions between spent nuclear fuels (SNFs) and the stainless steel canisters. When issues are identified through such an analysis, they provide the technical basis for answering "what if" questions and, if needed, for conducting additional analyses, testing, or other efforts to resolve them, in order to base the licensing on solid technical grounds. The analysis reported herein systematically assessed the chemical and physical properties and the potential interactions of the materials that comprise typical US Department of Energy (DOE) SNFs and the stainless steel canisters in which they will be stored, transported, and placed in a geologic repository for final disposition. The primary focus in each step of the analysis was to identify any possible phenomena that could potentially compromise the structural integrity of the canisters and to assess their thermodynamic feasibility

  2. Sensitivity analysis: Theory and practical application in safety cases

    International Nuclear Information System (INIS)

    Kuhlmann, Sebastian; Plischke, Elmar; Roehlig, Klaus-Juergen; Becker, Dirk-Alexander

    2014-01-01

    The projects described here aim at deriving an adaptive and stepwise approach to sensitivity analysis (SA). Since the appropriateness of a single SA method strongly depends on the nature of the model under study, a top-down approach (from simple to sophisticated methods) is suggested. If simple methods explain the model behaviour sufficiently well, then there is no need to apply more sophisticated ones and the SA procedure can be considered complete. The procedure is developed and tested using a model for an LLW/ILW repository in salt. Additionally, a new model for the disposal of HLW in rock salt will be available soon for SA studies within the MOSEL/NUMSA projects. This model will address special characteristics of waste disposal in undisturbed rock salt, especially the case of total confinement, resulting in a zero release, which is indeed the objective of radioactive waste disposal. A high proportion of zero-output realisations causes many SA methods to fail, so special treatment is needed and has to be developed. Furthermore, the HLW disposal model will be used as a first test case for applying the procedure described above, which was and is being derived using the LLW/ILW model. How to treat dependencies in the input, model conservatism and time-dependent outputs will be addressed in the future project programme: - If correlations or, more generally, dependencies between input parameters exist, the question arises as to the deeper meaning of sensitivity results in such cases: a strict separation between inputs, internal states and outputs is no longer possible. Such correlations (or dependencies) might have different reasons. In some cases correlated input parameters might have a common, physically well-known fundamental cause, but there are reasons why this fundamental cause cannot or should not be integrated into the model, e.g. it might lead to a very complex model which cannot be calculated in an acceptable time. In other cases the correlation may

  3. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis of a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable to sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, are performed. The testing methods used to determine the behavior of the parameters can serve as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis and other multivariate variance and collinearity analysis of the data
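    One common way to turn a fitted linear regression into a sensitivity ranking is via standardized regression coefficients (SRCs), which remove the effect of differing input scales; the sketch below uses synthetic data, not the MCNP6 reactor model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical sampled design inputs with very different scales:
X = rng.normal(size=(n, 3)) * np.array([0.5, 10.0, 2.0])
y = X @ np.array([4.0, 0.3, 1.0]) + rng.normal(scale=0.5, size=n)  # toy response

# Ordinary least squares, then standardize: SRC_i = b_i * std(x_i) / std(y).
A = np.column_stack([np.ones(n), X])
b = np.linalg.lstsq(A, y, rcond=None)[0][1:]
src = b * X.std(axis=0) / y.std()

# On raw coefficients x2 looks unimportant (b ~ 0.3); the SRCs rank it first.
print(np.round(src, 2))
```

    For a (near-)linear model the squared SRCs sum to roughly the regression R², so they double as a check on the linearity assumption that this kind of LR-based sensitivity analysis relies on.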

  4. Highly sensitive multianalyte immunochromatographic test strip for rapid chemiluminescent detection of ractopamine and salbutamol

    International Nuclear Information System (INIS)

    Gao, Hongfei; Han, Jing; Yang, Shijia; Wang, Zhenxing; Wang, Lin; Fu, Zhifeng

    2014-01-01

    Graphical abstract: A multianalyte immunochromatographic test strip was developed for the rapid detection of two β2-agonists. Due to the application of chemiluminescent detection, this quantitative method shows much higher sensitivity. - Highlights: • An immunochromatographic test strip was developed for the detection of multiple β2-agonists. • The whole assay process can be completed within 20 min. • The proposed method shows much higher sensitivity due to the application of CL detection. • It is a portable analytical tool suitable for field analysis and rapid screening. - Abstract: A novel immunochromatographic assay (ICA) was proposed for the rapid and multiplexed assay of β2-agonists, utilizing ractopamine (RAC) and salbutamol (SAL) as the models. Owing to the introduction of the chemiluminescent (CL) approach, the proposed protocol shows much higher sensitivity. In this work, the described ICA was based on a competitive format, and horseradish peroxidase-tagged antibodies were used as highly sensitive CL probes. Quantitative analysis of β2-agonists was achieved by recording the CL signals of the probes captured on the two test zones of the nitrocellulose membrane. Under the optimum conditions, RAC and SAL could be detected within the linear ranges of 0.50–40 and 0.10–50 ng mL−1, with detection limits of 0.20 and 0.040 ng mL−1 (S/N = 3), respectively. The whole process for multianalyte immunoassay of RAC and SAL can be completed within 20 min. Furthermore, the test strip was validated with spiked swine urine samples, and the results showed that this method was reliable for measuring β2-agonists in swine urine. This CL-based multianalyte test strip offers a series of advantages, including high sensitivity, ideal selectivity, simple manipulation, high assay efficiency and low cost. Thus, it opens a new pathway for rapid screening and field analysis, and shows a promising prospect in food safety

  5. Polypyrrole–gold nanoparticle composites for highly sensitive DNA detection

    International Nuclear Information System (INIS)

    Spain, Elaine; Keyes, Tia E.; Forster, Robert J.

    2013-01-01

    DNA capture surfaces represent a powerful approach to developing highly sensitive sensors for identifying the cause of infection. Electrochemically deposited polypyrrole, PPy, films have been functionalized with electrodeposited gold nanoparticles to give a nanocomposite material, PPy–AuNP. Thiolated capture-strand DNA, complementary to the sequence from the pathogen Staphylococcus aureus that causes mammary gland inflammation, was then immobilized onto the gold nanoparticles and onto any exposed area of the underlying gold electrode. A probe strand, labelled with horseradish peroxidase, HRP, was then hybridized to the target. The concentration of the target was determined by measuring the current generated by reducing benzoquinone produced by the HRP label. Semi-log plots of the pathogen DNA concentration vs. faradaic current are linear from 150 pM to 1 μM, and pM concentrations can be detected without the need for molecular amplification, e.g., PCR or NASBA. The nanocomposite also exhibits excellent selectivity: single-base mismatches in a 30-mer sequence can be detected.

  6. Improvement of sensitivity in high-resolution Rutherford backscattering spectroscopy

    International Nuclear Information System (INIS)

    Hashimoto, H.; Nakajima, K.; Suzuki, M.; Kimura, K.; Sasakawa, K.

    2011-01-01

    The sensitivity (limit of detection) of high-resolution Rutherford backscattering spectroscopy (HRBS) is mainly determined by the background noise of the spectrometer. There are two major origins of the background noise in HRBS: stray ions scattered from the inner wall of the vacuum chamber of the spectrometer, and the dark noise of the microchannel plate (MCP) detector commonly used as the focal-plane detector of the spectrometer in HRBS. In order to reject the stray ions, several barriers are installed inside the spectrometer and a thin Mylar foil is mounted in front of the detector. The dark noise of the MCP detector is rejected by coincidence measurement with the secondary electrons emitted from the Mylar foil upon ion passage. After these improvements, the background noise is reduced by a factor of up to 200. The detection limit can be improved down to 10 ppm for As in Si for a measurement time of 1 h under ideal conditions.

  7. Highly sensitive MoS2 photodetectors with graphene contacts

    Science.gov (United States)

    Han, Peize; St. Marie, Luke; Wang, Qing X.; Quirk, Nicholas; El Fatimy, Abdel; Ishigami, Masahiro; Barbara, Paola

    2018-05-01

    Two-dimensional materials such as graphene and transition metal dichalcogenides (TMDs) are ideal candidates to create ultra-thin electronics suitable for flexible substrates. Although optoelectronic devices based on TMDs have demonstrated remarkable performance, scalability is still a significant issue. Most devices are created using techniques that are not suitable for mass production, such as mechanical exfoliation of monolayer flakes and patterning by electron-beam lithography. Here we show that large-area MoS2 grown by chemical vapor deposition and patterned by photolithography yields highly sensitive photodetectors, with record shot-noise-limited detectivities of 8.7 × 10¹⁴ Jones in ambient conditions and even higher when sealed with a protective layer. These detectivity values are higher than the highest values reported for photodetectors based on exfoliated MoS2. We study MoS2 devices with gold electrodes and graphene electrodes. The devices with graphene electrodes have a tunable band alignment and are especially attractive for scalable ultra-thin flexible optoelectronics.

  8. Ultra-high sensitivity imaging of cancer using SERRS nanoparticles

    Science.gov (United States)

    Kircher, Moritz F.

    2016-05-01

    "Surface-enhanced Raman spectroscopy" (SERS) nanoparticles have gained much attention in recent years for in silico, in vitro and in vivo sensing applications. Our group has developed novel generations of biocompatible "surface-enhanced resonance Raman spectroscopy" (SERRS) nanoparticles as novel molecular imaging agents. Via rigorous optimization of the different variables contributing to the Raman enhancement, we were able to design SERRS nanoparticles with so far unprecedented sensitivity of detection under in vivo imaging conditions (femto-attomolar range). This has resulted in our ability to visualize, with a single nanoparticle, many different cancer types (after intravenous injection) in mouse models. The cancer types we have tested so far include brain, breast, esophagus, stomach, pancreas, colon, sarcoma, and prostate cancer. All mouse models used are state-of-the-art and closely mimic the tumor biology in their human counterparts. In these animals, we were able to visualize not only the bulk tumors, but importantly also microscopic extensions and locoregional satellite metastases, thus delineating for the first time the true extent of tumor spread. Moreover, the particles enable the detection of premalignant lesions. Given their inert composition they are expected to have a high chance for clinical translation, where we envision them to have an impact in various scenarios ranging from early detection, image-guidance in open or minimally invasive surgical procedures, to noninvasive imaging in conjunction with spatially offset (SESORS) Raman detection devices.

  9. Characterization of three high efficiency and blue sensitive silicon photomultipliers

    Energy Technology Data Exchange (ETDEWEB)

    Otte, Adam Nepomuk, E-mail: otte@gatech.edu; Garcia, Distefano; Nguyen, Thanh; Purushotham, Dhruv

    2017-02-21

    We report on the optical and electrical characterization of three high-efficiency, blue-sensitive silicon photomultipliers from FBK, Hamamatsu, and SensL. Key features of the tested devices when operated at 90% breakdown probability are peak photon detection efficiencies between 40% and 55%, temperature dependencies of gain and PDE of less than 1%/°C, dark rates of ∼50 kHz/mm² at room temperature, afterpulsing of about 2%, and direct optical crosstalk between 6% and 20%. The characteristics of all three devices impressively demonstrate how silicon-photomultiplier technology has improved over the past ten years. It is further demonstrated how the voltage and temperature characteristics of a number of quantities can be parameterized on the basis of physical models. The models provide a deeper understanding of the device characteristics over a wide bias and temperature range. They also serve as examples of how producers could provide the characteristics of their SiPMs to users. A standardized parameterization of SiPMs would enable users to find the optimal SiPM for their application, and its operating point, without having to perform measurements, thus significantly reducing design and development cycles.

  10. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, our review found little beyond very primitive SA tools based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
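The contrast the authors draw between OAT perturbations and variance-based measures can be sketched on a toy model. This is an illustrative example, not from the paper: the model, baseline point, and binning estimator are all assumptions made here for demonstration.

```python
import random
import statistics

def model(x1, x2):
    # toy additive model Y = X1 + 2*X2 with independent U(0,1) inputs;
    # analytically the first-order Sobol' indices are S1 = 0.2 and S2 = 0.8
    return x1 + 2.0 * x2

# OAT: perturb one factor at a time around a baseline point
base, delta = (0.5, 0.5), 0.1
oat = [model(base[0] + delta, base[1]) - model(*base),
       model(base[0], base[1] + delta) - model(*base)]

# Variance-based first-order indices, S_i = Var(E[Y|X_i]) / Var(Y),
# estimated by binning a plain Monte Carlo sample on X_i
rng = random.Random(1)
N, B = 200_000, 20
xs = [(rng.random(), rng.random()) for _ in range(N)]
ys = [model(*x) for x in xs]
var_y = statistics.pvariance(ys)

def first_order(dim):
    bins = [[] for _ in range(B)]
    for x, y in zip(xs, ys):
        bins[min(int(x[dim] * B), B - 1)].append(y)
    means = [statistics.fmean(b) for b in bins if b]
    return statistics.pvariance(means) / var_y

s1, s2 = first_order(0), first_order(1)
print(oat, s1, s2)  # Sobol' indices near 0.2 and 0.8
```

Unlike the OAT deltas, the variance-based indices account for the whole input distribution and sum (for an additive model) to one, which is what makes the importance ranking unambiguous.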

  11. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, our review found little beyond very primitive SA tools based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  12. Regional and parametric sensitivity analysis of Sobol' indices

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2015-01-01

    Nowadays, utilizing Monte Carlo estimators for variance-based sensitivity analysis has gained considerable popularity in many research fields. These estimators are usually based on n+2 sample matrices well designed for computing both the main and total effect indices, where n is the input dimension. The aim of this paper is to use such n+2 sample matrices to investigate how the main and total effect indices change when the uncertainty of the model inputs is reduced. For this purpose, the regional main and total effect functions are defined for measuring the changes in the main and total effect indices when the distribution range of one input is reduced, and the parametric main and total effect functions are introduced to quantify the residual main and total effect indices due to the reduced variance of one input. Monte Carlo estimators are derived for all the developed sensitivity concepts based on the n+2 sample matrices originally used for computing the main and total effect indices, thus no extra computational cost is introduced. The Ishigami function, a nonlinear model and a planar ten-bar structure are utilized for illustrating the developed sensitivity concepts, and for demonstrating the efficiency and accuracy of the derived Monte Carlo estimators. - Highlights: • The regional main and total effect functions are developed. • The parametric main and total effect functions are introduced. • The proposed sensitivity functions are all generalizations of Sobol' indices. • The Monte Carlo estimators are derived for the four sensitivity functions. • The computational cost of the estimators is the same as that of Sobol' indices
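The n+2 sample-matrix scheme underlying such Monte Carlo estimators can be sketched as follows. This is a minimal illustration assuming uniform inputs and a toy additive model; the estimator forms used are the commonly cited Saltelli (main effect) and Jansen (total effect) ones, not necessarily the exact variants in the paper.

```python
import random

def sobol_saltelli(f, n_inputs, N=50_000, seed=0):
    """Main (S) and total (ST) effect indices from the n+2 matrix scheme:
    matrices A and B plus, for each input i, A with column i taken from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(N)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(N)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    both = fA + fB
    mean = sum(both) / (2 * N)
    var = sum((y - mean) ** 2 for y in both) / (2 * N)
    S, ST = [], []
    for i in range(n_inputs):
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # main effect (Saltelli-type) and total effect (Jansen-type) estimators
        S.append(sum(fb * (fab - fa)
                     for fa, fb, fab in zip(fA, fB, fABi)) / (N * var))
        ST.append(sum((fa - fab) ** 2
                      for fa, fab in zip(fA, fABi)) / (2 * N * var))
    return S, ST

# additive test model Y = X1 + 2*X2: analytically S = ST = [0.2, 0.8]
S, ST = sobol_saltelli(lambda x: x[0] + 2.0 * x[1], 2)
print(S, ST)
```

All n main and total indices come from the same 2N base evaluations plus N evaluations per input, which is why the regional and parametric extensions described above can reuse the matrices at no extra cost.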

  13. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  14. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method
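The idea of extracting learning information from the correlation between injected noise and the performance functional, with no adjoint (back-propagation) pass, can be illustrated with a simple weight-perturbation sketch. This is an assumption-laden toy, not the paper's Langevin formulation: the linear "network", data, and step sizes are all invented here.

```python
import random

rng = random.Random(0)

def loss(w, data):
    # quadratic performance functional of a linear unit y = w0*x + w1
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

# fit y = 2x + 1 from ten noiseless points
data = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(10)]
w, sigma, lr = [0.0, 0.0], 0.01, 0.1
for _ in range(20_000):
    base = loss(w, data)
    noise = [rng.gauss(0.0, sigma) for _ in w]
    delta = loss([wi + ni for wi, ni in zip(w, noise)], data) - base
    # the noise/performance correlation E[delta * n_i / sigma^2]
    # equals the gradient component dL/dw_i, so no adjoint pass is needed
    w = [wi - lr * delta * ni / sigma ** 2 for wi, ni in zip(w, noise)]
print(w)  # should approach the target weights [2.0, 1.0]
```

The update uses only forward evaluations of the performance functional and the injected noise itself, mirroring the paper's point that ubiquitous noise can replace back-propagation-type adjoint equations.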

  15. An easily implemented static condensation method for structural sensitivity analysis

    Science.gov (United States)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
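The forward- and central-difference schemes compared above differ in truncation order, which a few lines make concrete (the response function here is a hypothetical stand-in for a structural response vs. joint stiffness):

```python
def response(k):
    # hypothetical structural response as a function of a joint stiffness k
    return 1.0 / (1.0 + k)

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # O(h) truncation error

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) truncation error

x, h = 1.0, 1e-3
exact = -1.0 / (1.0 + x) ** 2  # analytic sensitivity: -0.25 at x = 1
err_fwd = abs(forward_diff(response, x, h) - exact)
err_cen = abs(central_diff(response, x, h) - exact)
print(err_fwd, err_cen)  # central difference is markedly more accurate
```

The trade-off matches the study's comparison: central differences halve the step-size sensitivity at the cost of one extra response evaluation per parameter, while the direct method avoids truncation error entirely.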

  16. Nuclear data sensitivity/uncertainty analysis for XT-ADS

    International Nuclear Information System (INIS)

    Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert

    2011-01-01

    Highlights: → The sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. → To achieve this accuracy, the uncertainties should be reduced through experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. The sensitivity analysis showed that the sensitivity coefficients differed significantly with the geometry models and calculation codes used. The uncertainty analysis confirmed that the uncertainties deduced from the covariance data varied significantly with the choice of covariance library: for the XT-ADS criticality they were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. These uncertainties do not meet the target accuracy of 0.3%Δk for the criticality; to achieve it, the uncertainties should be reduced through experiments under adequate conditions.

  17. Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior

    Energy Technology Data Exchange (ETDEWEB)

    Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)

    2012-12-15

    Highlights: • A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. • The thermal conductivity of UO2 is modeled as a random variable. • The first source of uncertainty is the linear heat rate. • The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during irradiation in reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as being the most sensitive: thermal conductivity of UO2, radial distribution of power in the fuel pellet, local linear heat rate in the fuel rod, geometry of the pellet and thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte-Carlo technique has been performed in different experimental irradiations of UO2 fuel. At every time step of the simulated experiments, we get a temperature statistical distribution which results from the initial distributions of the uncertain parameters. We can then estimate confidence intervals of the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using first-order Sobol' indices.
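The Monte Carlo propagation step described above can be sketched with a simplified centerline-temperature model. The formula, input distributions, and numerical values below are illustrative assumptions for a generic cylindrical fuel rod, not the METEOR V2 models or URANIE machinery.

```python
import math
import random
import statistics

def centerline_temp(q_lin, k, t_surf=600.0):
    # simplified pellet centerline temperature for a cylindrical rod with
    # uniform heat generation and constant conductivity: T_c = T_s + q'/(4*pi*k)
    return t_surf + q_lin / (4.0 * math.pi * k)

rng = random.Random(42)
samples = []
for _ in range(20_000):
    q = rng.gauss(20_000.0, 1_000.0)  # linear heat rate [W/m], assumed ±5 %
    k = rng.gauss(3.0, 0.3)           # fuel conductivity [W/m/K], assumed ±10 %
    samples.append(centerline_temp(q, k))

samples.sort()
mean_t = statistics.fmean(samples)
ci95 = (samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))])
print(mean_t, ci95)  # temperature distribution induced by the input uncertainties
```

Sorting the propagated sample and reading off empirical quantiles is the same mechanism by which the study extracts confidence intervals on the calculated temperature at each time step.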

  18. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, the biosphere dose conversion factors (BDCFs), for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty

  19. Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2015-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized control manner. The sensitivity analysis can obtain the changes of the optimal energy planning, and thereby of the line loading profiles, over infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming over which the DT method is formed. Three case...

  20. Sensitivity Analysis of Transonic Flow over J-78 Wings

    Directory of Open Access Journals (Sweden)

    Alexander Kuzmin

    2015-01-01

    3D transonic flow over swept and unswept wings with a J-78 airfoil at spanwise sections is studied numerically at negative and vanishing angles of attack. Solutions of the unsteady Reynolds-averaged Navier-Stokes equations are obtained with a finite-volume solver on unstructured meshes. The numerical simulation shows that adverse Mach numbers, at which the lift coefficient is highly sensitive to small perturbations, are larger than those obtained earlier for 2D flow. At these larger Mach numbers, self-excited oscillations of shock waves arise on the wings. The swept wing exhibits a higher sensitivity to variations of the Mach number than the unswept one.

  1. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
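A minimal version of such a stability check on a decision tree can be written in a few lines. The two-action problem, payoffs, and probability perturbation below are hypothetical and chosen for illustration; they are not the paper's framework or software.

```python
# Hypothetical two-action decision problem: payoffs under two states of nature.
payoff = {"invest": {"good": 100.0, "bad": -60.0},
          "hold":   {"good": 20.0,  "bad": 10.0}}

def expected_value(action, p_good):
    return (p_good * payoff[action]["good"]
            + (1.0 - p_good) * payoff[action]["bad"])

def best_action(p_good):
    return max(payoff, key=lambda a: expected_value(a, p_good))

# Stability check: does the expected-value-maximizing strategy survive
# pessimistic and optimistic perturbations of the estimated probability?
p_hat, eps = 0.6, 0.1
robust = len({best_action(p) for p in (p_hat - eps, p_hat, p_hat + eps)}) == 1
print(best_action(p_hat), robust)
```

If the optimal action changes anywhere inside the perturbation interval, the strategy is not robust to the distributional uncertainty on the probabilities, which is exactly the question the proposed framework formalizes.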

  2. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
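Pearson and Spearman correlation coefficients, the sampling-based sensitivity measures named above, can be computed from scratch as follows. This is an illustrative sketch on synthetic data (the cubic response and noise level are assumptions); a real study like this one would use a tool such as Dakota or a statistics library.

```python
import random
import statistics

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for rank, idx in enumerate(order):
        r[idx] = float(rank)
    return r

def spearman(x, y):
    # rank correlation: Pearson applied to the ranks
    return pearson(ranks(x), ranks(y))

rng = random.Random(0)
x = [rng.random() for _ in range(5_000)]
y = [xi ** 3 + 0.001 * rng.random() for xi in x]  # monotone, nonlinear response
r_p, r_s = pearson(x, y), spearman(x, y)
print(r_p, r_s)  # Spearman captures the monotone dependence more fully
```

The gap between the two coefficients on a monotone but nonlinear response is why sensitivity studies typically report both: Pearson measures linear association, Spearman measures monotone association.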

  3. High-sensitivity Cardiac Troponin Elevation after Electroconvulsive Therapy (ECT)

    Science.gov (United States)

    Duma, Andreas; Pal, Swatilika; Johnston, Joshua; Helwani, Mohammad A.; Bhat, Adithya; Gill, Bali; Rosenkvist, Jessica; Cartmill, Christopher; Brown, Frank; Miller, J. Philip; Scott, Mitchell G; Sanchez-Conde, Francisco; Jarvis, Michael; Farber, Nuri B.; Zorumski, Charles F.; Conway, Charles; Nagele, Peter

    2017-01-01

    Background While electroconvulsive therapy (ECT) is widely regarded as a life-saving and safe procedure, evidence regarding its effects on myocardial cell injury is sparse. The objective of this investigation was to determine the incidence and magnitude of new cardiac troponin elevation after ECT using a novel high-sensitivity cardiac troponin I (hscTnI) assay. Methods This was a prospective cohort study in adult patients undergoing ECT in a single academic center (up to three ECT treatments per patient). The primary outcome was new hscTnI elevation after ECT, defined as an increase of hscTnI >100% after ECT compared to baseline with at least one value above the limit of quantification (10 ng/L). 12-lead ECG and hscTnI values were obtained prior to and 15–30 minutes after ECT; in a subset of patients an additional 2-hour hscTnI value was obtained. Results The final study population was 100 patients and a total of 245 ECT treatment sessions. Eight patients (8/100, 8%) experienced new hscTnI elevation after ECT with a cumulative incidence of 3.7% (9/245 treatments; one patient had two hscTnI elevations), two of whom had a non-ST-elevation myocardial infarction (incidence 2/245, 0.8%). Median hscTnI concentrations did not increase significantly after ECT. Tachycardia and/or elevated systolic blood pressure developed after approximately two thirds of ECT treatments. Conclusions ECT appears safe from a cardiac standpoint in a large majority of patients. A small subset of patients with pre-existing cardiovascular risk factors, however, may develop new cardiac troponin elevation after ECT, the clinical relevance of which is unclear in the absence of signs of myocardial ischemia. PMID:28166110

  4. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analysis employs a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  5. Sensitivity analysis and design optimization through automatic differentiation

    International Nuclear Information System (INIS)

    Hovland, Paul D; Norris, Boyana; Strout, Michelle Mills; Bhowmick, Sanjukta; Utke, Jean

    2005-01-01

    Automatic differentiation is a technique for transforming a program or subprogram that computes a function, including arbitrarily complex simulation codes, into one that computes the derivatives of that function. We describe the implementation and application of automatic differentiation tools. We highlight recent advances in the combinatorial algorithms and compiler technology that underlie successful implementation of automatic differentiation tools. We discuss applications of automatic differentiation in design optimization and sensitivity analysis. We also describe ongoing research in the design of language-independent source transformation infrastructures for automatic differentiation algorithms
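Forward-mode automatic differentiation, the simplest instance of the program transformation described, can be demonstrated with dual numbers. This is a minimal sketch: production AD tools (the kind discussed in the paper) work by source transformation or large-scale operator overloading, not a toy class like this.

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    each value carries its derivative through every arithmetic operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule, applied mechanically at every multiplication
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    # any composite of + and * is differentiated exactly:
    # unlike finite differences there is no truncation error
    return 3 * x * x + 2 * x + 1

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)  # f(2) = 17, f'(2) = 14
```

The derivative emerges as a by-product of evaluating the function itself, which is the property that makes AD attractive for the sensitivity analysis and design optimization applications discussed above.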

  6. Chemically Designed Metallic/Insulating Hybrid Nanostructures with Silver Nanocrystals for Highly Sensitive Wearable Pressure Sensors.

    Science.gov (United States)

    Kim, Haneun; Lee, Seung-Wook; Joh, Hyungmok; Seong, Mingi; Lee, Woo Seok; Kang, Min Su; Pyo, Jun Beom; Oh, Soong Ju

    2018-01-10

    With the increase in interest in wearable tactile pressure sensors for e-skin, research on nanostructures that achieve high sensitivity has been actively conducted. However, limitations such as complex fabrication processes using expensive equipment still exist. Herein, simple lithography-free techniques to develop pyramid-like metal/insulator hybrid nanostructures utilizing nanocrystals (NCs) are demonstrated. Ligand-exchanged and unexchanged silver NC thin films are used as the metallic and insulating components, respectively. The interfaces of each NC layer are chemically engineered to create discontinuous insulating layers, i.e., spacers for improved sensitivity, and eventually to realize fully solution-processed pressure sensors. Device performance analysis with structural, chemical, and electronic characterization and a conductive atomic force microscopy study reveals that the hybrid-nanostructure-based pressure sensor shows an enhanced sensitivity of higher than 500 kPa⁻¹, reliability, and low power consumption over a wide pressure-sensing range. Nano-/micro-hierarchical structures are also designed by combining hybrid nanostructures with conventional microstructures, exhibiting a further enhanced sensing range and achieving a record sensitivity of 2.72 × 10⁴ kPa⁻¹. Finally, all-solution-processed pressure sensor arrays with high pixel density, capable of detecting delicate signals with high spatial selectivity much better than the human tactile threshold, are introduced.

  7. Palladium Gate All Around - Hetero Dielectric -Tunnel FET based highly sensitive Hydrogen Gas Sensor

    Science.gov (United States)

    Madan, Jaya; Chaujar, Rishu

    2016-12-01

    The paper presents a novel, highly sensitive Hetero-Dielectric Gate-All-Around Tunneling FET (HD-GAA-TFET) based hydrogen gas sensor, incorporating the advantages of the band-to-band tunneling (BTBT) mechanism. Here, palladium-supported silicon dioxide is used as the sensing medium, and sensing relies on the interaction of hydrogen with Pd-SiO2-Si. The high surface-to-volume ratio of the cylindrical GAA structure enhances the opportunities for surface reactions between H2 gas and Pd, and thus improves the sensitivity and stability of the sensor. Behaviour of the sensor in the presence of hydrogen and at elevated temperatures is discussed. The conduction path of the sensor, which depends on the sensor's radius, has also been varied for optimized sensitivity and static performance analysis, where the proposed design exhibits superior performance in terms of threshold voltage, subthreshold swing, and band-to-band tunneling rate. Stability of the sensor with respect to temperature has also been studied, and it is found that the device is reasonably stable and highly sensitive over the operating temperature range. The successful utilization of the HD-GAA-TFET in gas sensors may open a new door for the development of novel nanostructured gas-sensing devices.

  8. Sensitivity analysis for the effects of multiple unmeasured confounders.

    Science.gov (United States)

    Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate

    2016-09-01

    Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
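    The mechanism described here, where many individually weak but mutually correlated confounders jointly produce substantial bias, can be illustrated with a small simulation (a minimal numpy sketch, not the study's actual simulation code; the sample size, number of confounders, correlation, and coefficients below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, rho = 20_000, 50, 0.3          # subjects, unmeasured confounders, mutual correlation

# Correlated unmeasured confounders U (equicorrelated via a shared latent factor).
shared = rng.standard_normal((n, 1))
U = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((n, k))

# Each confounder is only weakly related to exposure X and outcome Y (coefficient 0.05),
# and the true effect of X on Y is exactly zero.
beta = 0.05
X = U @ np.full(k, beta) + rng.standard_normal(n)
Y = U @ np.full(k, beta) + rng.standard_normal(n)

# Naive "effect" estimate: slope of Y on X without adjusting for U.
# Despite each confounder being weak, the correlated ensemble biases it far from zero.
naive = np.cov(X, Y)[0, 1] / np.var(X)
print(round(naive, 3))
```

    Setting `rho = 0` in the sketch shrinks the bias markedly, which mirrors the abstract's point that the mutual correlations, not just the individual confounder strengths, drive the magnitude of the bias.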

  9. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    Science.gov (United States)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory even when the dimensions and orientations of the rock samples are known, so even greater challenges can be expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and its sensitivity to source-receiver offset, vertical interval velocity error, and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to time-picking errors caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers; an isotropic formation can even be misinterpreted as a strongly anisotropic one. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.

  10. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied to models with parameters represented by the uniform probability distribution function only, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters have been investigated. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis.
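    The core of the FAST method, driving each input along a periodic search curve with its own integer frequency and reading off each input's variance share from the Fourier spectrum of the output, can be sketched as follows (a simplified classical-FAST implementation for uniform inputs only, not the modified nonuniform-distribution version described in the abstract; the interference-free frequency set and the toy model are illustrative):

```python
import numpy as np

def fast_first_order(model, freqs, n_samples=1025, harmonics=4):
    """Classical FAST: estimate first-order sensitivity indices.

    Each input is driven along a space-filling search curve with its own
    integer frequency; the share of output variance at that frequency
    (and its first few harmonics) measures the input's importance.
    """
    # Equispaced curve parameter s in (-pi, pi).
    s = np.pi * (2 * np.arange(n_samples) + 1 - n_samples) / n_samples
    # Search curve mapping s -> uniform(0, 1) samples for each input.
    x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(x)
    # Fourier power of the model output along the curve, frequencies 1..4*max.
    A = np.array([np.mean(y * np.cos(j * s)) for j in range(1, harmonics * max(freqs) + 1)])
    B = np.array([np.mean(y * np.sin(j * s)) for j in range(1, harmonics * max(freqs) + 1)])
    spectrum = A**2 + B**2
    total_var = spectrum.sum()
    # Index for input i: power at its frequency and harmonics over total power.
    return [spectrum[[h * w - 1 for h in range(1, harmonics + 1)]].sum() / total_var
            for w in freqs]

# Toy model: x0 dominates, x1 is weak, x2 is inert.
S = fast_first_order(lambda x: 5 * x[:, 0] + 1 * x[:, 1] + 0 * x[:, 2],
                     freqs=[11, 21, 29])
print([round(v, 2) for v in S])
```

    For the linear toy model above, x0 should capture roughly 25/26 of the output variance and x2 essentially none, so the ranking falls out directly from the spectrum. The frequency set must be chosen so that low-order harmonics do not coincide, which is the "integer frequency sets" issue the abstract investigates.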

  11. Applying cost-sensitive classification for financial fraud detection under high class-imbalance

    CSIR Research Space (South Africa)

    Moepya, SO

    2014-12-01

    Full Text Available: … sensitivity, specificity, recall, and precision using PCA and Factor Analysis. Weighted Support Vector Machines (SVMs) were shown to be superior to the cost-sensitive Naive Bayes (NB) and K-Nearest Neighbors classifiers.
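    The idea behind cost-sensitive classification under heavy class imbalance, weighting errors on the rare (fraud) class more heavily so the decision boundary is not dominated by the majority class, can be sketched with a weighted logistic regression (a hypothetical numpy illustration, not the paper's weighted SVM; the synthetic data and the cost ratio below are assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
# Imbalanced toy data: ~1% "fraud" (label 1), slightly shifted from the bulk.
n_neg, n_pos = 5000, 50
X = np.vstack([rng.normal(0.0, 1.0, (n_neg, 2)), rng.normal(1.5, 1.0, (n_pos, 2))])
y = np.r_[np.zeros(n_neg), np.ones(n_pos)]

def fit_logistic(X, y, weights, lr=0.1, steps=2000):
    """Weighted logistic regression by gradient descent; `weights` gives the
    per-sample misclassification cost (the cost-sensitive ingredient)."""
    Xb = np.c_[X, np.ones(len(X))]          # append intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * (Xb.T @ (weights * (p - y))) / weights.sum()
    return lambda Xn: 1.0 / (1.0 + np.exp(-np.c_[Xn, np.ones(len(Xn))] @ w)) > 0.5

plain = fit_logistic(X, y, np.ones(len(y)))
# Cost-sensitive: an error on the rare class costs n_neg/n_pos times more.
costly = fit_logistic(X, y, np.where(y == 1, n_neg / n_pos, 1.0))

recall = lambda clf: clf(X[y == 1]).mean()  # fraction of fraud cases caught
print(round(recall(plain), 2), round(recall(costly), 2))
```

    The unweighted model, optimizing overall accuracy, misses most of the minority class, while the cost-weighted model recovers far more of it. The same effect is what class weighting in an SVM achieves.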

  12. A survey of cross-section sensitivity analysis as applied to radiation shielding

    International Nuclear Information System (INIS)

    Goldstein, H.

    1977-01-01

    Cross-section sensitivity studies revolve around finding the change in the value of an integral quantity, e.g. transmitted dose, for a given change in one of the cross sections. A review is given of the principal methodologies for obtaining the sensitivity profiles, principally direct calculations with altered cross sections and linear perturbation theory. Some of the varied applications of cross-section sensitivity analysis are described, including the practice, of questionable value, of adjusting input cross-section data sets so as to provide agreement with integral experiments. Finally, a plea is made for using cross-section sensitivity analysis as a powerful tool for analysing the transport mechanisms of particles in radiation shields and for constructing models of how cross-section phenomena affect the transport. Cross-section sensitivities in the shielding area have proved to be highly problem-dependent. Without the understanding afforded by such models, it is impossible to extrapolate the conclusions of cross-section sensitivity analysis beyond the narrow limits of the specific situations examined in detail. Some of the elements that might be of use in developing the qualitative models are presented. (orig.)

  13. High-intensity xenon plasma discharge lamp for bulk-sensitive high-resolution photoemission spectroscopy.

    Science.gov (United States)

    Souma, S; Sato, T; Takahashi, T; Baltzer, P

    2007-12-01

    We have developed a highly brilliant xenon (Xe) discharge lamp operated by microwave-induced electron cyclotron resonance (ECR) for ultrahigh-resolution bulk-sensitive photoemission spectroscopy (PES). We observed at least eight strong radiation lines from neutral or singly ionized Xe atoms in the energy region of 8.4-10.7 eV. The photon flux of the strongest Xe I resonance line at 8.437 eV is comparable to that of the He Iα line (21.218 eV) from the He-ECR discharge lamp. Stable operation for more than 300 h is achieved by efficient air-cooling of a ceramic tube in the resonance cavity. The high bulk sensitivity and high energy resolution of PES using the Xe lines are demonstrated for some typical materials.

  14. Highly sensitive electrochemical determination of 1-naphthol based on high-index facet SnO2 modified electrode

    International Nuclear Information System (INIS)

    Huang Xiaofeng; Zhao Guohua; Liu Meichuan; Li Fengting; Qiao Junlian; Zhao Sichen

    2012-01-01

    Highlights: ► High-index faceted SnO2 is employed in electrochemical analysis for the first time. ► High-index faceted SnO2 shows excellent electrochemical activity toward 1-naphthol. ► Highly sensitive determination of 1-naphthol is realized on high-index faceted SnO2. ► The detection limit of 1-naphthol on high-index faceted SnO2 is as low as 5 nM. ► The electro-oxidation kinetics of 1-naphthol on the novel electrode are discussed. - Abstract: SnO2 nanooctahedra with {2 2 1} high-index facets (HIF) were synthesized by a simple hydrothermal method and employed for the first time for sensitive electrochemical sensing of a typical organic pollutant, 1-naphthol (1-NAP). The constructed HIF SnO2-modified glassy carbon electrode (HIF SnO2/GCE) possessed the advantages of a large effective electrode area, a high electron transfer rate, and low charge-transfer resistance. These improved electrochemical properties afforded high electrocatalytic performance, abundant effective active sites, and a high adsorption capacity for 1-NAP on the HIF SnO2/GCE. Cyclic voltammetry (CV) results showed that the electrochemical oxidation of 1-NAP follows a two-electron transfer process and that the electrode reaction on HIF SnO2/GCE is diffusion-controlled. Using differential pulse voltammetry (DPV), electrochemical detection of 1-NAP was conducted on HIF SnO2/GCE with a limit of detection as low as 5 nM, which is low compared with reported values. The electrode also showed good stability in comparison with reported results. Satisfactory results were obtained, with average recoveries in the range of 99.7-103.6% in real water sample detection. A promising device for the highly sensitive electrochemical detection of 1-NAP has therefore been provided.

  15. Development of high sensitivity and high speed large size blank inspection system LBIS

    Science.gov (United States)

    Ohara, Shinobu; Yoshida, Akinori; Hirai, Mitsuo; Kato, Takenori; Moriizumi, Koichi; Kusunose, Haruhiko

    2017-07-01

    The production of high-resolution flat panel displays (FPDs) for mobile phones today requires the use of high-quality large-size photomasks (LSPMs). Organic light emitting diode (OLED) displays use several transistors in each pixel for precise current control and, as such, the mask patterns for OLED displays are denser and finer than the patterns for previous-generation displays throughout the entire mask surface. It is therefore strongly demanded that mask patterns be produced with high fidelity and free of defects. To enable the production of a high-quality LSPM with a short lead time, manufacturers need a high-sensitivity, high-speed mask blank inspection system that meets the requirements of advanced LSPMs. Lasertec has developed a large-size blank inspection system called LBIS, which achieves high sensitivity based on a laser-scattering technique. LBIS employs a high-power laser as its inspection light source. LBIS's delivery optics, including a scanner and an F-Theta scan lens, focus the light from the source linearly on the surface of the blank. Its specially designed optics collect the light scattered by particles and by defects generated during the manufacturing process, such as scratches, on the surface and guide it to photomultiplier tubes (PMTs) with high efficiency. Multiple PMTs are used on LBIS for the stable detection of scattered light, which may be distributed at various angles due to the irregular shapes of defects. LBIS captures 0.3 μm PSL at a detection rate of over 99.5% with uniform sensitivity. Its inspection time is 20 minutes for a G8 blank and 35 minutes for G10. The differential interference contrast (DIC) microscope on the inspection head of LBIS captures high-contrast review images after inspection. The images are classified automatically.

  16. Design of a Piezoelectric Accelerometer with High Sensitivity and Low Transverse Effect

    Directory of Open Access Journals (Sweden)

    Bian Tian

    2016-09-01

    Full Text Available In order to meet the requirements of cable fault detection, a new structure of piezoelectric accelerometer was designed and analyzed in detail. The structure is composed of a seismic mass, two sensitive beams, and two added beams. Simulations of the maximum stress, natural frequency, and output voltage were carried out, and comparisons with traditional piezoelectric accelerometer structures were made. To identify the dominant vibration mode under acceleration and to verify the space between the mass and the glass, mode analysis and deflection analysis were carried out. Fabricated on an n-type single-crystal silicon wafer, the sensor chips were wire-bonded to printed circuit boards (PCBs) and simply packaged for experiments. Finally, a vibration test was conducted. The results show that the proposed piezoelectric accelerometer has high sensitivity, low resonance frequency, and low transverse effect.

  17. Compton imaging with a highly-segmented, position-sensitive HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Steinbach, T.; Hirsch, R.; Reiter, P.; Birkenbach, B.; Bruyneel, B.; Eberth, J.; Hess, H.; Lewandowski, L. [Universitaet zu Koeln, Institut fuer Kernphysik, Koeln (Germany); Gernhaeuser, R.; Maier, L.; Schlarb, M.; Weiler, B.; Winkel, M. [Technische Universitaet Muenchen, Physik Department, Garching (Germany)

    2017-02-15

    A Compton camera based on a highly-segmented high-purity germanium (HPGe) detector and a double-sided silicon-strip detector (DSSD) was developed, tested, and put into operation; the origin of γ radiation was determined successfully. The Compton camera is operated in two different modes. Coincidences from Compton-scattered γ-ray events between DSSD and HPGe detector allow for the best angular resolution, while the high-efficiency mode takes advantage of the position sensitivity of the highly-segmented HPGe detector. In this mode the setup is sensitive to the whole 4π solid angle. The interaction-point positions in the 36-fold segmented large-volume HPGe detector are determined by pulse-shape analysis (PSA) of all HPGe detector signals. Imaging algorithms were developed for each mode and successfully implemented. The angular resolution sensitively depends on parameters such as geometry, selected multiplicity and interaction-point distances. Best results were obtained taking into account the crosstalk properties, the time alignment of the signals and the distance metric for the PSA for both operation modes. An angular resolution between 13.8° and 19.1°, depending on the minimal interaction-point distance, was achieved in the high-efficiency mode at an energy of 1275 keV. In the coincidence mode, an improved angular resolution of 4.6° was determined for the same γ-ray energy. (orig.)
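    In both operation modes, the Compton kinematics relate the deposited energies to the scattering angle through cos θ = 1 − m_e c² (1/E′ − 1/E), which defines the cone on which the source must lie. A minimal sketch of that reconstruction step (the energy values below are illustrative, not measured values from this setup):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_deposit, e_remaining):
    """Opening angle (degrees) of the Compton cone, from the energy deposited
    in the first interaction and the remaining photon energy E'.

    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E), with E = e_deposit + e_remaining.
    """
    cos_theta = 1.0 - ME_C2 * (1.0 / e_remaining - 1.0 / (e_deposit + e_remaining))
    if not -1.0 <= cos_theta <= 1.0:
        # Energy pair inconsistent with single Compton scattering.
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# A 1275 keV photon (as in the text) depositing 300 keV in the first
# interaction and 975 keV in the second:
print(round(compton_cone_angle(300.0, 975.0), 2))
```

    Intersecting such cones from many events (or weighting them in a back-projection image) yields the source position; the angular resolutions quoted above reflect how precisely the interaction points, and hence the cone axes, are known.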

  18. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport model, i.e., the Boltzmann-Uehling-Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π⁻/π⁺ ratio, and the isospin-sensitive transverse and elliptic flows given by the two transport models with their "best settings" all show clear differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, while the discrepancies in the charged π⁻/π⁺ ratio and the isospin-sensitive flows mainly originate from different isospin-dependent nucleon-nucleon cross sections. These demonstrations call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon-nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations.

  19. DDASAC, Double-Precision Differential or Algebraic Sensitivity Analysis

    International Nuclear Information System (INIS)

    Caracotsios, M.; Stewart, W.E.; Petzold, L.

    1997-01-01

    1 - Description of program or function: DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request. 2 - Method of solution: Reconciliation of initial conditions is done with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithm of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989), with the initial acceleration phase detected and with row scaling of the Jacobian added. The backward-difference formulas for the predictor and corrector are expressed in divided-difference form, and the fixed-leading-coefficient form of the corrector (Jackson and Sacks-Davis 1980, Brenan et al. 1989) is used. Weights for error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations as given by Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigns the work array lengths and the output unit. The machine number range and precision are determined at run time by a
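    "Direct concurrent parametric sensitivity analysis" means integrating the sensitivity equations alongside the state equations with the same implicit discretization. A minimal sketch on a scalar linear ODE, using backward Euler rather than DASSL's BDF corrector, purely for illustration:

```python
import math

def implicit_euler_with_sensitivity(p, t_end=1.0, n=1000):
    """Integrate y' = -p*y, y(0) = 1, together with its parametric
    sensitivity s = dy/dp, which satisfies s' = -p*s - y, s(0) = 0.

    Both equations are advanced concurrently with the same implicit
    (backward Euler) step, mirroring the direct concurrent approach.
    """
    h = t_end / n
    y, s = 1.0, 0.0
    for _ in range(n):
        # For this linear problem the implicit step solves in closed form:
        #   y_new = y / (1 + h*p)
        #   s_new = (s - h*y_new) / (1 + h*p)   (differentiate the y-step w.r.t. p)
        y_new = y / (1.0 + h * p)
        s = (s - h * y_new) / (1.0 + h * p)
        y = y_new
    return y, s

y, s = implicit_euler_with_sensitivity(p=2.0)
# Exact solution: y(1) = exp(-2), s(1) = dy/dp = -exp(-2).
print(y, s)
```

    Because the sensitivity system shares the Jacobian of the state system, the factorization from the corrector step can be reused, which is the efficiency argument for the direct method in Caracotsios and Stewart (1985).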

  20. Practical and highly sensitive elemental analysis for aqueous samples containing metal impurities employing electrodeposition on indium-tin oxide film samples and laser-induced shock wave plasma in low-pressure helium gas.

    Science.gov (United States)

    Kurniawan, Koo Hendrik; Pardede, Marincan; Hedwig, Rinda; Abdulmadjid, Syahrun Nur; Lahna, Kurnia; Idris, Nasrullah; Jobiliong, Eric; Suyanto, Hery; Suliyanti, Maria Margaretha; Tjia, May On; Lie, Tjung Jie; Lie, Zener Sukra; Kurniawan, Davy Putra; Kagawa, Kiichiro

    2015-09-01

    We have conducted an experimental study exploring the possible application of laser-induced breakdown spectroscopy (LIBS) for practical and highly sensitive detection of metal impurities in water. The spectrochemical measurements were carried out by means of a 355 nm Nd:YAG laser in N2 and He ambient gases at low pressures of about 2 kPa. The aqueous samples were prepared as thin films deposited on indium-tin oxide (ITO) glass by an electrolysis process. The resulting emission spectra suggest that detection limits at parts-per-billion levels may be achieved for a variety of metal impurities, making the technique potentially suitable for rapid inspection of water quality in the semiconductor and pharmaceutical industries, as well as for cooling-water inspection for possible leakage of radioactivity in nuclear power plants. In view of its relative simplicity, this LIBS equipment offers a practical and less costly alternative to the standard use of inductively coupled plasma-mass spectrometry (ICP-MS) for water samples, and it has further potential for in situ and mobile applications.