WorldWideScience

Sample records for model sensitivity experiments

  1. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    This paper describes a set of sensitivity experiments to several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography derived directly from the US Navy data, and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling model performance, even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (the asymmetric part of the geopotential at 500 mb), where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs-removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between the amplitude of the stationary waves and the smoothness of the surface fields.
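The Gibbs problem and a spectral-filter remedy can be illustrated in one dimension. The Lanczos sigma factors and the idealized plateau below are illustrative choices, not the filtering technique actually used for the "Navy" and "Scripps" orographies:

```python
import numpy as np

# Toy 1-D illustration: a steep "mountain" represented by a truncated
# Fourier series overshoots (Gibbs oscillations); damping the retained
# coefficients with Lanczos sigma factors reduces the overshoot.

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
mountain = np.where(np.abs(x - np.pi) < 0.5, 1.0, 0.0)  # idealized plateau

coeffs = np.fft.rfft(mountain)
k = np.arange(coeffs.size)
K = 40                                    # spectral truncation wavenumber
trunc = np.where(k <= K, coeffs, 0.0)

sigma = np.sinc(k / K)                    # Lanczos factors: sinc(0)=1, sinc(1)=0
filtered = trunc * np.where(k <= K, sigma, 0.0)

raw = np.fft.irfft(trunc, N)
smooth = np.fft.irfft(filtered, N)

overshoot_raw = raw.max() - 1.0           # Gibbs overshoot above the plateau
overshoot_filt = smooth.max() - 1.0
print(overshoot_raw, overshoot_filt)
```

The unfiltered truncation overshoots by roughly the classic ~9 % of the jump; the sigma-filtered field is markedly smoother, at the cost of a slightly softened mountain edge.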

  2. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests become increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the source materials, and fingerprint properties were selected using different procedures (the Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or a correlation matrix). Summary results for the mixing model run with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and GOF values above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of
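The un-mixing step that such experiments test can be sketched as a constrained least-squares problem: find non-negative source proportions summing to one that best reproduce the mixture's tracer concentrations. The tracer values below are invented for illustration; they are not the study's geochemical data:

```python
import numpy as np
from scipy.optimize import minimize

# Three hypothetical sources, four hypothetical tracer properties.
sources = np.array([
    [120.0, 15.0, 3.2, 0.8],
    [ 60.0, 40.0, 1.1, 2.5],
    [200.0,  5.0, 7.5, 0.3],
])
true_p = np.array([0.5, 0.3, 0.2])      # proportions used to build the mixture
mixture = true_p @ sources              # perfectly conservative dry mixing

def misfit(p):
    pred = p @ sources
    return np.sum(((mixture - pred) / mixture) ** 2)  # relative squared errors

res = minimize(
    misfit,
    x0=np.full(3, 1 / 3),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
print(res.x)   # recovered proportions, close to true_p
```

With error-free, conservative tracers the optimizer recovers the known mixing proportions; the experimental question is how far the recovery degrades when tracer selection or particle size violates those assumptions.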

  3. Sensitivity experiments with a one-dimensional coupled plume - iceflow model

    Science.gov (United States)

    Beckmann, Johanna; Perette, Mahé; Alexander, David; Calov, Reinhard; Ganopolski, Andrey

    2016-04-01

    Over the last few decades the Greenland Ice Sheet's mass balance has become increasingly negative, caused by enhanced surface melting and speedup of the marine-terminating outlet glaciers at the ice sheet margins. Glacier speedup has been related, among other factors, to enhanced submarine melting, which in turn is caused by warming of the surrounding ocean and, less obviously, by increased subglacial discharge. While ice-ocean processes potentially play an important role in recent and future mass balance changes of the Greenland Ice Sheet, they remain poorly understood physically. In this work we performed numerical experiments with a one-dimensional plume model coupled to a one-dimensional iceflow model. First we investigated the sensitivity of the submarine melt rate to changes in ocean properties (temperature and salinity), to the amount of subglacial discharge, and to the geometry of the glacier tongue itself. A second set of experiments investigates the response of the coupled model, i.e. the dynamical response of the outlet glacier to altered submarine melt, which results in a new glacier geometry and updated melt rates.
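A minimal sketch of the kind of melt-rate sensitivity scan described, assuming a generic "plume speed times thermal forcing" closure with a linear freezing-point relation. The closure and every coefficient are illustrative stand-ins, not the coupled model's actual parameterization:

```python
# Illustrative liquidus coefficients (degC per psu, degC, degC per m) and an
# assumed bulk heat-transfer coefficient; all values are for demonstration.
LAMBDA1, LAMBDA2, LAMBDA3 = -5.73e-2, 8.32e-2, 7.61e-4
GAMMA = 5.0e-5

def melt_rate(T_ocean, S_ocean, depth_m, plume_speed):
    """Submarine melt rate (m/day, illustrative units)."""
    T_freeze = LAMBDA1 * S_ocean + LAMBDA2 + LAMBDA3 * (-depth_m)
    return 86400.0 * GAMMA * plume_speed * (T_ocean - T_freeze)

for T in (0.0, 2.0, 4.0):   # ocean-warming scenarios at fixed discharge
    print(T, round(melt_rate(T, 34.5, 300.0, 0.5), 2))
```

Even this caricature reproduces the qualitative sensitivities probed in the paper: melt increases with ocean temperature (through thermal forcing) and with plume speed (which subglacial discharge enhances).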

  4. Sensitivity analysis for CORSOR models simulating fission product release in LOFT-LP-FP-2 severe accident experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club; Pourgol-Mohammad, Mohammad [Sahand Univ. of Technology, Tabriz (Iran, Islamic Republic of). Dept. of Mechanical Engineering; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)

    2017-03-15

    This paper deals with the simulation, sensitivity and uncertainty analysis of the LP-FP-2 experiment at the LOFT test facility. The test facility simulates the major components and system response of a pressurized water reactor during a LOCA. The MELCOR code is used to predict the fission product release from the core fuel elements in the LOFT LP-FP-2 experiment. Moreover, sensitivity and uncertainty analysis is performed for the different CORSOR models that simulate the release of fission products in severe accident calculations for nuclear power plants. The calculated values for the fission product release under the different modeling options are compared to the experimental data available from the experiment. In conclusion, the performance of the 8 CORSOR modeling options available in the code structure is assessed.
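For orientation, the original CORSOR correlation expresses release through a temperature-dependent fractional release-rate coefficient of the form k(T) = A·exp(B·T), applied as dF/dt = k·(1 − F). The sketch below uses that form with invented coefficients; they are not MELCOR's defaults for any fission-product class:

```python
import math

# Illustrative CORSOR-style release: k in 1/min, T in degrees Celsius.
A, B = 2.0e-6, 6.0e-3   # assumed coefficients, for demonstration only

def release_fraction(T_celsius, minutes):
    k = A * math.exp(B * T_celsius)        # fractional release rate, 1/min
    return 1.0 - math.exp(-k * minutes)    # integrates dF/dt = k * (1 - F)

for T in (1200.0, 1600.0, 2000.0):         # isothermal holds of 30 minutes
    print(T, round(release_fraction(T, 30.0), 4))
```

The exponential temperature dependence is what makes the predicted release, and hence the comparison with the LP-FP-2 data, so sensitive to the choice of CORSOR modeling option.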

  5. 12th Rencontres du Vietnam : High Sensitivity Experiments Beyond the Standard Model

    CERN Document Server

    2016-01-01

    The goal of this workshop is to gather researchers, theoreticians, experimentalists and young scientists searching for physics beyond the Standard Model of particle physics using high sensitivity experiments. The Standard Model has been very successful in describing the particle physics world; the discovery of the Brout-Englert-Higgs boson is its most recent major confirmation. Complementary to the high-energy frontier explored at colliders, real opportunities for discovery exist at the precision frontier, testing fundamental symmetries and tracking small deviations from the Standard Model.

  6. Model experiments on the sensitization of polyethylene cross-linking of oligobutadienes

    International Nuclear Information System (INIS)

    Brede, O.; Beckert, D.; Hoesselbarth, B.; Specht, W.; Tannert, F.; Wunsch, K.

    1988-01-01

    In the presence of ≥ 1 % of 1,2-oligobutadiene, the efficiency of radiation-induced cross-linking of polyethylene was found to be increased in comparison to the pure matrix. Model experiments with solutions of the sensitizer in long-chain n-alkanes showed that, after addition of alkyl radicals onto the oligobutadiene (reaction with the vinyl groups), the sensitizer forms its own network, which is grafted by the alkyl groups. While this grafting reaction proceeds with a G value of about 5, vinyl consumption occurs at about three times that rate, indicating a short (intra- and intermolecular) vinyl reaction chain. Pulse radiolysis measurements in solutions of the 1,2-oligobutadiene in n-hexadecane and in molten PE blends resulted in the observation of radical transients of the cross-linking reaction. (author)

  7. Sensitivity of the polypropylene to the strain rate: experiments and modeling

    International Nuclear Information System (INIS)

    Abdul-Latif, A.; Aboura, Z.; Mosleh, L.

    2002-01-01

    The main goal of this work is, first, to evaluate experimentally the strain-rate-dependent deformation of polypropylene under tensile load, and second, to propose a model capable of appropriately describing the mechanical behavior of this material, especially its sensitivity to the strain rate. Several tensile tests are performed at different quasi-static strain rates in the range of 10⁻⁵ s⁻¹ to 10⁻¹ s⁻¹. In addition, some relaxation tests are conducted, introducing strain-rate jumps during testing. Within the framework of elastoviscoplasticity, a phenomenological model is developed to describe the non-linear mechanical behavior of the material under uniaxial loading paths. Under the small-strain assumption, the sensitivity of polypropylene to the strain rate, being of particular interest in this work, is accordingly taken into account. Since this model is based on internal state variables, we assume that the material's sensitivity to the strain rate is governed by the kinematic hardening variable, notably its modulus, and by the accumulated viscoplastic strain. As far as the elastic behavior is concerned, it is noticed that this behavior is only slightly influenced by the employed strain-rate range. For this reason, the elastic behavior is determined classically, i.e. without coupling to the strain-rate-dependent deformation. It is obvious that the inelastic behavior of the material is thoroughly dictated by the applied strain rate. Hence, the model parameters are calibrated using several experimental databases for different strain rates (10⁻⁵ s⁻¹ to 10⁻¹ s⁻¹). Among these experimental results, some experiments related to relaxation and to strain-rate jumps during testing (increasing or decreasing) are also used to refine the identification of the model parameters. To validate the calibrated model parameters, simulation tests are performed.

  8. Neutrino Oscillation Parameter Sensitivity in Future Long-Baseline Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bass, Matthew [Colorado State Univ., Fort Collins, CO (United States)

    2014-01-01

    The study of neutrino interactions and propagation has produced evidence for physics beyond the standard model and promises to continue to shed light on rare phenomena. Since the discovery of neutrino oscillations in the late 1990s there have been rapid advances in establishing the three-flavor paradigm of neutrino oscillations. The 2012 discovery of a large value for the last unmeasured mixing angle has opened the way for future experiments to search for charge-parity symmetry violation in the lepton sector. This thesis presents an analysis of the future sensitivity to neutrino oscillations in the three-flavor paradigm for the T2K, NOνA, LBNE, and T2HK experiments. The theory of the three-flavor paradigm is explained, and the methods to use these theoretical predictions to design long-baseline neutrino experiments are described. The sensitivity to the oscillation parameters for each experiment is presented, with a particular focus on the search for CP violation and the measurement of the neutrino mass hierarchy. The variations of these sensitivities with statistical considerations and experimental design optimizations taken into account are explored. The effects of systematic uncertainties in the neutrino flux, interaction, and detection predictions are also considered by incorporating more advanced simulation inputs from the LBNE experiment.
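The baseline-design reasoning in such sensitivity studies rests on the vacuum oscillation formula; in the two-flavor approximation, P(νμ→νe) ≈ sin²(2θ)·sin²(1.27·Δm²·L/E) with Δm² in eV², L in km and E in GeV. A sketch with rough, publicly quoted parameter magnitudes (the values are approximate, for illustration):

```python
import math

def appearance_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

dm2 = 2.5e-3          # |Delta m^2_31| in eV^2, approximate
L = 1300.0            # km, roughly the LBNE baseline
# First oscillation maximum occurs where 1.27 * dm2 * L / E = pi/2.
E_max = 1.27 * dm2 * L / (math.pi / 2)
print(round(E_max, 2), appearance_prob(0.085, dm2, L, E_max))
```

Placing the beam-energy peak near the first oscillation maximum (a few GeV for a ~1300 km baseline) is the basic optimization behind the experimental designs compared in the thesis.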

  9. Victimization Experiences and the Stabilization of Victim Sensitivity

    Directory of Open Access Journals (Sweden)

    Mario Gollwitzer

    2015-04-01

    People reliably differ in the extent to which they are sensitive to being victimized by others. Importantly, victim sensitivity predicts how people behave in social dilemma situations: victim-sensitive individuals are less likely to trust others and more likely to behave uncooperatively, especially in socially uncertain situations. This pattern can be explained by the Sensitivity to Mean Intentions (SeMI) model, according to which victim sensitivity entails a specific and asymmetric sensitivity to contextual cues that are associated with untrustworthiness. Recent research is largely in line with the model's predictions, but some issues have remained conceptually unresolved so far. For instance, it is unclear why and how victim sensitivity becomes a stable trait and which developmental and cognitive processes are involved in such stabilization. In the present article, we discuss the psychological processes that contribute to a stabilization of victim sensitivity within persons, both across the life span (ontogenetic stabilization) and across social situations (actual-genetic stabilization). Our theoretical framework starts from the assumption that experiences of being exploited threaten a basic need, the need to trust. This need is so fundamental that experiences which threaten it receive a considerable amount of attention and trigger strong affective reactions. Associative learning processes can then explain (a) how certain contextual cues (e.g., facial expressions) become conditioned stimuli that elicit equally strong responses, (b) why these contextual untrustworthiness cues receive much more attention than, for instance, trustworthiness cues, and (c) how these cues shape spontaneous social expectations (regarding other people's intentions). Finally, avoidance learning can explain why these cognitive processes gradually stabilize and become a trait: the trait referred to as victim sensitivity.
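The associative-learning account can be illustrated with a Rescorla-Wagner style update, in which repeated exploitation experiences attach predictive value to a co-occurring untrustworthiness cue. The learning rate and outcome coding below are illustrative assumptions, not parameters from the SeMI literature:

```python
def rescorla_wagner(trials, alpha=0.3, outcome=1.0):
    """Associative strength of an untrustworthiness cue across repeated
    victimization experiences; V approaches the outcome asymptote."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (outcome - v)   # prediction-error update
        history.append(v)
    return history

strengths = rescorla_wagner(10)
print([round(v, 3) for v in strengths[:3]], round(strengths[-1], 3))
```

The monotone approach to asymptote is one way to capture the article's stabilization claim: each confirming experience strengthens the cue-outcome association, so the expectation elicited by the cue becomes progressively more trait-like.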

  10. Stress Sensitivity, Aberrant Salience, and Threat Anticipation in Early Psychosis: An Experience Sampling Study

    Science.gov (United States)

    Reininghaus, Ulrich; Kempton, Matthew J.; Valmaggia, Lucia; Craig, Tom K. J.; Garety, Philippa; Onyejiaka, Adanna; Gayer-Anderson, Charlotte; So, Suzanne H.; Hubbard, Kathryn; Beards, Stephanie; Dazzan, Paola; Pariante, Carmine; Mondelli, Valeria; Fisher, Helen L.; Mills, John G.; Viechtbauer, Wolfgang; McGuire, Philip; van Os, Jim; Murray, Robin M.; Wykes, Til; Myin-Germeys, Inez; Morgan, Craig

    2016-01-01

    While contemporary models of psychosis have proposed a number of putative psychological mechanisms, how these impact on individuals to increase intensity of psychotic experiences in real life, outside the research laboratory, remains unclear. We aimed to investigate whether elevated stress sensitivity, experiences of aberrant novelty and salience, and enhanced anticipation of threat contribute to the development of psychotic experiences in daily life. We used the experience sampling method (ESM) to assess stress, negative affect, aberrant salience, threat anticipation, and psychotic experiences in 51 individuals with first-episode psychosis (FEP), 46 individuals with an at-risk mental state (ARMS) for psychosis, and 53 controls with no personal or family history of psychosis. Linear mixed models were used to account for the multilevel structure of ESM data. In all 3 groups, elevated stress sensitivity, aberrant salience, and enhanced threat anticipation were associated with an increased intensity of psychotic experiences. However, elevated sensitivity to minor stressful events (χ2 = 6.3, P = 0.044), activities (χ2 = 6.7, P = 0.036), and areas (χ2 = 9.4, P = 0.009) and enhanced threat anticipation (χ2 = 9.3, P = 0.009) were associated with more intense psychotic experiences in FEP individuals than controls. Sensitivity to outsider status (χ2 = 5.7, P = 0.058) and aberrantly salient experiences (χ2 = 12.3, P = 0.002) were more strongly associated with psychotic experiences in ARMS individuals than controls. Our findings suggest that stress sensitivity, aberrant salience, and threat anticipation are important psychological processes in the development of psychotic experiences in daily life in the early stages of the disorder. PMID:26834027

  11. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct perturbation calculations and to correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment: small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only known as well as its geometric and material properties are known. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
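The direct-perturbation procedure can be sketched generically: perturb one input by a small relative amount, recompute the response, and form a dimensionless sensitivity coefficient S = (dR/R)/(dp/p). The response function below is a toy stand-in, not an actual criticality calculation:

```python
def response(radius, density):
    # Toy stand-in for a k-eff-like response: monotone in both inputs.
    return 1.0 - 2.5 / (radius * density)

def direct_sensitivity(f, params, name, rel_step=1e-3):
    """Dimensionless sensitivity of f to one parameter via direct perturbation."""
    base = f(**params)
    bumped = dict(params, **{name: params[name] * (1 + rel_step)})
    return ((f(**bumped) - base) / base) / rel_step

params = {"radius": 8.7, "density": 18.7}   # made-up illustrative values
for p in params:
    print(p, round(direct_sensitivity(response, params, p), 4))
```

Repeating this for every uncertain geometric and material parameter, then combining the coefficients with the parameter uncertainties, is the "full sensitivity analysis" whose results are correlated against the adjoint-based TSUNAMI-3D values.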

  13. Lessening Sensitivity: Student Experiences of Teaching and Learning Sensitive Issues

    Science.gov (United States)

    Lowe, Pam

    2015-01-01

    Despite growing interest in learning and teaching as emotional activities, there is still very little research on experiences of teaching and learning sensitive issues. Using qualitative data from students from a range of social science disciplines, this study investigates students' experiences. The paper highlights how, although they found it difficult and distressing…

  14. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for a twice-a-day forecast at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions and try to understand the important features that may contribute to the success of the forecast.

  15. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport models, i.e., the Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π⁻/π⁺ ratio, as well as the isospin-sensitive transverse and elliptic flows given by the two transport models with their "best settings" all show obvious differences. The discrepancy in the numerical value of the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, while the discrepancies in the numerical values of the charged π⁻/π⁺ ratio and the isospin-sensitive flows mainly originate from different isospin-dependent nucleon–nucleon cross sections. These demonstrations call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations.

  16. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs
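The "classic" designs surveyed start from the two-level full factorial, which enumerates every combination of k factors at low/high levels. A minimal sketch with invented factor names for a queueing-style simulation:

```python
from itertools import product

# Two-level full factorial design: 2**k runs for k factors coded -1/+1.
factors = ["arrival_rate", "service_rate", "queue_cap"]   # made-up names
design = list(product((-1, +1), repeat=len(factors)))

for run, levels in enumerate(design, start=1):
    print(run, dict(zip(factors, levels)))
print(len(design), "runs")
```

Such designs are balanced (each factor is at each level equally often), which is what lets main effects be estimated cleanly; fractional factorials and the "modern" designs for simulation reduce the run count when 2^k is too large.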

  17. An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2004-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys classic and modern designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs assume a

  18. Sensitivity studies and a simple ozone perturbation experiment with a truncated two-dimensional model of the stratosphere

    Science.gov (United States)

    Stordal, Frode; Garcia, Rolando R.

    1987-01-01

    The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.

  19. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
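The piecewise re-initialization scheme can be demonstrated on a toy nonlinear model with a deliberately biased forcing; the dynamics and all numbers below are invented for illustration:

```python
import math

# A biased model (constant forcing drift) simulates a reference run (forcing F)
# and a perturbed run (forcing F + DF) of dx/dt = f - 0.5*x^2. Every `sub`
# steps, the reference state is reset to the "analysis" and the perturbed state
# to analysis + (perturbed - reference), as in the piecewise approach.

DT, F, DF, DRIFT = 0.1, 1.0, 0.05, 0.2

def step(x, f, drift):
    return x + DT * (f + drift - 0.5 * x * x)

def modeled_sensitivity(n_steps, sub, drift):
    analysis = [1.0]
    for _ in range(n_steps):               # drift-free "analysis" trajectory
        analysis.append(step(analysis[-1], F, 0.0))
    ref = pert = analysis[0]
    for t in range(1, n_steps + 1):
        ref = step(ref, F, drift)
        pert = step(pert, F + DF, drift)
        if sub and t % sub == 0:           # piecewise re-initialization
            ref, pert = analysis[t], analysis[t] + (pert - ref)
    return pert - ref

true_sens = math.sqrt(2 * (F + DF)) - math.sqrt(2 * F)  # exact equilibrium shift
continuous = modeled_sensitivity(400, 0, DRIFT)
piecewise = modeled_sensitivity(400, 5, DRIFT)
print(round(true_sens, 4), round(continuous, 4), round(piecewise, 4))
```

Because the model drift pushes both runs to a biased state, the continuous simulation evaluates the sensitivity around the wrong state; the piecewise resets keep both runs near the analysis, so the recovered sensitivity is closer to the true value, and more so for shorter subintervals.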

  20. ATLAS MDT neutron sensitivity measurement and modeling

    International Nuclear Information System (INIS)

    Ahlen, S.; Hu, G.; Osborne, D.; Schulz, A.; Shank, J.; Xu, Q.; Zhou, B.

    2003-01-01

    The sensitivity of the ATLAS precision muon detector element, the Monitored Drift Tube (MDT), to fast neutrons has been measured using a 5.5 MeV Van de Graaff accelerator. The major mechanism of neutron-induced signals in the drift tubes is the elastic collisions between the neutrons and the gas nuclei. The recoil nuclei lose kinetic energy in the gas and produce the signals. By measuring the ATLAS drift tube neutron-induced signal rate and the total neutron flux, the MDT neutron signal sensitivities were determined for different drift gas mixtures and for different neutron beam energies. We also developed a sophisticated simulation model to calculate the neutron-induced signal rate and signal spectrum for ATLAS MDT operation configurations. The calculations agree with the measurements very well. This model can be used to calculate the neutron sensitivities for different gaseous detectors and for neutron energies above those available to this experiment
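The elastic-collision mechanism described sets the maximum energy a neutron can transfer to a gas nucleus: for mass number A, a head-on collision gives E_max = 4A/(1+A)²·E_n. A quick check at the 5.5 MeV beam energy; the species listed are common drift-gas constituents, chosen here for illustration:

```python
def max_recoil_energy(E_n_MeV, A):
    """Classical head-on elastic recoil energy for a nucleus of mass number A."""
    return 4.0 * A / (1.0 + A) ** 2 * E_n_MeV

for name, A in [("H", 1), ("C", 12), ("O", 16), ("Ar", 40)]:
    print(name, round(max_recoil_energy(5.5, A), 3), "MeV")
```

The strong A-dependence (full energy transfer to hydrogen, only ~10 % to argon) is why the neutron sensitivity differs between drift-gas mixtures.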

  1. Context Sensitive Modeling of Cancer Drug Sensitivity.

    Directory of Open Access Journals (Sweden)

    Bo-Juen Chen

    Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource for developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to the prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer type is hampered by small sample sizes. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should, and should not, be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.

  2. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
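The variance-based first-order indices at the heart of such a global sensitivity analysis can be sketched with a pick-freeze Monte Carlo estimator, S_i = Cov(Y, Y_i')/Var(Y), where Y_i' reuses input X_i but resamples the others. The toy model below stands in for the actual morphogenesis simulation:

```python
import random

random.seed(1)

def model(x1, x2):
    # Additive toy model with known indices: S1 = 16/17, S2 = 1/17.
    return 2.0 * x1 + 0.5 * x2

N = 200_000
y, y1, y2 = [], [], []
for _ in range(N):
    a1, a2 = random.random(), random.random()   # base sample
    b1, b2 = random.random(), random.random()   # resample
    y.append(model(a1, a2))
    y1.append(model(a1, b2))   # freeze x1, resample x2
    y2.append(model(b1, a2))   # freeze x2, resample x1

mean = sum(y) / N
var = sum(v * v for v in y) / N - mean * mean
S1 = (sum(p * q for p, q in zip(y, y1)) / N - mean * mean) / var
S2 = (sum(p * q for p, q in zip(y, y2)) / N - mean * mean) / var
print(round(S1, 3), round(S2, 3))
```

For a non-additive model the first-order indices no longer sum to one, and the shortfall quantifies exactly the parameter interactions that the workflow reports.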

  3. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
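The variance-based part of such a global sensitivity workflow can be sketched generically in a few lines. This is an illustrative pick-and-freeze (Saltelli-type) estimator of first-order Sobol indices on a toy analytic model with uniform inputs, not the cellular Potts model or the specific estimator used in the paper.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=4096, seed=0):
    """Estimate first-order Sobol indices S_i = V(E[Y|X_i]) / V(Y)
    with a pick-and-freeze (Saltelli-type) estimator on U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_params))  # two independent sample blocks
    B = rng.uniform(size=(n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only column i
        # first-order effect of X_i (Saltelli 2010 form)
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# toy model: Y = 4*X1 + X2, with X3 inert; analytically S = (16/17, 1/17, 0)
S = sobol_first_order(lambda X: 4 * X[:, 0] + X[:, 1], 3)
```

On the toy model the estimator recovers the dominant parameter ranking (S[0] >> S[1], S[2] ≈ 0), which is exactly the kind of ranking the paper compares against experimental knowledge.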

  4. Projected sensitivity of the SuperCDMS SNOLAB experiment

    Energy Technology Data Exchange (ETDEWEB)

    Agnese, R.; Anderson, A. J.; Aramaki, T.; Arnquist, I.; Baker, W.; Barker, D.; Basu Thakur, R.; Bauer, D. A.; Borgland, A.; Bowles, M. A.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Calkins, R.; Cartaro, C.; Cerdeño, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Gerbier, G.; Ghaith, M.; Godfrey, G. L.; Golwala, S. R.; Hall, J.; Harris, H. R.; Hofer, T.; Holmgren, D.; Hong, Z.; Hoppe, E.; Hsu, L.; Huber, M. E.; Iyer, V.; Jardin, D.; Jastram, A.; Kelsey, M. H.; Kennedy, A.; Kubik, A.; Kurinsky, N. A.; Leder, A.; Loer, B.; Lopez Asamar, E.; Lukens, P.; Mahapatra, R.; Mandic, V.; Mast, N.; Mirabolfathi, N.; Moffatt, R. A.; Morales Mendoza, J. D.; Orrell, J. L.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Poudel, S.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Roberts, A.; Robinson, A. E.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Serfass, B.; Speller, D.; Stein, M.; Street, J.; Tanaka, H. A.; Toback, D.; Underwood, R.; Villano, A. N.; von Krosigk, B.; Welliver, B.; Wilson, J. S.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, X.; Zhao, X.

    2017-04-07

SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass (< 10 GeV/c^2) particles that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections ~1×10^-43 cm^2 for a dark matter particle mass of 1 GeV/c^2, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low energy recoils will be needed to optimize running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced H-3 and naturally occurring Si-32 will be present in the detectors at some level. Even if these backgrounds are 10 times higher than expected, the science reach of the HV detectors would be over three orders of magnitude beyond current results for a dark matter mass of 1 GeV/c^2. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for dark matter particle masses (> 5 GeV/c^2). The mix of detector types (HV and iZIP), and targets (germanium and silicon), planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach, and understand the backgrounds that the experiment will encounter. Upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the "neutrino floor", where coherent scatters of solar neutrinos become a limiting background.

  5. Projected sensitivity of the SuperCDMS SNOLAB experiment

    Energy Technology Data Exchange (ETDEWEB)

    Agnese, R.; Anderson, A. J.; Aramaki, T.; Arnquist, I.; Baker, W.; Barker, D.; Basu Thakur, R.; Bauer, D. A.; Borgland, A.; Bowles, M. A.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Calkins, R.; Cartaro, C.; Cerdeño, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Gerbier, G.; Ghaith, M.; Godfrey, G. L.; Golwala, S. R.; Hall, J.; Harris, H. R.; Hofer, T.; Holmgren, D.; Hong, Z.; Hoppe, E.; Hsu, L.; Huber, M. E.; Iyer, V.; Jardin, D.; Jastram, A.; Kelsey, M. H.; Kennedy, A.; Kubik, A.; Kurinsky, N. A.; Leder, A.; Loer, B.; Lopez Asamar, E.; Lukens, P.; Mahapatra, R.; Mandic, V.; Mast, N.; Mirabolfathi, N.; Moffatt, R. A.; Morales Mendoza, J. D.; Orrell, J. L.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Poudel, S.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Roberts, A.; Robinson, A. E.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Serfass, B.; Speller, D.; Stein, M.; Street, J.; Tanaka, H. A.; Toback, D.; Underwood, R.; Villano, A. N.; von Krosigk, B.; Welliver, B.; Wilson, J. S.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, X.; Zhao, X.

    2017-04-01

    SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass particles (with masses ≤ 10 GeV/c^2) that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections ~1×10^-43 cm^2 for a dark matter particle mass of 1 GeV/c^2, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low-energy recoils will be needed to optimize running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced H-3 and naturally occurring Si-32 will be present in the detectors at some level. Even if these backgrounds are 10 times higher than expected, the science reach of the HV detectors would be over 3 orders of magnitude beyond current results for a dark matter mass of 1 GeV/c^2. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for dark matter particles with masses ≳5 GeV/c^2. The mix of detector types (HV and iZIP), and targets (germanium and silicon), planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach, and understand the backgrounds that the experiment will encounter. Upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the “neutrino floor,” where coherent scatters of solar neutrinos become a limiting background.

  6. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian scheme updates an ensemble of sensitivity measures as new evidence arrives. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of the LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and their updating when a new piece of evidence arrives, by a Bayesian scheme based on the Bradley–Terry model, for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).

  7. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    Science.gov (United States)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
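A relative sensitivity function of the kind used here can be approximated generically by central finite differences: S_p = (∂Y/∂p)·(p/Y) is dimensionless, so parameters of very different magnitudes (e.g. settling velocity versus a diffusivity coefficient) become directly comparable. The toy "flux" model and its parameter values below are purely illustrative stand-ins, not the adjoint sediment transport model of the paper.

```python
import numpy as np

def relative_sensitivity(model, params, delta=1e-3):
    """Relative sensitivity S_p = (dY/dp) * (p / Y), estimated by
    central finite differences around the nominal parameter vector."""
    p = np.asarray(params, dtype=float)
    y0 = model(p)
    S = np.empty_like(p)
    for i in range(p.size):
        h = delta * p[i]                    # step scaled to the parameter
        up, dn = p.copy(), p.copy()
        up[i] += h
        dn[i] -= h
        S[i] = (model(up) - model(dn)) / (2 * h) * (p[i] / y0)
    return S

# hypothetical net-flux toy: flux = w_s * C - E (settling minus erosion);
# parameter values [w_s, C, E] are illustrative only
params = np.array([1e-3, 50.0, 0.02])
flux = lambda q: q[0] * q[1] - q[2]
S = relative_sensitivity(flux, params)
```

For this linear toy the analytic values are S = (5/3, 5/3, -2/3); a negative entry simply means the output decreases as the parameter grows, while the magnitude gives the ranking used in the sensitivity analysis.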

  8. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need of adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of blocked design experiment.

  9. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need of adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of blocked design experiment.

  10. Quick, sensitive serial NMR experiments with Radon transform.

    Science.gov (United States)

    Dass, Rupashree; Kasprzak, Paweł; Kazimierczuk, Krzysztof

    2017-09-01

The Radon transform is a potentially powerful tool for processing the data from serial spectroscopic experiments. It makes it possible to decode the rate at which frequencies of spectral peaks shift under the effect of changing conditions, such as temperature, pH, or solvent. In this paper we show how it also improves speed and sensitivity, especially in multidimensional experiments. This is particularly important in the case of low-sensitivity techniques, such as NMR spectroscopy. As an example, we demonstrate how Radon transform processing allows serial measurements of 15N-HSQC spectra of unlabelled peptides that would otherwise be infeasible. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Vantage Sensitivity: Environmental Sensitivity to Positive Experiences as a Function of Genetic Differences.

    Science.gov (United States)

    Pluess, Michael

    2017-02-01

    A large number of gene-environment interaction studies provide evidence that some people are more likely to be negatively affected by adverse experiences as a function of specific genetic variants. However, such "risk" variants are surprisingly frequent in the population. Evolutionary analysis suggests that genetic variants associated with increased risk for maladaptive development under adverse environmental conditions are maintained in the population because they are also associated with advantages in response to different contextual conditions. These advantages may include (a) coexisting genetic resilience pertaining to other adverse influences, (b) a general genetic susceptibility to both low and high environmental quality, and (c) a coexisting propensity to benefit disproportionately from positive and supportive exposures, as reflected in the recent framework of vantage sensitivity. After introducing the basic properties of vantage sensitivity and highlighting conceptual similarities and differences with diathesis-stress and differential susceptibility patterns of gene-environment interaction, selected and recent empirical evidence for the notion of vantage sensitivity as a function of genetic differences is reviewed. The unique contribution that the new perspective of vantage sensitivity may make to our understanding of social inequality will be discussed after suggesting neurocognitive and molecular mechanisms hypothesized to underlie the propensity to benefit disproportionately from benevolent experiences. © 2015 Wiley Periodicals, Inc.

  12. Sensitivities to neutrino electromagnetic properties at the TEXONO experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kosmas, T.S., E-mail: hkosmas@uoi.gr [Division of Theoretical Physics, University of Ioannina, GR 45110 Ioannina (Greece); Miranda, O.G., E-mail: omr@fis.cinvestav.mx [Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN, Apdo. Postal 14-740 07000 Mexico, DF (Mexico); Papoulias, D.K., E-mail: dimpap@cc.uoi.gr [Division of Theoretical Physics, University of Ioannina, GR 45110 Ioannina (Greece); AHEP Group, Instituto de Física Corpuscular – C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, C/Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain); Tórtola, M., E-mail: mariam@ific.uv.es [AHEP Group, Instituto de Física Corpuscular – C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, C/Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain); Valle, J.W.F. [AHEP Group, Instituto de Física Corpuscular – C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, C/Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain)

    2015-11-12

    The possibility of measuring neutral-current coherent elastic neutrino–nucleus scattering (CENNS) at the TEXONO experiment has opened high expectations towards probing exotic neutrino properties. Focusing on low threshold Germanium-based targets with kg-scale mass, we find a remarkable efficiency not only for detecting CENNS events due to the weak interaction, but also for probing novel electromagnetic neutrino interactions. Specifically, we demonstrate that such experiments are complementary in performing precision Standard Model tests as well as in shedding light on sub-leading effects due to neutrino magnetic moment and neutrino charge radius. This work employs realistic nuclear structure calculations based on the quasi-particle random phase approximation (QRPA) and takes into consideration the crucial quenching effect corrections. Such a treatment, in conjunction with a simple statistical analysis, shows that the attainable sensitivities are improved by one order of magnitude as compared to previous studies.

  13. Vantage sensitivity: individual differences in response to positive experiences.

    Science.gov (United States)

    Pluess, Michael; Belsky, Jay

    2013-07-01

    The notion that some people are more vulnerable to adversity as a function of inherent risk characteristics is widely embraced in most fields of psychology. This is reflected in the popularity of the diathesis-stress framework, which has received a vast amount of empirical support over the years. Much less effort has been directed toward the investigation of endogenous factors associated with variability in response to positive influences. One reason for the failure to investigate individual differences in response to positive experiences as a function of endogenous factors may be the absence of adequate theoretical frameworks. According to the differential-susceptibility hypothesis, individuals generally vary in their developmental plasticity regardless of whether they are exposed to negative or positive influences--a notion derived from evolutionary reasoning. On the basis of this now well-supported proposition, we advance herein the new concept of vantage sensitivity, reflecting variation in response to exclusively positive experiences as a function of individual endogenous characteristics. After distinguishing vantage sensitivity from theoretically related concepts of differential-susceptibility and resilience, we review some recent empirical evidence for vantage sensitivity featuring behavioral, physiological, and genetic factors as moderators of a wide range of positive experiences ranging from family environment and psychotherapy to educational intervention. Thereafter, we discuss genetic and environmental factors contributing to individual differences in vantage sensitivity, potential mechanisms underlying vantage sensitivity, and practical implications. 2013 APA, all rights reserved

  14. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
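The review's distinction between local and global approaches can be made concrete with a two-line experiment: a derivative-based local measure evaluated at a nominal point can report zero influence for a parameter that a global, variance-based view shows clearly matters. The quadratic toy response below is illustrative only and not drawn from the review.

```python
import numpy as np

def local_slope(f, x0, h=1e-6):
    """Local sensitivity: central finite-difference derivative at a nominal point."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# toy response whose minimum sits exactly at the nominal parameter value
f = lambda x: (x - 0.5) ** 2

s_local = local_slope(f, 0.5)          # ~0: local SA calls the parameter irrelevant
rng = np.random.default_rng(0)
var_global = np.var(f(rng.uniform(0, 1, 100_000)))  # clearly nonzero output variance
```

Here the local slope vanishes at the nominal point, yet the output variance over the full parameter range is 1/180 ≈ 0.0056, so a global method would rank the parameter as influential. This is one reason the review recommends matching the sensitivity analysis approach to the question being asked.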

  15. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer Hougaard

    2012-01-01

Classification models are becoming increasingly popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting...... the underlying neural encoding of an experiment defining multiple brain states. In this relation there is a great desire for the researcher to generate brain maps that highlight brain locations of importance to the classifier's decisions. Based on sensitivity analysis, we develop further procedures for model...... direction the individual locations influence the classification. We illustrate the visualization procedure on real data from a simple functional magnetic resonance imaging experiment....

  16. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10^-2 to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC); non-dispersive infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling employs well-defined and validated mechanisms for the CO/H2/oxidant systems.

  17. Development and Sensitivity Analysis of a Fully Kinetic Model of Sequential Reductive Dechlorination in Groundwater

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup

    2011-01-01

    experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most...
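The Morris screening mentioned in this abstract can be sketched generically: it averages absolute "elementary effects" (mu*) from one-at-a-time trajectories, giving a cheap parameter ranking before a more expensive Sobol analysis. The toy model below is a stand-in; the dechlorination model itself is not reproduced in the record.

```python
import numpy as np

def morris_mu_star(model, n_params, n_traj=50, delta=0.25, seed=0):
    """Morris screening on the unit hypercube: mean of the absolute
    elementary effects (mu*) of each parameter, from one-at-a-time
    trajectories that perturb one coordinate per step by `delta`."""
    rng = np.random.default_rng(seed)
    ee = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)  # start so x + delta stays in [0, 1]
        y = model(x)
        for i in rng.permutation(n_params):          # random perturbation order
            x_next = x.copy()
            x_next[i] += delta
            y_next = model(x_next)
            ee[i].append(abs(y_next - y) / delta)    # elementary effect of X_i
            x, y = x_next, y_next
    return np.array([np.mean(e) for e in ee])

# toy model: strong linear term, weak nonlinear term, one inert parameter
mu = morris_mu_star(lambda x: 10 * x[0] + x[1] ** 2, 3)
```

For the toy model mu* is exactly 10 for the first parameter, around 1 for the second, and 0 for the inert one, reproducing the screening-style ranking that methods like Morris are used for.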

  18. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

storage systems, where the actual values of the data are not relevant for the behavior of the system. For many systems the values are important. For instance, the control flow of the system can depend on the input values. We call this type of system data sensitive, as the execution is sensitive...... to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...... efficient model-checking and model-based testing. In the second we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of data-sensitive systems to allow learning of larger systems. In the third we develop an approach for modeling and model-based

  19. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

Five evapotranspiration (Et) models - the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models - were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...

  20. Validation of ASTEC v2.0 corium jet fragmentation model using FARO experiments

    International Nuclear Information System (INIS)

    Hermsmeyer, S.; Pla, P.; Sangiorgi, M.

    2015-01-01

Highlights: • Model validation base extended to six FARO experiments. • Focus on the calculation of the fragmented particle diameter. • Capability and limits of the ASTEC fragmentation model. • Sensitivity analysis of model outputs. - Abstract: ASTEC is an integral code for the prediction of Severe Accidents in Nuclear Power Plants. As such, it needs to cover all physical processes that could occur during accident progression, while keeping its models simple enough for the ensemble to stay manageable and produce results within an acceptable time. The present paper is concerned with the validation of the corium jet fragmentation model of ASTEC v2.0 rev3 by means of a selection of six experiments carried out within the FARO facility. The different conditions applied within these six experiments help to analyse the model behaviour in different situations and to expose model limits. In addition to comparing model outputs with experimental measurements, sensitivity analyses are applied to investigate the model. Results of the paper are (i) validation runs, accompanied by an identification of situations where the implemented fragmentation model does not match the experiments well, and a discussion of the results; (ii) special attention to the models calculating the diameter of fragmented particles, the identification of a fault in one of the implemented models, and a discussion of simplifications and ad hoc modifications to improve the model fit; and (iii) an investigation of the sensitivity of predictions to inputs and parameters. In this way, the paper offers a thorough investigation of the merits and limitations of the fragmentation model used in ASTEC.

  1. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  2. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments: Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth observations, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models over 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-of-century scenario.

  3. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le 
Colley F.).

  4. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
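
    The Latin hypercube sampling used in this kind of analysis is straightforward to sketch in plain numpy: each dimension is cut into equal-probability strata, one sample is drawn per stratum, and the strata are paired randomly across dimensions. The bounds below are illustrative placeholders, not the actual UNM shock-tube conditions.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """Latin hypercube sample: one point per equal-probability stratum
        along each dimension, strata randomly paired across dimensions."""
        rng = np.random.default_rng(rng)
        n_dims = len(bounds)
        # Stratified uniform draws on [0, 1): one per interval [i/n, (i+1)/n)
        u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
        # Decouple the dimensions by permuting each column independently
        for j in range(n_dims):
            u[:, j] = rng.permutation(u[:, j])
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        return lo + u * (hi - lo)

    # Hypothetical (lo, hi) bounds for five varied inputs: driver pressure and
    # density, test-section pressure and density, mole fraction. Illustrative
    # numbers only, not the values used in the UNM experiments.
    bounds = [(1e5, 5e6), (0.1, 5.0), (1e4, 1e5), (0.5, 8.0), (0.3, 0.7)]
    design = latin_hypercube(20, bounds, rng=0)
    print(design.shape)  # (20, 5)
    ```

    The appeal over plain Monte Carlo is that even a small design covers each input's full range evenly, which matters when every sample is an expensive hydrodynamics run.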

  5. The role of soil moisture in land surface-atmosphere coupling: climate model sensitivity experiments over India

    Science.gov (United States)

    Williams, Charles; Turner, Andrew

    2015-04-01

    It is generally acknowledged that anthropogenic land use changes, such as a shift from forested land into irrigated agriculture, may have an impact on regional climate and, in particular, rainfall patterns in both time and space. India provides an excellent example of a country in which widespread land use change has occurred during the last century, as the country tries to meet its growing demand for food. Of primary concern for agriculture is the Indian summer monsoon (ISM), which displays considerable seasonal and subseasonal variability. Although it is evident that changing rainfall variability will have a direct impact on land surface processes (such as soil moisture variability), the reverse impact is less well understood. However, the role of soil moisture in the coupling between the land surface and atmosphere needs to be properly explored before any potential impact of changing soil moisture variability on ISM rainfall can be understood. This paper attempts to address this issue, by conducting a number of sensitivity experiments using a state-of-the-art climate model from the UK Meteorological Office Hadley Centre: HadGEM2. Several experiments are undertaken, with the only difference between them being the extent to which soil moisture is coupled to the atmosphere. Firstly, the land surface is fully coupled to the atmosphere, globally (as in standard model configurations); secondly, the land surface is entirely uncoupled from the atmosphere, again globally, with soil moisture values being prescribed on a daily basis; thirdly, the land surface is uncoupled from the atmosphere over India but fully coupled elsewhere; and lastly, vice versa (i.e. the land surface is coupled to the atmosphere over India but uncoupled elsewhere). Early results from this study suggest certain 'hotspot' regions where the impact of soil moisture coupling/uncoupling may be important, and many of these regions coincide with previous studies. Focusing on the third experiment, i

  6. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  7. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le 
Colley F.).

  8. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and we propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
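
    The effective-sample-size idea can be sketched on a toy problem: a scalar parameter is calibrated against an autocorrelated functional trace, with the log-likelihood scaled by n_eff/n so that strongly correlated points are not over-counted as independent evidence. The simulator, noise model, and every number below are illustrative stand-ins, not the tantalum analysis or its hydrocode.

    ```python
    import numpy as np

    # Hypothetical stand-in for the expensive hydrocode: a smooth velocity-like
    # trace whose amplitude is the single parameter to calibrate.
    def simulator(theta, t):
        return theta * np.tanh(5 * t)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 400)
    truth = 2.0
    # AR(1) measurement noise: neighbouring samples are strongly correlated.
    rho, eps = 0.95, rng.normal(0, 0.05, 400)
    noise = np.empty(400)
    noise[0] = eps[0]
    for k in range(1, 400):
        noise[k] = rho * noise[k - 1] + eps[k]
    data = simulator(truth, t) + noise

    # Effective sample size for AR(1) noise: n_eff = n (1 - rho) / (1 + rho).
    n = len(t)
    n_eff = n * (1 - rho) / (1 + rho)

    # Grid posterior (flat prior) with the Gaussian log-likelihood scaled by
    # n_eff / n, using the innovation scale as a rough noise estimate.
    thetas = np.linspace(1.5, 2.5, 201)
    resid = data - np.array([simulator(th, t) for th in thetas])
    loglik = -(n_eff / n) * 0.5 * (resid ** 2).sum(axis=1) / 0.05 ** 2
    post = np.exp(loglik - loglik.max())
    print(thetas[post.argmax()])
    ```

    Scaling the likelihood does not move its maximum; what it changes is the width of the posterior, keeping the stated uncertainty honest about how little independent information 400 correlated samples actually carry.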

  9. Sensitivity and uncertainty analyses of the HCLL mock-up experiment

    International Nuclear Information System (INIS)

    Leichtle, D.; Fischer, U.; Kodeli, I.; Perel, R.L.; Klix, A.; Batistoni, P.; Villari, R.

    2010-01-01

    Within the European Fusion Technology Programme dedicated computational methods, tools and data have been developed and validated for sensitivity and uncertainty analyses of fusion neutronics experiments. The present paper is devoted to this kind of analyses on the recent neutronics experiment on a mock-up of the Helium-Cooled Lithium Lead Test Blanket Module for ITER at the Frascati neutron generator. They comprise both probabilistic and deterministic methodologies for the assessment of uncertainties of nuclear responses due to nuclear data uncertainties and their sensitivities to the involved reaction cross-section data. We have used MCNP and MCSEN codes in the Monte Carlo approach and DORT and SUSD3D in the deterministic approach for transport and sensitivity calculations, respectively. In both cases JEFF-3.1 and FENDL-2.1 libraries for the transport data and mainly ENDF/B-VI.8 and SCALE6.0 libraries for the relevant covariance data have been used. With a few exceptions, the two different methodological approaches were shown to provide consistent results. A total nuclear data related uncertainty in the range of 1-2% (1σ confidence level) was assessed for the tritium production in the HCLL mock-up experiment.

  10. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    Science.gov (United States)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  11. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
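
    The policy-acceptability idea can be sketched on a deliberately tiny MDP: solve for the base-case optimal policy, then redraw an uncertain parameter from its distribution and count how often that policy remains optimal. The two-state disease model, its rewards, and the uncertainty range are all hypothetical.

    ```python
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """P[a][s, s'] are transition probabilities, R[a][s] expected rewards.
        Returns optimal values and the greedy (optimal) policy."""
        n = R.shape[1]
        V = np.zeros(n)
        while True:
            Q = R + gamma * np.einsum('ast,t->as', P, V)
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)
            V = V_new

    # Toy 2-state, 2-action chronic-disease MDP (hypothetical numbers):
    # state 0 = "sick", state 1 = absorbing "dead"; action 0 = "wait",
    # action 1 = "treat"; p is the uncertain treatment response rate.
    def build(p):
        P = np.array([[[0.9, 0.1], [0.0, 1.0]],    # wait
                      [[p, 1 - p], [0.0, 1.0]]])   # treat
        R = np.array([[1.0, 0.0],                  # wait: full quality of life
                      [0.8, 0.0]])                 # treat: side-effect cost
        return P, R

    # Probabilistic sensitivity: how often does the base-case optimal policy
    # survive when p is drawn from its uncertainty distribution?
    rng = np.random.default_rng(1)
    base_policy = value_iteration(*build(0.97))[1]
    agree = np.mean([np.array_equal(value_iteration(*build(p))[1], base_policy)
                     for p in rng.uniform(0.85, 1.0, 200)])
    print(base_policy, agree)
    ```

    Sweeping the acceptance threshold over such agreement fractions is, in miniature, what a policy acceptability curve summarises; the point of the article's methods is making this feasible when the policy space is astronomically large rather than two states by two actions.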

  12. VUV-sensitive silicon-photomultipliers for the nEXO-experiment

    Energy Technology Data Exchange (ETDEWEB)

    Wrede, Gerrit; Bayerlein, Reimund; Hufschmidt, Patrick; Jamil, Ako; Schneider, Judith; Wagenpfeil, Michael; Ziegler, Tobias; Hoessl, Juergen; Anton, Gisela; Michel, Thilo [ECAP, Friedrich-Alexander-Universitaet Erlangen-Nuernberg (Germany)

    2016-07-01

    The nEXO (next Enriched Xenon Observatory) experiment will search for the neutrinoless double beta decay of Xe-136 with a liquid xenon TPC (Time Projection Chamber). The sensitivity of the experiment is related to the energy resolution, which itself depends on the accuracies of the measurements of the amount of drifting electrons and the number of scintillation photons with their wavelength being in the vacuum ultraviolet band. Silicon Photomultipliers (SiPM) shall be used for the detection of the scintillation light, since they can be produced extremely radiopure. Commercially available SiPM do not fulfill all requirements of the nEXO experiment, thus a dedicated development is necessary. To characterize the silicon photomultipliers, we have built a test apparatus for xenon liquefaction, in which a VUV-sensitive photomultiplier tube can be operated together with the SiPM. In this contribution we present our apparatus for the SiPM characterization measurements and our latest results on the test of the silicon photomultipliers for the detection of xenon scintillation light.

  13. Stress Sensitivity and Psychotic Experiences in 39 Low- and Middle-Income Countries.

    Science.gov (United States)

    DeVylder, Jordan E; Koyanagi, Ai; Unick, Jay; Oh, Hans; Nam, Boyoung; Stickley, Andrew

    2016-11-01

    Stress has a central role in most theories of psychosis etiology, but the relation between stress and psychosis has rarely been examined in large population-level data sets, particularly in low- and middle-income countries. We used data from 39 countries in the World Health Survey (n = 176 934) to test the hypothesis that stress sensitivity would be associated with psychotic experiences, using logistic regression analyses. Respondents in low-income countries reported higher stress sensitivity than respondents in higher-income countries. Greater stress sensitivity was associated with increased odds for psychotic experiences, even when adjusted for co-occurring anxiety and depressive symptoms: adjusted odds ratio (95% CI) = 1.17 (1.15-1.19) per unit increase in stress sensitivity (range 2-10). This association was consistent and significant across nearly every country studied, and translated into a difference in psychotic experience prevalence ranging from 6.4% among those with the lowest levels of stress sensitivity up to 22.2% among those with the highest levels. These findings highlight the generalizability of the association between psychosis and stress sensitivity in the largest and most globally representative community-level sample to date, and support the targeting of stress sensitivity as a potential component of individual- and population-level interventions for psychosis. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  14. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
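
    The deterministic/statistical contrast can be made concrete on a toy response: the deterministic route propagates input variances through first-order (Taylor) partial derivatives, while the statistical route simulates the model directly. The response t(v, L) = L/v and all the numbers are illustrative, not from the repository models.

    ```python
    import numpy as np

    # Toy response: a "travel time" t(v, L) = L / v (hypothetical model).
    f = lambda v, L: L / v

    # Input means and standard deviations (illustrative values).
    mu_v, sd_v = 2.0, 0.1
    mu_L, sd_L = 100.0, 5.0

    # Deterministic approach: first-order Taylor expansion about the means.
    # Var[f] ≈ (df/dv)^2 Var[v] + (df/dL)^2 Var[L] for independent inputs.
    df_dv = -mu_L / mu_v**2
    df_dL = 1.0 / mu_v
    var_taylor = df_dv**2 * sd_v**2 + df_dL**2 * sd_L**2

    # Statistical approach: Monte Carlo simulation through the model.
    rng = np.random.default_rng(0)
    samples = f(rng.normal(mu_v, sd_v, 100_000), rng.normal(mu_L, sd_L, 100_000))
    print(var_taylor, samples.var())
    ```

    For this mildly nonlinear response the two estimates nearly coincide; the abstract's point is that each route scales differently when the model has hundreds of inputs, which is where the trade-off between adjoint derivatives and simulation becomes decisive.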

  15. Sensitivity of the Humboldt current system to global warming: a downscaling experiment of the IPSL-CM4 model

    Energy Technology Data Exchange (ETDEWEB)

    Echevin, Vincent [LOCEAN, Paris (France); Goubanova, Katerina; Dewitte, Boris [LEGOS, Toulouse (France); IMARPE, IGP, LEGOS, Lima (Peru); Belmadani, Ali [LOCEAN, Paris (France); LEGOS, Toulouse (France); University of Hawaii at Manoa, IPRC, International Pacific Research Center, SOEST, Honolulu, Hawaii (United States)

    2012-02-15

    The impact of climate warming on the seasonal variability of the Humboldt Current system ocean dynamics is investigated. The IPSL-CM4 large scale ocean circulation resulting from two contrasted climate scenarios, the so-called Preindustrial and quadrupling CO{sub 2}, are downscaled using an eddy-resolving regional ocean circulation model. The intense surface heating by the atmosphere in the quadrupling CO{sub 2} scenario leads to a strong increase of the surface density stratification, a thinner coastal jet, an enhanced Peru-Chile undercurrent, and an intensification of nearshore turbulence. Upwelling rates respond quasi-linearly to the change in wind stress associated with anthropogenic forcing, and show a moderate decrease in summer off Peru and a strong increase off Chile. Results from sensitivity experiments show that a 50% wind stress increase does not compensate for the surface warming resulting from heat flux forcing and that the associated mesoscale turbulence increase is a robust feature. (orig.)

  16. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    Directory of Open Access Journals (Sweden)

    L. A. Lee

    2011-12-01

    Full Text Available Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process
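
    The emulation step itself can be sketched in a few lines: fit a Gaussian process to a handful of runs of an "expensive" model, then predict cheaply everywhere else. This is a bare-bones stand-in (one input dimension, fixed kernel hyperparameters, zero prior mean), not the aerosol-model setup.

    ```python
    import numpy as np

    def rbf(a, b, ell=0.3, var=1.0):
        """Squared-exponential covariance between 1-D point sets a and b."""
        d = a[:, None] - b[None, :]
        return var * np.exp(-0.5 * (d / ell) ** 2)

    def gp_emulate(x_train, y_train, x_new, jitter=1e-6):
        """Gaussian-process posterior mean and variance (zero prior mean)."""
        K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
        Ks = rbf(x_new, x_train)
        alpha = np.linalg.solve(K, y_train)
        mean = Ks @ alpha
        var = rbf(x_new, x_new).diagonal() - np.einsum(
            'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mean, var

    # Stand-in for the expensive model, evaluated at only 8 design points.
    model = lambda x: np.sin(2 * np.pi * x)
    x_train = np.linspace(0, 1, 8)
    y_train = model(x_train)

    x_new = np.linspace(0, 1, 50)
    mean, var = gp_emulate(x_train, y_train, x_new)
    print(np.max(np.abs(mean - model(x_new))))  # emulator error across the space
    ```

    Once fitted, the emulator can be evaluated millions of times at negligible cost, which is what makes the variance decomposition over the full parameter space affordable.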

  17. About the use of rank transformation in sensitivity analysis of model output

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Sobol', Ilya M

    1995-01-01

    Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank transformed statistics are more robust, and provide a useful solution in the presence of long tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken, to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first order terms (the 'main effects'), at the expense of the 'interactions' and 'higher order interactions'. As a result the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis.
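
    The core effect of a rank transformation is easy to demonstrate: for a monotonic but strongly nonlinear, long-tailed input-output relation, the Pearson coefficient computed on ranks (i.e. the Spearman coefficient) recovers the full dependence that the raw linear correlation understates. The relation y = exp(10x) is a generic illustration, not one of the note's examples.

    ```python
    import numpy as np

    def ranks(x):
        """Rank-transform: replace each value by its position in the sorted
        order (no tie handling; fine for continuous draws)."""
        r = np.empty_like(x, dtype=float)
        r[np.argsort(x)] = np.arange(1, len(x) + 1)
        return r

    def pearson(x, y):
        return np.corrcoef(x, y)[0, 1]

    # A monotonic but strongly nonlinear input-output relation.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 2000)
    y = np.exp(10 * x)                     # long-tailed output

    raw = pearson(x, y)                    # linear correlation understates it
    ranked = pearson(ranks(x), ranks(y))   # Spearman: Pearson on the ranks
    print(round(raw, 3), round(ranked, 3))
    ```

    Since y is an exact monotonic function of x, the rank correlation is 1 by construction, while the raw coefficient stays well below it; this is the gain side of the transformation, and the note's warning is about the interaction effects it silently discounts.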

  18. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.

  19. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  20. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used to evaluate microbial pollution levels. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce the microbial pollution in ecosystems and water sources and thus help to predict the risk of food- and waterborne diseases. In this study we performed a sensitivity analysis for the KINEROS/STWIR model developed to predict FIO transport out of manured fields to other fields and water bodies in order to identify input variables that control the transport uncertainty. The distributions of model input parameters were set to encompass values found from three-year experiments at the USDA-ARS OPE3 experimental site in Beltsville and publicly available information. Sobol' indices and complementary regression trees were used to perform the global sensitivity analysis of the model and to explore the interactions between model input parameters on the proportion of FIO removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence in the model behavior, whereas soil and manure properties ranked lower. The field length had only moderate effect on the model output sensitivity to the model inputs. Among the manure-related properties the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. That underscored the need to better characterize the FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, results indicate the opportunity of obtaining large-scale estimates of FIO
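
    First-order Sobol' indices of the kind used here can be estimated with a pick-freeze Monte Carlo scheme. The sketch below uses a toy additive response whose reading as a rainfall-runoff surrogate is purely illustrative; the KINEROS/STWIR inputs and their distributions are not reproduced.

    ```python
    import numpy as np

    def first_order_sobol(f, n_dims, n=20_000, rng=0):
        """Pick-freeze Monte Carlo estimate of first-order Sobol' indices:
        S_i = Var(E[Y|X_i]) / Var(Y), with inputs i.i.d. uniform on [0, 1]."""
        rng = np.random.default_rng(rng)
        A = rng.random((n, n_dims))
        B = rng.random((n, n_dims))
        yA = f(A)
        var_y = yA.var()
        S = np.empty(n_dims)
        for i in range(n_dims):
            ABi = B.copy()
            ABi[:, i] = A[:, i]   # freeze coordinate i, resample the rest
            S[i] = np.mean(yA * (f(ABi) - f(B))) / var_y
        return S

    # Hypothetical surrogate: the first input (say, rainfall intensity)
    # dominates, the third (a manure property) barely matters.
    f = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2]
    print(first_order_sobol(f, 3).round(2))
    ```

    For this additive function the analytic indices are 16/20.01 ≈ 0.80, 4/20.01 ≈ 0.20 and 0.01/20.01 ≈ 0.0005, so the estimator's ranking can be checked directly; on a real model each f(·) call is a full simulation, which is why efficient designs matter.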

  1. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on large-CPU-time computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  2. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods (Sobol, FAST, and a sparse-grid stochastic collocation technique based on the Smolyak algorithm) were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and temporally dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit lower sensitivity on pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Detection of C',Cα correlations in proteins using a new time- and sensitivity-optimal experiment

    International Nuclear Information System (INIS)

    Lee, Donghan; Voegeli, Beat; Pervushin, Konstantin

    2005-01-01

    A sensitivity- and time-optimal experiment, called COCAINE (CO-CA In- and aNtiphase spectra with sensitivity Enhancement), is proposed to correlate the chemical shifts of 13C' and 13Cα spins in proteins. A comparison of the sensitivity and duration of the experiment with the corresponding theoretical unitary bounds shows that the COCAINE experiment achieves the maximum possible transfer efficiency in the shortest possible time; in this sense the sequence is optimal. Compared to the standard HSQC, the COCAINE experiment delivers a 2.7-fold gain in sensitivity. This newly proposed experiment can be used for the assignment of backbone resonances in large deuterated proteins, effectively bridging 13C' and 13Cα resonances in adjacent amino acids. Due to the spin-state selection employed, the COCAINE experiment can also be used for efficient measurement of one-bond couplings (e.g. scalar and residual dipolar couplings) in any two-spin system (e.g. the N-H pair in the protein backbone).

  4. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: Experiments and computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Bøtter-Jensen, L.; Agersnap Larsen, N.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL), a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes believed to be occurring. The computer model used includes both shallow and deep ('hard-to-bleach') traps, OSL ('easy-to-bleach') traps, and radiative and non-radiative recombination centres. The model has previously been used successfully to account for sensitivity changes in quartz due to thermal annealing. The simulations are able to reproduce qualitatively the main features of the experimental results, including sensitivity changes as a function of reuse and their dependence upon bleaching time and laboratory dose. The sensitivity changes are believed to be the result of a combination of shallow-trap and deep-trap effects.

  5. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Science.gov (United States)

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  6. The influence of cirrus cloud-radiative forcing on climate and climate sensitivity in a general circulation model

    International Nuclear Information System (INIS)

    Lohmann, U.; Roeckner, E.

    1994-01-01

    Six numerical experiments have been performed with a general circulation model (GCM) to study the influence of high-level cirrus clouds and global sea surface temperature (SST) perturbations on climate and climate sensitivity. The GCM used in this investigation is the third-generation ECHAM3 model developed jointly by the Max-Planck-Institute for Meteorology and the University of Hamburg. It is shown that the model is able to reproduce many features of the observed cloud-radiative forcing with considerable skill, such as the annual mean distribution, the response to seasonal forcing and the response to observed SST variations in the equatorial Pacific. In addition to a reference experiment where the cirrus emissivity is computed as a function of the cloud water content, two sensitivity experiments have been performed in which the cirrus emissivity is either set to zero everywhere above 400 hPa ('transparent cirrus') or set to one ('black cirrus'). These three experiments are repeated identically, except for prescribing a globally uniform SST warming of 4 K. (orig.)

  7. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  8. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    Science.gov (United States)

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of any parameters to recharge. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow
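Nonlinear-regression sensitivity statistics of the kind generated for these watershed models are typically built from scaled parameter derivatives. Below is a minimal sketch of a composite scaled sensitivity computed by forward finite differences; the `simulate` function and its two parameters are hypothetical stand-ins for a watershed model, not the PRMS code used in the record.

```python
import numpy as np

def simulate(params, t):
    # Hypothetical stand-in for a watershed model: recharge as a function of
    # a temperature-like decay factor a and a soil-storage factor s.
    a, s = params
    return s * np.exp(-a * t)

def composite_scaled_sensitivity(params, t, eps=1e-6):
    """CSS_j = sqrt(mean_i ((dy_i/db_j) * b_j)^2), via forward differences."""
    y0 = simulate(params, t)
    css = np.empty(len(params))
    for j, b in enumerate(params):
        p = np.array(params, dtype=float)
        p[j] += eps * abs(b)                       # perturb one parameter
        dy = (simulate(p, t) - y0) / (eps * abs(b))  # finite-difference slope
        css[j] = np.sqrt(np.mean((dy * b) ** 2))   # scale by parameter value
    return css

t = np.linspace(0.0, 2.0, 50)
css = composite_scaled_sensitivity([1.5, 0.8], t)
print(css)
```

Parameters with large composite scaled sensitivities are the ones that regression-based diagnostics flag as dominant controls on the simulated output.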

  9. Uncertainty and sensitivity analysis of biokinetic models for radiopharmaceuticals used in nuclear medicine

    International Nuclear Information System (INIS)

    Li, W. B.; Hoeschen, C.

    2010-01-01

    Mathematical models for the kinetics of radiopharmaceuticals in humans were developed and are used to estimate the radiation absorbed dose for patients in nuclear medicine by the International Commission on Radiological Protection and the Medical Internal Radiation Dose (MIRD) Committee. However, because the residence times used were derived from different subjects, partly even with different ethnic backgrounds, a large variation in the model parameters propagates into a high uncertainty of the dose estimate. In this work, a method was developed for analysing the uncertainty and sensitivity of the biokinetic models that are used to calculate the residence times. The biokinetic model of 18F-FDG (FDG) developed by the MIRD Committee was analysed by this method. The sources of uncertainty of all model parameters were evaluated based on the experiments. The Latin hypercube sampling technique was used to sample the parameters for model input. Kinetic modelling of FDG in humans was performed. Sensitivity of the model parameters was indicated by combining the model input and output, using regression and partial correlation analysis. The transfer rate parameter from plasma to the fast tissue compartment is the parameter with the greatest influence on the residence time of plasma. Optimisation of biokinetic data acquisition in clinical practice by exploiting the sensitivity of model parameters obtained in this study is discussed. (authors)
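Latin hypercube sampling, as used here to draw the biokinetic parameter sets, can be sketched in a few lines: each input dimension is split into n equal-probability strata, and each stratum is sampled exactly once in a random order. Dimensions and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d, rng):
    """One stratified sample per row: each column visits all n strata
    [k/n, (k+1)/n) exactly once, with a uniform jitter inside each stratum."""
    perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T  # (n, d)
    return (perms + rng.uniform(size=(n, d))) / n

X = latin_hypercube(10, 2, rng)
# Every column covers all 10 strata exactly once:
print(np.sort((X * 10).astype(int), axis=0)[:, 0])  # → [0 1 2 3 4 5 6 7 8 9]
```

Unlike plain Monte Carlo, this guarantees coverage of the full marginal range of every parameter even at small sample sizes, which is why it is favoured for expensive biokinetic model runs.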

  10. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
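The first (screening) step of such a two-step approach, Morris's method of elementary effects, can be sketched as follows. The toy model, number of trajectories, and step size are assumptions for illustration; a large mean absolute effect flags an important parameter, and a large standard deviation flags nonlinearity or interactions.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Toy model: x0 strong linear, x1 interacts with x2, x3 inactive.
    return 10 * x[0] + 5 * x[1] * x[2] + 0 * x[3]

d, r, delta = 4, 50, 0.25
effects = [[] for _ in range(d)]
for _ in range(r):                       # r random one-at-a-time trajectories
    x = rng.uniform(0, 1 - delta, size=d)
    y = model(x)
    for i in rng.permutation(d):         # perturb each factor once per trajectory
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta)   # elementary effect of factor i
        x, y = x_new, y_new

mu_star = np.array([np.mean(np.abs(e)) for e in effects])  # importance
sigma = np.array([np.std(e) for e in effects])             # interaction signal
print(np.round(mu_star, 1))
```

In the two-step workflow, only the factors with non-negligible `mu_star` would be carried into the expensive variance-based (gPCE) stage.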

  11. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.

  12. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, in order to evaluate the sensitivity of the WAVEWATCH III model to each of them, determine how many of these parameters should be considered for further discussion, and rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan, the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  13. Performance of high-resolution position-sensitive detectors developed for storage-ring decay experiments

    International Nuclear Information System (INIS)

    Yamaguchi, T.; Suzaki, F.; Izumikawa, T.; Miyazawa, S.; Morimoto, K.; Suzuki, T.; Tokanai, F.; Furuki, H.; Ichihashi, N.; Ichikawa, C.; Kitagawa, A.; Kuboki, T.; Momota, S.; Nagae, D.; Nagashima, M.; Nakamura, Y.; Nishikiori, R.; Niwa, T.; Ohtsubo, T.; Ozawa, A.

    2013-01-01

    Highlights: • Position-sensitive detectors were developed for storage-ring decay spectroscopy. • Fiber scintillation and silicon strip detectors were tested with heavy ion beams. • A new fiber scintillation detector showed an excellent position resolution. • Position and energy detection by silicon strip detectors enable full identification. -- Abstract: As next-generation spectroscopic tools, heavy-ion cooler storage rings will be a unique application of highly charged RI beam experiments. Decay spectroscopy of highly charged rare isotopes provides important information relevant to stellar conditions, such as for s- and r-process nucleosynthesis. In-ring decay products of highly charged RI will be momentum-analyzed and reach a position-sensitive detector set-up located outside of the storage orbit. To realize such in-ring decay experiments, we have developed and tested two types of high-resolution position-sensitive detectors: silicon strips and scintillating fibers. The beam test experiments resulted in excellent position resolutions for both detectors, which will be available for future storage-ring experiments.

  14. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Directory of Open Access Journals (Sweden)

    Sarah A Gerson

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  15. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    Science.gov (United States)

    Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  16. Operational experience with model-based steering in the SLC linac

    International Nuclear Information System (INIS)

    Thompson, K.A.; Himel, T.; Moore, S.; Sanchez-Chopitea, L.; Shoaee, H.

    1989-03-01

    Operational experience with model-driven steering in the linac of the Stanford Linear Collider is discussed. Important issues include two-beam steering, sensitivity of algorithms to faulty components, sources of disagreement with the model, and the effects of the finite resolution of beam position monitors. Methods developed to make the steering algorithms more robust in the presence of such complications are also presented. 5 refs., 1 fig

  17. Sensitivity Evaluation of the Daily Thermal Predictions of the AGR-1 Experiment in the Advanced Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Grant Hawkes; James Sterbentz; John Maki

    2011-05-01

    A temperature sensitivity evaluation has been performed for the AGR-1 fuel experiment on an individual capsule. A series of cases were compared to a base case by varying different input parameters to the ABAQUS finite element thermal model. These input parameters were varied by ±10% to show the temperature sensitivity to each parameter. The most sensitive parameters are the outer control gap distance, the heat rate in the fuel compacts, and the neon gas fraction. Thermal conductivities of the compacts and graphite holder were in the middle of the list for sensitivity. The smallest effects were for the emissivities of the stainless steel, graphite, and thru tubes. Sensitivity calculations were also performed as a function of fluence. These calculations showed a general temperature rise with increasing fluence, a result of the thermal conductivity of the fuel compacts and graphite holder decreasing with fluence.
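A ±10% one-at-a-time sweep of the kind described here can be sketched with a hypothetical lumped thermal model standing in for the ABAQUS finite-element model; all parameter names, values, and the resistance-network form below are illustrative assumptions, not data from the experiment.

```python
# Hypothetical steady-state resistance network: T = T_coolant + q * (R_gap + R_solid)
def fuel_temperature(heat_rate, gap, conductivity):
    return 300.0 + heat_rate * (gap / 0.1 + 1.0 / conductivity)

base = {"heat_rate": 1.0, "gap": 0.05, "conductivity": 0.02}
T0 = fuel_temperature(**base)

# Perturb each input by -10% and +10%, one at a time, and record the shift.
for name in base:
    for factor in (0.9, 1.1):
        p = dict(base)
        p[name] *= factor
        dT = fuel_temperature(**p) - T0
        print(f"{name:13s} x{factor:.1f}: dT = {dT:+.2f} K")
```

Ranking the inputs by |dT| reproduces the kind of sensitivity ordering reported in the abstract; in this toy network the heat rate dominates and the gap term contributes least.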

  18. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: experiments and computer simulations

    International Nuclear Information System (INIS)

    McKeever, S.W.S.; Oklahoma State Univ., Stillwater, OK; Boetter-Jensen, L.; Agersnap Larsen, N.; Mejdahl, V.; Poolton, N.R.J.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes believed to be occurring. The computer model used includes both shallow and deep ('hard-to-bleach') traps, OSL ('easy-to-bleach') traps, and radiative and non-radiative recombination centres. The model has previously been used successfully to account for sensitivity changes in quartz due to thermal annealing. The simulations are able to reproduce qualitatively the main features of the experimental results including sensitivity changes as a function of re-use, and their dependence upon bleaching time and laboratory dose. The sensitivity changes are believed to be the result of a combination of shallow trap and deep trap effects. (author)

  19. Global sensitivity analysis of GEOS-Chem modeled ozone and hydrogen oxides during the INTEX campaigns

    Directory of Open Access Journals (Sweden)

    K. E. Christian

    2018-02-01

    Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios, with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.

  20. Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods

    Directory of Open Access Journals (Sweden)

    Lee Sael

    2013-10-01

    With the accumulation of next-generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of a protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure predictions alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of the protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. A short review of current efforts to integrate experimental data into structural modeling is also provided.

  1. The Sensitivity of State Differential Game Vessel Traffic Model

    Directory of Open Access Journals (Sweden)

    Lisowski Józef

    2016-04-01

    The paper presents the application of the theory of sensitivity of deterministic control systems to game control systems of moving objects, such as ships, airplanes and cars. The sensitivity of a parametric model of the game ship control process in collision situations is presented. First-order and k-th order sensitivity functions of the parametric model of the process control are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained by computer simulation in Matlab/Simulink software, are presented. Finally, proposals are given regarding the use of sensitivity analysis in the practical synthesis of a computer-aided navigator system for potential collision situations.

  2. Uncertainty and Sensitivity Analysis of Filtration Models for Non-Fickian transport and Hyperexponential deposition

    DEFF Research Database (Denmark)

    Yuan, Hao; Sin, Gürkan

    2011-01-01

    Uncertainty and sensitivity analyses are carried out to investigate the predictive accuracy of the filtration models for describing non-Fickian transport and hyperexponential deposition. Five different modeling approaches, involving the elliptic equation with different types of distributed filtration coefficients and the CTRW equation expressed in Laplace space, are selected to simulate eight experiments. These experiments involve both porous media and colloid-medium interactions of different heterogeneity degrees. The uncertainty of elliptic equation predictions with distributed filtration coefficients is larger than that with a single filtration coefficient. The uncertainties of model predictions from the elliptic equation and CTRW equation in Laplace space are minimal for solute transport. Higher uncertainties of parameter estimation and model outputs are observed in the cases with the porous...

  3. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models fall into two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. Data from six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors, and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.

  4. Climate and climate change sensitivity to model configuration in the Canadian RCM over North America

    Energy Technology Data Exchange (ETDEWEB)

    De Elia, Ramon [Ouranos Consortium on Regional Climate and Adaptation to Climate Change, Montreal (Canada); Centre ESCER, Univ. du Quebec a Montreal (Canada); Cote, Helene [Ouranos Consortium on Regional Climate and Adaptation to Climate Change, Montreal (Canada)

    2010-06-15

    Climate simulations performed with Regional Climate Models (RCMs) have been found to show sensitivity to parameter settings. The origin, consequences and interpretations of this sensitivity are varied, but it is generally accepted that sensitivity studies are very important for a better understanding and a more cautious handling of RCM results. In this work we present sensitivity experiments performed on the simulated climate produced by the Canadian Regional Climate Model (CRCM). In addition to climate sensitivity to parameter variation, we analyse the impact of the sensitivity on the climate change signal simulated by the CRCM. These studies are performed on 30-year simulated present and future seasonal climates, and we have analysed the effect of seven kinds of configuration modifications: CRCM initial conditions, lateral boundary condition (LBC) nesting update interval, driving Global Climate Model (GCM), driving GCM member, large-scale spectral nudging, CRCM version, and domain size. Results show that large changes in both the driving model and the CRCM physics seem to be the main sources of sensitivity for the simulated climate and the climate change. Their effects dominate those of configuration issues, such as the use or not of large-scale nudging, domain size, or LBC update interval. Results suggest that in most cases, differences between simulated climates for different CRCM configurations are not transferred to the estimated climate change signal: in general, these tend to cancel each other out. (orig.)

  5. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, while in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 x CO{sub 2} sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
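    The ZDM that anchors this record is compact enough to write down. A minimal sketch, with illustrative parameter values rather than the paper's: the linearized budget C dT/dt = F - lam*T has equilibrium sensitivity dT = dF/lam.

```python
def zdm_response(forcing, lam=1.2, heat_cap=8.0, dt=0.1, steps=20000):
    """Step the linearized budget C*dT/dt = F - lam*T to equilibrium.

    forcing  : radiative forcing F (W m^-2)
    lam      : global-mean radiative response coefficient (W m^-2 K^-1)
    heat_cap : effective heat capacity C; it sets only the adjustment time,
               not the equilibrium response F/lam.
    """
    T = 0.0
    for _ in range(steps):
        T += dt * (forcing - lam * T) / heat_cap
    return T

# A 2xCO2-like forcing of 3.7 W m^-2 with lam = 1.2 W m^-2 K^-1 gives the
# analytic equilibrium response F/lam = 3.7/1.2, roughly 3.1 K.
```

    Model B's two-zone version adds a DHT coupling term between the zonal budgets; the single-equation sketch above corresponds to the ZDM only.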

  6. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    Science.gov (United States)

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insight into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary

  7. The sensitivity of catchment runoff models to rainfall data at different spatial scales

    Directory of Open Access Journals (Sweden)

    V. A. Bell

    2000-01-01

    Full Text Available The sensitivity of catchment runoff models to rainfall is investigated at a variety of spatial scales using data from a dense raingauge network and weather radar. These data form part of the HYREX (HYdrological Radar EXperiment) dataset. They encompass records from 49 raingauges over the 135 km2 Brue catchment in south-west England together with 2 and 5 km grid-square radar data. Separate rainfall time-series for the radar and raingauge data are constructed on 2, 5 and 10 km grids, and as catchment average values, at a 15 minute time-step. The sensitivity of the catchment runoff models to these grid scales of input data is evaluated on selected convective and stratiform rainfall events. Each rainfall time-series is used to produce an ensemble of modelled hydrographs in order to investigate this sensitivity. The distributed model is shown to be sensitive to the locations of the raingauges within the catchment and hence to the spatial variability of rainfall over the catchment. Runoff sensitivity is strongest during convective rainfall when a broader spread of modelled hydrographs results, with twice the variability of that arising from stratiform rain. Sensitivity to rainfall data and model resolution is explored and, surprisingly, best performance is obtained using a lower resolution of rainfall data and model. Results from the distributed catchment model, the Simple Grid Model, are compared with those obtained from a lumped model, the PDM. Performance from the distributed model is found to be only marginally better during stratiform rain (R2 of 0.922 compared to 0.911 but significantly better during convective rain (R2 of 0.953 compared to 0.909. The improved performance from the distributed model can, in part, be credited to the excellence of the dense raingauge network which would not be the norm for operational flood warning systems. 
In the final part of the paper, the effect of rainfall resolution on the performance of the 2 km distributed

  8. Some sensitivities of a coupled ocean-atmosphere GCM

    International Nuclear Information System (INIS)

    Stockdale, T.; Latif, M.; Burgers, G.; Wolff, J.O.

    1994-01-01

    A coupled ocean-atmosphere GCM is being developed for use in seasonal forecasting. As part of the development work, a number of experiments have been made to explore some of the sensitivities of the coupled model system. The overall heat balance of the tropics is found to be very sensitive to convective cloud cover. Adjusting the cloud parameterization to produce stable behaviour of the coupled model also leads to better agreement between model radiative fluxes and satellite data. A further sensitivity is seen to changes in low-level marine stratus, which is under-represented in the initial model experiments. An increase in this cloud in the coupled model produces a small improvement in both the global mean state and the phase of the east Pacific annual cycle. The computational expense of investigating such small changes is emphasized. An indication of model sensitivity to surface albedo is also presented. The sensitivity of the coupled GCM to initial conditions is investigated. The model is very sensitive, with tiny perturbations able to determine El Nino or non-El Nino conditions just six months later. This large sensitivity may be related to the relatively weak amplitude of the model ENSO cycle. (orig.)

  9. The mobilisation model and parameter sensitivity

    International Nuclear Information System (INIS)

    Blok, B.M.

    1993-12-01

    In the PRObabilistic Safety Assessment (PROSA) of radioactive waste in a salt repository, one of the nuclide release scenarios is the subrosion scenario. A new subrosion model SUBRECN has been developed. In this model the combined effect of a depth-dependent subrosion, glass dissolution, and salt rise has been taken into account. The subrosion model SUBRECN and the implementation of this model in the German computer program EMOS4 are presented. A new computer program PANTER is derived from EMOS4. PANTER models releases of radionuclides via subrosion from a disposal site in a salt pillar into the biosphere. For the uncertainty and sensitivity analyses of the new subrosion model, Latin Hypercube Sampling has been used to determine values for the uncertain parameters. The influence of the uncertainty in the parameters on the dose calculations has been investigated with the following sensitivity techniques: Spearman Rank Correlation Coefficients, Partial Rank Correlation Coefficients, Standardised Rank Regression Coefficients, and the Smirnov Test. (orig./HP)
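    The sampling and ranking steps in this record can be sketched in a few lines of code. The "dose" model below is a hypothetical stand-in for PANTER, not its equations, and the parameter bounds are made up for illustration.

```python
import random

def latin_hypercube(n, bounds, rng=random):
    """n stratified, shuffled samples per parameter (one LHS column each)."""
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append([lo + s * (hi - lo) for s in strata])
    return list(zip(*cols))            # n rows of parameter tuples

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Rank correlation; valid without ties, as here (continuous samples)."""
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    return cov / sum((a - m) ** 2 for a in rx)

# Hypothetical dose response: strongly increasing in p1, nearly flat in p2.
random.seed(0)
sample = latin_hypercube(200, [(0.0, 1.0), (0.0, 1.0)])
dose = [10 * p1 ** 2 + 0.1 * p2 for p1, p2 in sample]
rho1 = spearman([s[0] for s in sample], dose)
rho2 = spearman([s[1] for s in sample], dose)
```

    A partial rank correlation would additionally regress out the other parameters' ranks before correlating; the plain Spearman coefficient shown here is the simplest member of the family the abstract lists.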

  10. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems
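    GRESS itself instruments FORTRAN source, but the direct technique it automates can be illustrated in miniature with operator overloading: a dual number carries a derivative through the computation. The response function below is a hypothetical stand-in, not a reactor physics code.

```python
class Dual:
    """Forward-mode AD number: `f` is the value, `d` the carried derivative."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.f + other.f, self.d + other.d)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.f * other.f, self.d * other.f + self.f * other.d)
    __rmul__ = __mul__

def response(k):
    # Hypothetical model response: R(k) = 3k^2 + 2k
    return 3 * k * k + 2 * k

x = 2.0
out = response(Dual(x, 1.0))     # seed dk/dk = 1 (the direct pass)
relative = out.d * x / out.f     # normalized first derivative
# R(2) = 16 and dR/dk = 6k + 2 = 14, so the relative sensitivity is 1.75.
```

    The abstract's "normalized first derivatives" correspond to the `relative` quantity above: the raw derivative scaled by parameter and response values.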

  11. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  12. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening technique with a relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the model’s outputs and to gain insights into predator–prey systems. Structure and results of the sensitivity analysis of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and of the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both of the models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
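    The Morris screening idea is small enough to sketch: one-at-a-time steps along random trajectories yield elementary effects, whose mean absolute value (mu*) ranks parameter influence. The three-parameter surrogate below is a made-up stand-in, not the PPP or NFM model.

```python
import random
import statistics

def morris_mu_star(model, bounds, n_traj=30, levels=4, rng=random.Random(1)):
    """Mean absolute elementary effect (mu*) per parameter, unit-cube scaled."""
    k = len(bounds)
    delta = levels / (2 * (levels - 1))
    effects = [[] for _ in range(k)]
    scale = lambda x: [lo + xi * (hi - lo) for xi, (lo, hi) in zip(x, bounds)]
    for _ in range(n_traj):
        x = [rng.randrange(levels - 1) / (levels - 1) for _ in range(k)]
        base = model(scale(x))
        for i in rng.sample(range(k), k):     # one-at-a-time, random order
            x[i] += delta if x[i] + delta <= 1 else -delta
            y = model(scale(x))
            effects[i].append(abs(y - base) / delta)
            base = y
    return [statistics.mean(e) for e in effects]

# Hypothetical surrogate for "prey items killed": driven by a growth-rate-like
# and a hunting-radius-like parameter, nearly flat in an inert third parameter.
mu = morris_mu_star(lambda p: 5 * p[0] + 5 * p[1] + 0.01 * p[2],
                    [(0, 1), (0, 1), (0, 1)])
```

    For this linear surrogate every elementary effect equals the corresponding coefficient, so mu* recovers the [5, 5, 0.01] influence pattern exactly; real individual-based models would additionally show a large effect spread (sigma) for interacting parameters.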

  13. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  14. The developments and verifications of trace model for IIST LOCA experiments

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, W. X. [Inst. of Nuclear Engineering and Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China); Wang, J. R.; Lin, H. T. [Inst. of Nuclear Energy Research, Taiwan, No. 1000, Wenhua Rd., Longtan Township, Taoyuan County 32546, Taiwan (China); Shih, C.; Huang, K. C. [Inst. of Nuclear Engineering and Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China); Dept. of Engineering and System Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China)

    2012-07-01

    The test facility IIST (INER Integral System Test) is a Reduced-Height and Reduced-Pressure (RHRP) integral test loop, which was constructed for the purposes of conducting thermal hydraulic and safety analysis of the Westinghouse three-loop PWR Nuclear Power Plants. The main purpose of this study is to develop and verify TRACE models of IIST through the IIST small break loss of coolant accident (SBLOCA) experiments. First, two different IIST TRACE models, a pipe-vessel model and a 3-D vessel component model, have been built. The steady state and transient calculation results show that both TRACE models have the ability to simulate the related IIST experiments. Compared with the IIST SBLOCA experiment data, the 3-D vessel component model has shown better simulation capabilities, so it has been chosen for all further thermal hydraulic studies. The second step consists of sensitivity studies of the two-phase multiplier and subcooled liquid multiplier in the choked flow model, and of two correlation constants in the CCFL model. As a result, an appropriate set of multipliers and constants can be determined. In summary, a verified IIST TRACE model with a 3-D vessel component, and fine-tuned choked flow and CCFL models, is established for further studies on IIST experiments in the future. (authors)

  15. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Wan, Hui [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Zhang, Kai [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA

    2016-07-10

    Efficient simulation strategies are crucial for the development and evaluation of high resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on three factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and cloud, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise, and reasonably reproduces the response of precipitation and cloud forcing to parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.

  16. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

    The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
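    The hierarchical function expansion described here can be illustrated with a first-order cut-HDMR surrogate, in which 1-D component functions are tabulated around a reference "cut" point. The toy model and grids below are assumptions for illustration, not MRC's FEOM code, and higher-order (correlated) terms are omitted.

```python
def cut_hdmr_first_order(model, cut, grids):
    """Tabulate f0 and the first-order component functions f_i on given grids."""
    f0 = model(cut)
    comps = []
    for i, grid in enumerate(grids):
        fi = {}
        for v in grid:
            x = list(cut)
            x[i] = v
            fi[v] = model(x) - f0      # f_i(x_i) = f(x_i, cut elsewhere) - f0
        comps.append(fi)
    return f0, comps

def surrogate(f0, comps, x):
    """First-order surrogate, evaluable at the tabulated grid points."""
    return f0 + sum(comps[i][xi] for i, xi in enumerate(x))

# Additively separable toy model: the first-order expansion is exact, so the
# surrogate can "directly replace the original model" at the grid points.
f = lambda x: 2 * x[0] + 3 * x[1] ** 2
f0, comps = cut_hdmr_first_order(f, [0.0, 0.0], [[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]])
```

    For models with interactions, second-order terms f_ij would be tabulated the same way, and interpolation between grid points would make the surrogate continuous.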

  17. Regional climate simulations over South America: sensitivity to model physics and to the treatment of lateral boundary conditions using the MM5 model

    Energy Technology Data Exchange (ETDEWEB)

    Solman, Silvina A. [CONICET-UBA, Centro de Investigaciones del Mar y la Atmosfera (CIMA), Buenos Aires (Argentina); Universidad de Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos. Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina); Pessacg, Natalia L. [CONICET-UBA, Centro de Investigaciones del Mar y la Atmosfera (CIMA), Buenos Aires (Argentina)

    2012-01-15

    In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over La Plata Basin (LPB) and the South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation in the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed a better performance compared with those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment was found to perform the best over the entire domain and during the two contrasting months. Which experiment outperforms the others depends on the area of interest, with the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme being more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed a better performance compared with any individual experiment. (orig.)

  18. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC) which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on the Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods are illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
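    A histogram ("plug-in") mutual information estimate illustrates why MI detects the non-monotonic dependence that rank correlation can miss. The runoff surrogate below is hypothetical, not a SWMM response.

```python
import math
import random

def mutual_info(xs, ys, bins=10):
    """Plug-in mutual information estimate (nats) from a 2-D histogram."""
    n = len(xs)
    def to_bins(vs):
        lo, hi = min(vs), max(vs)
        return [min(bins - 1, int((v - lo) / (hi - lo) * bins)) for v in vs]
    bx, by = to_bins(xs), to_bins(ys)
    pxy = {}
    for cell in zip(bx, by):
        pxy[cell] = pxy.get(cell, 0.0) + 1.0 / n
    px = [sum(p for (a, _), p in pxy.items() if a == i) for i in range(bins)]
    py = [sum(p for (_, b), p in pxy.items() if b == j) for j in range(bins)]
    return sum(p * math.log(p / (px[a] * py[b])) for (a, b), p in pxy.items())

# Hypothetical runoff surrogate: non-monotonic in p1, independent of p2.
# This is the case where a rank correlation underrates p1 but MI does not.
rng = random.Random(0)
p1 = [rng.random() for _ in range(2000)]
p2 = [rng.random() for _ in range(2000)]
runoff = [math.sin(3 * a) for a in p1]
mi1 = mutual_info(p1, runoff)
mi2 = mutual_info(p2, runoff)
```

    The plug-in estimator carries a small positive bias for independent variables (roughly (bins-1)^2 / 2n nats), which is why screening with MI needs a threshold rather than an exact-zero test.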

  19. Sensitivity analysis of an individual-based model for simulation of influenza epidemics.

    Directory of Open Access Journals (Sweden)

    Elaine O Nsoesie

    Full Text Available Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic. In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions) with demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across social networks investigated in this study; and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty

  20. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine
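    Logistic regression, one of the two learners named in this record, can be sketched from scratch. The training data below are synthetic stand-ins for assay readouts (DPRA-like, h-CLAT-like, read-across-like), NOT the ICCVAM substance set, and the labeling rule is an assumption for illustration.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Per-sample gradient descent for a logistic regression classifier."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            g = 1.0 / (1.0 + math.exp(-z)) - yi        # dLoss/dz
            w = [wj - lr * g * xj / len(X) for wj, xj in zip(w, xi)]
            b -= lr * g / len(X)
    return w, b

def predict(w, b, xi):
    """Hazard call: 1 (sensitizer) when the linear score is positive."""
    return 1 if b + sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0

# Synthetic three-assay data: a "sensitizer" when the combined signal is high.
rng = random.Random(42)
X, y = [], []
for _ in range(120):
    xi = [rng.random(), rng.random(), rng.random()]
    X.append(xi)
    y.append(1 if sum(xi) > 1.5 else 0)
w, b = train_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

    The ICCVAM evaluation additionally held out an external test set; resubstitution accuracy as computed here is optimistic and is shown only to verify the fit.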

  1. Methane emissions from rice paddies. Experiments and modelling

    International Nuclear Information System (INIS)

    Van Bodegom, P.M.

    2000-01-01

    This thesis describes model development and experimentation on the comprehension and prediction of methane (CH4) emissions from rice paddies. The large spatial and temporal variability in CH4 emissions and the dynamic non-linear relationships between processes underlying CH4 emissions impair the applicability of empirical relations. Mechanistic concepts are therefore the starting point of analysis throughout the thesis. The process of CH4 production was investigated by soil slurry incubation experiments at different temperatures and with additions of different electron donors and acceptors. Temperature influenced conversion rates and the competitiveness of microorganisms. The experiments were used to calibrate and validate a mechanistic model on CH4 production that describes competition for acetate and H2/CO2, inhibition effects and chemolithotrophic reactions. The redox sequence leading eventually to CH4 production was well predicted by the model, calibrating only the maximum conversion rates. Gas transport through paddy soil and rice plants was quantified by experiments in which the transport of SF6 was monitored continuously by photoacoustics. A mechanistic model on gas transport in a flooded rice system based on diffusion equations was validated by these experiments and could explain why most gases are released via plant mediated transport. Variability in root distribution led to highly variable gas transport. Experiments showed that CH4 oxidation in the rice rhizosphere was oxygen (O2) limited. Rice rhizospheric O2 consumption was dominated by chemical iron oxidation, and heterotrophic and methanotrophic respiration. The most abundant methanotrophs and heterotrophs were isolated and kinetically characterised. Based upon these experiments it was hypothesised that CH4 oxidation mainly occurred at microaerophilic, low acetate conditions not very close to the root surface. 
A mechanistic rhizosphere model that combined production and consumption of O2, carbon and iron

  2. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, T. (Yale Univ., New Haven, CT (United States))

    1994-06-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.

  3. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

As part of an analysis of the loss-of-crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the risk of debris from an explosion of the launch vehicle striking the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
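A point estimate of a strike probability of this kind is typically obtained by Monte Carlo. The sketch below is a toy stand-in, not the NASA debris model: the Rayleigh scatter assumption, the dispersion value and the crew-module radius are all invented, and the analytic Rayleigh CDF provides a check on the estimate.

```python
import numpy as np

# Toy Monte Carlo point estimate of a debris strike probability.
# All numbers are illustrative assumptions, not flight data.
rng = np.random.default_rng(7)
n = 500_000
sigma = 60.0   # dispersion of debris miss distance (m), assumed
r_cm = 25.0    # effective crew-module radius (m), assumed

# 2-D isotropic Gaussian scatter -> radial miss distance is Rayleigh(sigma)
miss = sigma * np.hypot(rng.standard_normal(n), rng.standard_normal(n))
p_mc = (miss < r_cm).mean()

# Analytic Rayleigh CDF as a sanity check on the Monte Carlo estimate
p_exact = 1.0 - np.exp(-r_cm**2 / (2.0 * sigma**2))
print(round(p_mc, 4))
```

In a study like the one above, such point estimates would be computed over a grid of abort times and delay times, and a response surface fitted through them.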

  4. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for either step in environmental modelling. The objective of the present study is to support modellers in making appropriate choices regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (the Elementary Effect Test, or method of Morris; Regional Sensitivity Analysis; and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
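The screening question posed in this record (is a parameter's sensitivity index distinguishable from zero?) is commonly probed with the method of Morris. Below is a minimal numpy sketch of the Elementary Effect Test on a toy linear function; the hydrological models of the study (Hymod, HBV, SWAT) are not reproduced, and the radial one-at-a-time sampling here is a simplification of full Morris trajectories.

```python
import numpy as np

def elementary_effects(model, bounds, r=50, delta=0.5, seed=0):
    """Estimate Morris statistics (mu*, sigma) for each input factor.

    bounds: (k, 2) array of [low, high] per factor.
    r: number of random base points; delta: one-at-a-time step in unit space.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    k = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    ee = np.empty((r, k))
    for i in range(r):
        base = rng.uniform(0.0, 1.0 - delta, size=k)  # unit-cube base point
        y0 = model(lo + (hi - lo) * base)
        for j in range(k):
            pert = base.copy()
            pert[j] += delta                          # perturb factor j only
            ee[i, j] = (model(lo + (hi - lo) * pert) - y0) / delta
    # mu* (mean absolute effect) ranks factors; sigma flags interactions
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: x3 is insensitive by construction, so its index should be ~0
toy = lambda x: 4.0 * x[0] + 2.0 * x[1] + 0.0 * x[2]
mu_star, sigma = elementary_effects(toy, [[0.0, 1.0]] * 3)
print(mu_star)  # → [4. 2. 0.]
```

Bootstrapping `ee` rows (resampling with replacement and recomputing mu*) is one way to build the convergence criteria for ranking and screening that the study describes.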

  5. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    International Nuclear Information System (INIS)

    Cassardo, C.; Carena, E.; Longhetto, A.

    1998-01-01

The Land Surface Process Model (LSPM) has been improved with respect to its first version of 1994. The modifications have involved the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that the new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments were the initial soil moisture content and the rainfall intensity. In each experiment, the model was forced with the observations carried out at the synoptic station of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) were used as initial and boundary conditions. The results of the simulations show that LSPM can reproduce well the energy, heat and water budgets and their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type.

  6. Superconducting gravity gradiometer for sensitive gravity measurements. II. Experiment

    International Nuclear Information System (INIS)

    Chan, H.A.; Moody, M.V.; Paik, H.J.

    1987-01-01

A sensitive superconducting gravity gradiometer has been constructed and tested. Coupling to gravity signals is obtained by having two superconducting proof masses modulate magnetic fields produced by persistent currents. The induced electrical currents are differenced by a passive superconducting circuit coupled to a superconducting quantum interference device. The experimental behavior of this device has been shown to follow the theoretical model closely in both signal transfer and noise characteristics. While its intrinsic noise level is shown to be 0.07 E Hz^-1/2 (1 E = 10^-9 s^-2), the actual performance of the gravity gradiometer on a passive platform has been limited to 0.3-0.7 E Hz^-1/2 due to its coupling to the environmental noise. The detailed structure of this excess noise is understood in terms of an analytical error model of the instrument. The calibration of the gradiometer has been obtained by two independent methods: by applying a linear acceleration and a gravity signal in two different operational modes of the instrument. This device has been successfully operated as a detector in a new null experiment for the gravitational inverse-square law. In this paper we report the design, fabrication, and detailed test results of the superconducting gravity gradiometer. We also present additional theoretical analyses which predict the specific dynamic behavior of the gradiometer and of the test

  7. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. A new direct method for calculating the first-order sensitivity coefficients, which applies sparse-matrix technology to chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity-coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity-coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity-coefficient equations, and their Jacobian matrix is presented. The various FORTRAN packages with which the program runs in conjunction, such as SLODE, the modified MA28, and the Gear package, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
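The direct method described here (integrating the model equation jointly with its auxiliary sensitivity-coefficient equations) can be sketched for a single first-order reaction. This is a minimal Python illustration under assumed rate values, not the FORTRAN/sparse-matrix package itself; the analytic solution provides a check.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order decay y' = -k*y. Differentiating the ODE w.r.t. the rate
# constant k gives the coupled auxiliary equation for s = dy/dk:
#   s' = -y - k*s,  s(0) = 0
k, y0 = 0.7, 1.0   # assumed rate constant and initial concentration

def rhs(t, state):
    y, s = state
    return [-k * y, -y - k * s]

sol = solve_ivp(rhs, (0.0, 5.0), [y0, 0.0],
                rtol=1e-10, atol=1e-12, t_eval=[5.0])
y_num, s_num = sol.y[:, -1]

# Analytic check: y = y0*exp(-k t)  =>  dy/dk = -t*y0*exp(-k t)
print(y_num, s_num)
```

For a full mechanism, the state vector holds all species plus one sensitivity column per (species, parameter) pair, and the auxiliary equations share the model Jacobian, which is what makes the sparse triangularization mentioned in the abstract pay off.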

  8. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k_eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k_eff discrepancies between libraries were decomposed to identify the sources of the differences in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to assess the adequacy of cold critical experiments for the boiling water reactor under hot operating conditions. (authors)

  9. A ‘just-in-time’ HN(CA)CO experiment for the backbone assignment of large proteins with high sensitivity

    Science.gov (United States)

    Werner-Allen, Jon W.; Jiang, Ling; Zhou, Pei

    2006-07-01

Among the suite of commonly used backbone experiments, HN(CA)CO presents an unresolved sensitivity limitation due to fast 13CO transverse relaxation and passive 13Cα-13Cβ coupling. Here, we present a high-sensitivity 'just-in-time' (JIT) HN(CA)CO pulse sequence that uniformly refocuses 13Cα-13Cβ coupling while collecting 13CO shifts in real time. Sensitivity comparisons of the 3-D JIT HN(CA)CO, a CT-HMQC-based control, and an HSQC-based control with selective 13Cα inversion pulses were performed using a 2H/13C/15N-labeled sample of the 29 kDa HCA II protein at 15 °C. The JIT experiment shows a 42% signal enhancement over the CT-HMQC-based experiment. Compared to the HSQC-based experiment, the JIT experiment is 16% less sensitive for residues experiencing proper 13Cα refocusing and 13Cα-13Cβ decoupling. However, for the remaining residues, the JIT spectrum shows a 106% average sensitivity gain over the HSQC-based experiment. The high-sensitivity JIT HN(CA)CO experiment should be particularly beneficial for studies of large proteins to provide 13CO resonance information regardless of residue type.

  10. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    The problem of derivation and calculation of sensitivity functions for all parameters of the mass balance reduced model of the COST benchmark activated sludge plant is formulated and solved. The sensitivity functions, equations and augmented sensitivity state space models are derived for the cases of ASM1 and UCT ...

  11. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

Full Text Available Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently from ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on input factors such as topography or ground type. The analysis shows that model evaluation performed at single locations may not be representative of the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity to the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time has an impact on modeled mean annual ground temperature (MAGT that cannot be neglected (more than 1 °C for several

  12. Modelling Nd-isotopes with a coarse resolution ocean circulation model: Sensitivities to model parameters and source/sink distributions

    International Nuclear Information System (INIS)

    Rempfer, Johannes; Stocker, Thomas F.; Joos, Fortunat; Dutay, Jean-Claude; Siddall, Mark

    2011-01-01

The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleo-oceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes 143Nd and 144Nd using the Bern3D model, a low-resolution ocean model. The high computational efficiency of the Bern3D model in conjunction with our comprehensive approach allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations and εNd in good agreement with observations. εNd co-varies with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain magnitude, it affects εNd only to a small extent. On the other hand, the parametrisation of the reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and their isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux.

  13. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, obtained for example by regression techniques, for use in extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  14. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

This paper presents an approach to performing the sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU (best estimate plus uncertainty) approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a meta-model. It also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
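Sobol' index estimation of the kind described can be sketched generically. The snippet below uses a plain pick-freeze Monte Carlo estimator on the Ishigami test function, a standard benchmark with known first-order indices; the meta-model, TRACE and BEMUSE specifics of the study are not reproduced.

```python
import numpy as np

def sobol_first_order(model, k, n=100_000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for inputs uniform on [-pi, pi]^k (Saltelli-style sampling)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, k))
    B = rng.uniform(-np.pi, np.pi, (n, k))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # re-sample only input i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Ishigami function: analytic first-order indices ~0.314, 0.442, 0
def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

S = sobol_first_order(ishigami, 3)
print(np.round(S, 2))
```

When each model run is expensive (as with a thermal-hydraulic code), the same estimator is applied to a cheap meta-model fitted to a limited set of code runs, which is exactly the role of the meta-model in the abstract.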

  15. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in the perturbation. Variations in forcing-response sensitivities are evident between types (lake water level, groundwater level, or spring flow), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
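The perturb-and-difference procedure described here can be sketched with a stand-in model. The moving-window-average "model" below is purely illustrative (the trained MWA-ANN models and the Florida datasets are not reproduced); it only shows the mechanics of computing a sensitivity time series as change in response per unit change in forcing.

```python
import numpy as np

def mwa_model(rain, window=30, gain=0.8):
    """Stand-in black-box model: response follows a 30-day moving-window
    average of rainfall. The real study used trained MWA-ANN models."""
    kernel = np.ones(window) / window
    return gain * np.convolve(rain, kernel, mode="same")

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 2.0, size=365)   # one year of synthetic daily rainfall

# Finite-difference sensitivity time series: perturb the forcing uniformly
# and divide the response change by the perturbation size.
eps = 0.1  # perturbation (same units as rainfall)
sens = (mwa_model(rain + eps) - mwa_model(rain)) / eps
print(round(sens[180], 6))  # → 0.8
```

For this linear stand-in the interior sensitivity equals the model gain, but for a trained ANN the same procedure yields sensitivities that vary over time, which is what reveals the drought-period behavior noted in the abstract.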

  16. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on the sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise-constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise-constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs

  17. Comparison between the Findings from the TROI Experiments and the Sensitivity Studies by Using the TEXAS-V Code

    International Nuclear Information System (INIS)

    Park, I. K.; Kim, J. H.; Hong, S. W.; Min, B. T.; Hong, S. H.; Song, J. H.; Kim, H. D.

    2006-01-01

Since a steam explosion may breach the integrity of the reactor vessel and containment, it is one of the most important severe accident issues, and a great deal of experimental and analytical research on steam explosions has been performed. Although many findings have been obtained from this research, unresolved issues remain, such as the explosivity of the real core material (corium) and the conversion ratio from thermal energy to mechanical energy. The TROI experiments were carried out to provide experimental data on these issues. They were performed with prototypic materials: ZrO2 melt and a mixture of ZrO2 and UO2 melt (corium). Several steam explosion codes, including TEXAS-V, had been developed by considering the findings of past steam explosion experiments. However, some unique findings on steam explosions have been obtained from the series of TROI experiments, and these findings should be considered in applications to reactor safety analysis using a computational code. In this paper, several findings from the TROI experiments are discussed, and sensitivity studies on the TROI experimental parameters were conducted using the TEXAS-V code and the TROI-13 test. The comparison between the TROI experimental findings and the results of the sensitivity studies might allow us to know which parameters are important and which models are uncertain for steam explosions.

  18. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  19. HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment

    Science.gov (United States)

    Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham

    2013-01-01

    Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.

  20. Active drumming experience increases infants' sensitivity to audiovisual synchrony during observed drumming actions

    NARCIS (Netherlands)

    Gerson, S.A.; Schiavio, A.A.R.; Timmers, R.; Hunnius, S.

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this

  1. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
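One of the methods surveyed, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), can be sketched in a few lines. The toy response function below is an invented monotonic stand-in, not the cholera or schistosomiasis transmission model.

```python
import numpy as np
from scipy.stats import rankdata

def lhs(n, k, rng):
    """Latin hypercube sample on [0,1]^k: one point per stratum per column."""
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
    for j in range(k):
        rng.shuffle(u[:, j])   # decorrelate the columns
    return u

def prcc(X, y):
    """Partial rank correlation of each column of X with output y."""
    R = np.column_stack([rankdata(c) for c in X.T])  # rank-transform inputs
    ry = rankdata(y)                                 # rank-transform output
    out = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        Z = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        # residuals after removing the (linear, rank-space) effect of the others
        rx_res = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        ry_res = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[i] = np.corrcoef(rx_res, ry_res)[0, 1]
    return out

rng = np.random.default_rng(0)
X = lhs(2000, 3, rng)
# Toy monotonic response: x3 is deliberately unused
y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(2000)
vals = prcc(X, y)
print(np.round(vals, 2))
```

The PRCC for the unused third input stays near zero, while the first two inputs receive strong positive and negative coefficients respectively, mirroring how the technique flags influential parameters in a monotonic model.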

  2. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
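For contrast with the Bayesian setups discussed in this record, a minimal frequentist un-mixing can be sketched as constrained least squares. All tracer values below are invented for illustration (they are not the River Blackwater geochemistry); the sum-to-one constraint is imposed via a heavily weighted extra equation and non-negativity via NNLS.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical tracer signatures (rows: tracers, cols: sources); values invented.
sources = np.array([[12.0,  3.5, 30.0],
                    [ 0.8,  2.1,  0.2],
                    [45.0, 10.0,  5.0],
                    [ 1.2,  0.3,  2.4]])
true_p = np.array([0.2, 0.3, 0.5])
mixture = sources @ true_p           # synthetic "sediment sample" to un-mix

# Append a heavily weighted row of ones so that sum(p) = 1 is (nearly)
# enforced; non-negative least squares keeps all proportions >= 0.
w = 1e3
A = np.vstack([sources, w * np.ones((1, 3))])
b = np.append(mixture, w)
p_hat, _ = nnls(A, b)
print(np.round(p_hat, 3))  # → [0.2 0.3 0.5]
```

A Bayesian version would instead place a Dirichlet prior on the proportions and an explicit error model on the tracers, which is precisely where the structural choices probed by the OFAT analysis enter.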

  3. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    Science.gov (United States)

    Harshan, Suraj

The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and the nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and the net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction

  4. A shorter and more specific oral sensitization-based experimental model of food allergy in mice.

    Science.gov (United States)

    Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica

    2012-07-31

    Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks, followed by a challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score as the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  6. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
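
    The "sloppy spectrum" described above can be reproduced on a toy problem: the eigenvalues of the least-squares Hessian approximation J^T J for a sum of near-degenerate exponentials spread over several decades. The model and rate constants below are assumed for illustration; this is not one of the paper's systems-biology models.

```python
import numpy as np

# Toy sloppiness check: eigenvalue spectrum of J^T J for
# y(t) = sum_i exp(-theta_i * t) at assumed rates theta.
t = np.linspace(0, 5, 50)
theta = np.array([1.0, 0.9, 0.5, 0.3])

def jacobian(theta):
    # d y / d theta_i = -t * exp(-theta_i * t), one column per parameter
    return np.stack([-t * np.exp(-th * t) for th in theta], axis=1)

J = jacobian(theta)
eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]   # descending order
decades = np.log10(eigvals[0] / eigvals[-1])  # spread of the spectrum
print(eigvals, decades)
```

    The near-parallel Jacobian columns for the two closest rates (1.0 and 0.9) produce a tiny smallest eigenvalue, i.e. a parameter combination the data barely constrain, which is exactly the sloppy-mode picture.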

  7. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  8. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  9. Unpacking buyer-seller differences in valuation from experience: A cognitive modeling approach.

    Science.gov (United States)

    Pachur, Thorsten; Scheibehenne, Benjamin

    2017-12-01

    People often indicate a higher price for an object when they own it (i.e., as sellers) than when they do not (i.e., as buyers)-a phenomenon known as the endowment effect. We develop a cognitive modeling approach to formalize, disentangle, and compare alternative psychological accounts (e.g., loss aversion, loss attention, strategic misrepresentation) of such buyer-seller differences in pricing decisions of monetary lotteries. To also be able to test possible buyer-seller differences in memory and learning, we study pricing decisions from experience, obtained with the sampling paradigm, where people learn about a lottery's payoff distribution from sequential sampling. We first formalize different accounts as models within three computational frameworks (reinforcement learning, instance-based learning theory, and cumulative prospect theory), and then fit the models to empirical selling and buying prices. In Study 1 (a reanalysis of published data with hypothetical decisions), models assuming buyer-seller differences in response bias (implementing a strategic-misrepresentation account) performed best; models assuming buyer-seller differences in choice sensitivity or memory (implementing a loss-attention account) generally fared worst. In a new experiment involving incentivized decisions (Study 2), models assuming buyer-seller differences in both outcome sensitivity (as proposed by a loss-aversion account) and response bias performed best. In both Study 1 and 2, the models implemented in cumulative prospect theory performed best. Model recovery studies validated our cognitive modeling approach, showing that the models can be distinguished rather well. In summary, our analysis supports a loss-aversion account of the endowment effect, but also reveals a substantial contribution of simple response bias.

  10. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2018-01-01

    to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...

  11. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324

  12. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  13. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
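
    The hybrid local/global idea behind DELSA can be illustrated with an assumed simplified form: at each sampled point in parameter space, first-order variance contributions are computed from local derivatives and prior parameter variances, and the resulting sensitivity shares are compared across points. The toy model, sample points, and prior variances below are hypothetical.

```python
import numpy as np

def delsa_shares(f, points, sigma2, h=1e-6):
    """DELSA-style local first-order variance decomposition (assumed form).

    At each point, the share of output variance attributed to parameter j
    is (df/dx_j)^2 * sigma2_j, normalized over all parameters.
    """
    shares = []
    for x in points:
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])  # central differences
        v = g ** 2 * sigma2                       # first-order contributions
        shares.append(v / v.sum())
    return np.array(shares)

f = lambda x: x[0] ** 2 + 0.1 * x[1]         # toy 2-parameter model
pts = np.array([[0.1, 0.5], [2.0, 0.5]])     # two locations in parameter space
S = delsa_shares(f, pts, sigma2=np.array([1.0, 1.0]))
print(S)  # x0's share grows with x0 because the model is nonlinear in x0
```

    Because the shares are recomputed at each point, the method reveals how sensitivity varies across parameter space, at a cost of only 2k model runs per point.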

  14. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  15. The PMIP4 Contribution to CMIP6-Part 4: Scientific Objectives and Experimental Design of the PMIP4-CMIP6 Last Glacial Maximum Experiments and PMIP4 Sensitivity Experiments

    Science.gov (United States)

    Kageyama, Masa; Albani, Samuel; Braconnot, Pascale; Harrison, Sandy P.; Hopcroft, Peter O.; Ivanovic, Ruza F.; Lambert, Fabrice; Marti, Olivier; Peltier, W. Richard; Peterschmitt, Jean-Yves; et al.

    2017-01-01

    The Last Glacial Maximum (LGM, 21,000 years ago) is one of the suite of paleoclimate simulations included in the current phase of the Coupled Model Intercomparison Project (CMIP6). It is an interval when insolation was similar to the present, but global ice volume was at a maximum, eustatic sea level was at or close to a minimum, greenhouse gas concentrations were lower, atmospheric aerosol loadings were higher than today, and vegetation and land-surface characteristics were different from today. The LGM has been a focus for the Paleoclimate Modelling Intercomparison Project (PMIP) since its inception, and thus many of the problems that might be associated with simulating such a radically different climate are well documented. The LGM state provides an ideal case study for evaluating climate model performance because the changes in forcing and temperature between the LGM and pre-industrial are of the same order of magnitude as those projected for the end of the 21st century. Thus, the CMIP6 LGM experiment could provide additional information that can be used to constrain estimates of climate sensitivity. The design of the Tier 1 LGM experiment (lgm) includes an assessment of uncertainties in boundary conditions, in particular through the use of different reconstructions of the ice sheets and of the change in dust forcing. Additional (Tier 2) sensitivity experiments have been designed to quantify feedbacks associated with land-surface changes and aerosol loadings, and to isolate the role of individual forcings. Model analysis and evaluation will capitalize on the relative abundance of paleoenvironmental observations and quantitative climate reconstructions already available for the LGM.

  16. The PMIP4 contribution to CMIP6 - Part 4: Scientific objectives and experimental design of the PMIP4-CMIP6 Last Glacial Maximum experiments and PMIP4 sensitivity experiments

    Science.gov (United States)

    Kageyama, Masa; Albani, Samuel; Braconnot, Pascale; Harrison, Sandy P.; Hopcroft, Peter O.; Ivanovic, Ruza F.; Lambert, Fabrice; Marti, Olivier; Peltier, W. Richard; Peterschmitt, Jean-Yves; Roche, Didier M.; Tarasov, Lev; Zhang, Xu; Brady, Esther C.; Haywood, Alan M.; LeGrande, Allegra N.; Lunt, Daniel J.; Mahowald, Natalie M.; Mikolajewicz, Uwe; Nisancioglu, Kerim H.; Otto-Bliesner, Bette L.; Renssen, Hans; Tomas, Robert A.; Zhang, Qiong; Abe-Ouchi, Ayako; Bartlein, Patrick J.; Cao, Jian; Li, Qiang; Lohmann, Gerrit; Ohgaito, Rumi; Shi, Xiaoxu; Volodin, Evgeny; Yoshida, Kohei; Zhang, Xiao; Zheng, Weipeng

    2017-11-01

    The Last Glacial Maximum (LGM, 21 000 years ago) is one of the suite of paleoclimate simulations included in the current phase of the Coupled Model Intercomparison Project (CMIP6). It is an interval when insolation was similar to the present, but global ice volume was at a maximum, eustatic sea level was at or close to a minimum, greenhouse gas concentrations were lower, atmospheric aerosol loadings were higher than today, and vegetation and land-surface characteristics were different from today. The LGM has been a focus for the Paleoclimate Modelling Intercomparison Project (PMIP) since its inception, and thus many of the problems that might be associated with simulating such a radically different climate are well documented. The LGM state provides an ideal case study for evaluating climate model performance because the changes in forcing and temperature between the LGM and pre-industrial are of the same order of magnitude as those projected for the end of the 21st century. Thus, the CMIP6 LGM experiment could provide additional information that can be used to constrain estimates of climate sensitivity. The design of the Tier 1 LGM experiment (lgm) includes an assessment of uncertainties in boundary conditions, in particular through the use of different reconstructions of the ice sheets and of the change in dust forcing. Additional (Tier 2) sensitivity experiments have been designed to quantify feedbacks associated with land-surface changes and aerosol loadings, and to isolate the role of individual forcings. Model analysis and evaluation will capitalize on the relative abundance of paleoenvironmental observations and quantitative climate reconstructions already available for the LGM.

  17. Design of laser-generated shockwave experiments. An approach using analytic models

    International Nuclear Information System (INIS)

    Lee, Y.T.; Trainor, R.J.

    1980-01-01

    Two of the target-physics phenomena which must be understood before a clean experiment can be confidently performed are preheating due to suprathermal electrons and shock decay due to a shock-rarefaction interaction. Simple analytic models are described for these two processes and the predictions of these models are compared with those of the LASNEX fluid physics code. We have approached this work not with the view of surpassing or even approaching the reliability of the code calculations, but rather with the aim of providing simple models which may be used for quick parameter-sensitivity evaluations, while providing physical insight into the problems

  18. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper-tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased relative to the background methane chemistry by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) over the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and points towards specific processes in need of focused future work.

  19. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    Science.gov (United States)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modeling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems.

  20. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

    The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people, and the global economy has become more complex than ever before. Mark Buchanan [12] indicated that agent-based computer models may help prevent another financial crisis, and this view has been particularly influential. There are two reasons why a culture-sensitive agent in the financial market has become so important. The aim of this article is therefore to establish a culture-sensitive agent and forecast the process of change regarding herding behavior in the financial market. We based our study on Kirman's Ant Model [4,5] and Hofstede's National Culture [11] to establish our culture-sensitive agent-based model. Kirman's Ant Model is well known and describes financial-market herding behavior arising from investors' expectations about the future. Hofstede's Cultural Consequences study surveyed IBM staff in 72 different countries to understand cultural differences. This paper therefore focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, to create a culture-sensitive agent and predict the process of change regarding herding behavior in the financial market. To conclude, this study will be of importance in explaining herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how people's herding beliefs across different cultures relate to their financial-market strategies.
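
    Kirman's model itself is compact enough to sketch: at each step one agent may switch opinion, either through spontaneous conversion (eps) or through recruitment by a randomly met agent (delta). The parameter values below are assumed for illustration, not taken from the paper.

```python
import random

def simulate(N=100, eps=0.001, delta=0.3, steps=500_000, seed=1):
    """Minimal Kirman ant/herding chain: k agents of type A out of N."""
    rng = random.Random(seed)
    k = N // 2
    counts = []
    for _ in range(steps):
        # switch probabilities: spontaneous conversion + recruitment
        p_up = (1 - k / N) * (eps + delta * k / (N - 1))
        p_down = (k / N) * (eps + delta * (N - k) / (N - 1))
        u = rng.random()
        if u < p_up:
            k += 1
        elif u < p_up + p_down:
            k -= 1
        counts.append(k)
    return counts

path = simulate()
share = [k / 100 for k in path]
# With small eps the population herds: the share of type-A agents
# repeatedly visits and lingers near 0 and 1 rather than staying at 0.5.
extreme = sum(1 for s in share if s < 0.2 or s > 0.8) / len(share)
print(extreme)
```

    A culture-sensitive variant in the spirit of the paper would make eps and delta functions of a cultural dimension such as individualism versus collectivism.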

  1. Enhancing collaborative intrusion detection networks against insider attacks using supervised intrusion sensitivity-based trust management model

    DEFF Research Database (Denmark)

    Li, Wenjuan; Meng, Weizhi; Kwok, Lam-For

    2017-01-01

    To defend against complex attacks, collaborative intrusion detection networks (CIDNs) have been developed to enhance the detection accuracy, enabling an IDS to collect information and learn experience from others. However, this kind of network is vulnerable to malicious nodes, which are utilized by insider attacks (e.g., betrayal attacks). In our previous research, we developed a notion of intrusion sensitivity and identified that it can help improve the detection of insider attacks, whereas it is still a challenge for these nodes to automatically assign the values. In this article, we ... of intrusion sensitivity based on expert knowledge. In the evaluation, we compare the performance of three different supervised classifiers in assigning sensitivity values and investigate our trust model under different attack scenarios and in a real wireless sensor network. Experimental results indicate ...

  2. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with the COMSOL software. Parameter sensitivity is analysed by changing parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that ambient temperature, effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity for a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
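
    The ±20% one-at-a-time perturbation style used in the study is easy to sketch. The toy drying-rate model below is hypothetical, not the paper's COMSOL multiphase model; it only illustrates the mechanics of the screening.

```python
import numpy as np

def oat_sensitivity(f, params, frac=0.20):
    """One-at-a-time +/-20% perturbation screening.

    Returns, per parameter, the relative output change over the
    [-20%, +20%] range of that parameter, all others held at base values.
    """
    base = f(params)
    out = {}
    for name in params:
        lo, hi = dict(params), dict(params)
        lo[name] *= 1 - frac
        hi[name] *= 1 + frac
        out[name] = (f(hi) - f(lo)) / base
    return out

# Hypothetical drying-rate model: weakly nonlinear in the evaporation
# constant K, weakly dependent on a transfer coefficient h.
drying_rate = lambda p: p["K"] ** 0.2 * (1 + 0.01 * p["h"])
sens = oat_sensitivity(drying_rate, {"K": 1e-3, "h": 10.0})
print(sens)
```

    The method is cheap (2 runs per parameter) but, as the abstract notes for the evaporation rate constant, a ±20% window can miss effects that only appear under order-of-magnitude changes.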

  3. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using the parameter estimation software PEST, (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
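
    Sobol's method, which the study found most robust, can be sketched with the pick-freeze first-order estimator (Jansen form) on the standard Ishigami test function rather than on SAC-SMA itself; sample size and seed are arbitrary choices.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard Ishigami test function on [-pi, pi]^3
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
n, k = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, k))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S1 = []
for j in range(k):
    ABj = A.copy()
    ABj[:, j] = B[:, j]                  # resample all inputs except x_j
    fABj = ishigami(ABj)
    # Jansen estimator: S_j = 1 - E[(fB - fABj)^2] / (2 Var)
    S1.append(1 - np.mean((fB - fABj) ** 2) / (2 * var))
print(S1)
```

    The analytic first-order indices for these settings are about 0.31, 0.44, and 0, so the Monte Carlo estimates can be checked directly; the (k+2)·n model runs are what makes Sobol's method costly for watershed models.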

  4. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.

  5. The use of graph theory in the sensitivity analysis of the model output: a second order screening method

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Braddock, Roger

    1999-01-01

    Sensitivity analysis screening methods aim to isolate the most important factors in experiments involving a large number of significant factors and interactions. This paper extends the one-factor-at-a-time screening method proposed by Morris. The new method, in addition to the 'overall' sensitivity measures already provided by the traditional Morris method, offers estimates of the two-factor interaction effects. The number of model evaluations required is O(k²), where k is the number of model input factors. The efficient sampling strategy in the parameter space is based on concepts of graph theory and on the solution of the 'handcuffed prisoner problem'.
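The baseline Morris method that this paper extends can be sketched as follows: elementary effects are sampled along random one-factor-at-a-time trajectories and summarized by μ* (mean absolute effect, the 'overall' importance) and σ (spread, which signals interactions or nonlinearity). The model and all numbers below are a toy illustration, not the paper's test cases.

```python
# Morris elementary-effects screening on a toy model with an interaction term.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # x[0] has a large main effect; x[0]*x[2] adds an interaction; x[1] is linear.
    return 10.0 * x[0] + x[1] + 5.0 * x[0] * x[2]

k, r, delta = 3, 200, 0.5            # factors, trajectories, step size
effects = [[] for _ in range(k)]

for _ in range(r):
    x = rng.uniform(0.0, 0.5, size=k)    # start point; +delta stays in [0, 1]
    y = model(x)
    for i in rng.permutation(k):          # one-factor-at-a-time steps
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta)   # elementary effect of factor i
        x, y = x_new, y_new

mu_star = [float(np.mean(np.abs(e))) for e in effects]
sigma = [float(np.std(e)) for e in effects]
print("mu* :", [round(m, 2) for m in mu_star])
print("sigma:", [round(s, 2) for s in sigma])
```

Note how σ is zero for the purely linear factor but nonzero for the two interacting factors; the classical method flags that an interaction exists but cannot say *which pair* is responsible, which is exactly the gap the second-order extension in this paper addresses.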

  6. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    1994-01-01

    The work done on this project focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → e ν_μ ν_e] ∼ 10⁻¹³, will be over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → e ν_μ ν_e to test the predictions of the V-A theory of weak interactions. In this experiment the uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the presently reported value. The detectors are operational, and data taking has begun.

  7. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
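The two-step procedure described here, Monte Carlo propagation of parameter uncertainty followed by partial correlation of each parameter with the output, can be sketched generically. PATHWAY itself is not available, so a toy linear model and made-up parameter names stand in for it; only the procedure mirrors the abstract.

```python
# Monte Carlo uncertainty propagation + partial correlation coefficients (PCC).
# The model below is an illustrative stand-in for the PATHWAY transport model.
import numpy as np

rng = np.random.default_rng(7)

def pcc(X, y, i):
    """Partial correlation of X[:, i] with y, controlling for the other columns."""
    others = np.delete(X, i, axis=1)
    Z = np.column_stack([np.ones(len(y)), others])
    # Correlate the residuals after regressing out the other parameters.
    rx = X[:, i] - Z @ np.linalg.lstsq(Z, X[:, i], rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

n = 5000
X = rng.normal(size=(n, 3))                  # sampled parameter values
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

print("output uncertainty (std):", round(float(y.std()), 2))
for i in range(3):
    print(f"param {i}: PCC = {pcc(X, y, i):+.2f}")
```

As in the abstract, the same random sample serves both purposes: the spread of `y` quantifies prediction uncertainty, and the PCCs rank the parameters driving it.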

  8. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    Science.gov (United States)

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects are described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques for analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  9. The sensitivity of the ESA DELTA model

    Science.gov (United States)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  10. Sensitivity of a complex urban air quality model to input data

    International Nuclear Information System (INIS)

    Seigneur, C.; Tesche, T.W.; Roth, P.M.; Reid, L.E.

    1981-01-01

    In recent years, urban-scale photochemical simulation models have been developed that are of practical value for predicting air quality and analyzing the impacts of alternative emission control strategies. Although the performance of some urban-scale models appears to be acceptable, the demanding data requirements of such models have prompted concern about the costs of data acquisition, which might be high enough to preclude use of photochemical models for many urban areas. To explore this issue, sensitivity studies with the Systems Applications, Inc. (SAI) Airshed Model, a grid-based time-dependent photochemical dispersion model, have been carried out for the Los Angeles basin. Reductions in the amount and quality of meteorological, air quality and emission data, as well as modifications of the model grid structure, have been analyzed. This paper presents and interprets the results of 22 sensitivity studies. A sensitivity-uncertainty index is defined to rank input data needs for an urban photochemical model. The index takes into account the sensitivity of model predictions to the amount of input data, the costs of data acquisition, and the uncertainties in the air quality model input variables. The results of these sensitivity studies are considered in light of the limitations of specific attributes of the Los Angeles basin and of the modeling conditions (e.g., choice of wind model, length of simulation time). The extent to which the results may be applied to other urban areas is also discussed.

  11. The PIENU experiment at TRIUMF : a sensitive probe for new physics

    International Nuclear Information System (INIS)

    Malbrunot, Chloe; Bryman, D A; Hurst, C; Aguilar-Arevalo, A A; Aoki, M; Ito, N; Kuno, Y; Blecher, M; Britton, D I; Chen, S; Ding, M; Comfort, J; Doornbos, J; Doria, L; Gumplinger, P; Kurchaninov, L; Hussein, A; Igarashi, Y; Kettell, S; Littenberg, L

    2011-01-01

    Study of rare decays is an important approach for exploring physics beyond the Standard Model (SM). The branching ratio of the helicity-suppressed pion decays, R = Γ(π⁺ → e⁺ν_e + π⁺ → e⁺ν_eγ)/Γ(π⁺ → μ⁺ν_μ + π⁺ → μ⁺ν_μγ), is one of the most accurately calculated decay processes involving hadrons and has so far provided the most stringent test of the hypothesis of electron-muon universality in weak interactions. The branching ratio has been calculated in the SM to better than 0.01% accuracy to be R_SM = 1.2353(1) × 10⁻⁴. The PIENU experiment at TRIUMF, which started taking physics data in September 2009, aims to reach an accuracy five times better than the previous experiments, so as to confront the theoretical calculation at the level of ±0.1%. If a deviation from R_SM is found, 'new physics' beyond the SM, at potentially very high mass scales (up to 1000 TeV), could be revealed. Alternatively, sensitive constraints can be obtained on hypotheses involving pseudoscalar or scalar interactions. So far, 4 million π⁺ → e⁺ν_e events have been accumulated by PIENU. This paper will outline the physics motivations, describe the apparatus and techniques designed to achieve high precision, and present the latest results.

  12. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.

  13. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that, in addition to vapour pressure, the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes of degradation and uptake into the plant become more important. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    Energy Technology Data Exchange (ETDEWEB)

    Pastore, Giovanni, E-mail: Giovanni.Pastore@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Swiler, L.P., E-mail: LPSwile@sandia.gov [Optimization and Uncertainty Quantification, Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185-1318 (United States); Hales, J.D., E-mail: Jason.Hales@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Novascone, S.R., E-mail: Stephen.Novascone@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Perez, D.M., E-mail: Danielle.Perez@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Spencer, B.W., E-mail: Benjamin.Spencer@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Luzzi, L., E-mail: Lelio.Luzzi@polimi.it [Politecnico di Milano, Department of Energy, Nuclear Engineering Division, via La Masa 34, I-20156 Milano (Italy); Van Uffelen, P., E-mail: Paul.Van-Uffelen@ec.europa.eu [European Commission, Joint Research Centre, Institute for Transuranium Elements, Hermann-von-Helmholtz-Platz 1, D-76344 Karlsruhe (Germany); Williamson, R.L., E-mail: Richard.Williamson@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States)

    2015-01-15

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO{sub 2} single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.

  15. The PMIP4 contribution to CMIP6 – Part 4: Scientific objectives and experimental design of the PMIP4-CMIP6 Last Glacial Maximum experiments and PMIP4 sensitivity experiments

    Directory of Open Access Journals (Sweden)

    M. Kageyama

    2017-11-01

    Full Text Available The Last Glacial Maximum (LGM, 21 000 years ago is one of the suite of paleoclimate simulations included in the current phase of the Coupled Model Intercomparison Project (CMIP6. It is an interval when insolation was similar to the present, but global ice volume was at a maximum, eustatic sea level was at or close to a minimum, greenhouse gas concentrations were lower, atmospheric aerosol loadings were higher than today, and vegetation and land-surface characteristics were different from today. The LGM has been a focus for the Paleoclimate Modelling Intercomparison Project (PMIP since its inception, and thus many of the problems that might be associated with simulating such a radically different climate are well documented. The LGM state provides an ideal case study for evaluating climate model performance because the changes in forcing and temperature between the LGM and pre-industrial are of the same order of magnitude as those projected for the end of the 21st century. Thus, the CMIP6 LGM experiment could provide additional information that can be used to constrain estimates of climate sensitivity. The design of the Tier 1 LGM experiment (lgm includes an assessment of uncertainties in boundary conditions, in particular through the use of different reconstructions of the ice sheets and of the change in dust forcing. Additional (Tier 2 sensitivity experiments have been designed to quantify feedbacks associated with land-surface changes and aerosol loadings, and to isolate the role of individual forcings. Model analysis and evaluation will capitalize on the relative abundance of paleoenvironmental observations and quantitative climate reconstructions already available for the LGM.

  16. Numerical model analysis of the shaded dye-sensitized solar cell module

    International Nuclear Information System (INIS)

    Chen Shuanghong; Weng Jian; Huang Yang; Zhang Changneng; Hu Linhua; Kong Fantai; Wang Lijun; Dai Songyuan

    2010-01-01

    On the basis of a numerical model analysis, the photovoltaic performance of a partially shadowed dye-sensitized solar cell (DSC) module is investigated. In this model, the electron continuity equation and the Butler-Vollmer equation are applied considering electron transfer via the interface of transparent conducting oxide/electrolyte in the shaded DSC. The simulation results based on this model are consistent with experimental results. The influence of shading ratio, connection types and the intensity of irradiance has been analysed according to experiments and numerical simulation. It is found that the performance of the DSC obviously declines with an increase in the shaded area due to electron recombination at the TCO/electrolyte interface and that the output power loss of the shadowed DSC modules in series is much larger than that in parallel due to the 'breakdown' occurring at the TCO/electrolyte interface. The impact of shadow on the DSC performance is stronger with increase in irradiation intensity.

  17. Sensitivity Analysis of b-factor in Microwave Emission Model for Soil Moisture Retrieval: A Case Study for SMAP Mission

    Directory of Open Access Journals (Sweden)

    Dugwon Seo

    2010-05-01

    Full Text Available Sensitivity analysis is critically needed to better understand the microwave emission model for soil moisture retrieval using passive microwave remote sensing data. The vegetation b-factor, along with vegetation water content and surface characteristics, has a significant impact on model prediction. This study evaluates the sensitivity of the b-factor, which is a function of vegetation type. The analysis is carried out using the Passive and Active L- and S-band airborne sensor (PALS) and field soil moisture measured during the Southern Great Plains experiment (SGP99). The results show that the relative sensitivity of the b-factor is 86% in wet soil conditions and 88% in highly vegetated conditions compared to the sensitivity of the soil moisture. Apparently, the b-factor is found to be more sensitive than the vegetation water content, surface roughness and surface temperature; therefore, the b-factor has a fairly large effect on the microwave emission under certain conditions. Understanding the dependence of the b-factor on the soil and vegetation is important in studying the soil moisture retrieval algorithm, which can lead to potential improvements in model development for the Soil Moisture Active-Passive (SMAP) mission.

  18. Defining and detecting structural sensitivity in biological models: developing a new framework.

    Science.gov (United States)

    Adamson, M W; Morozov, A Yu

    2014-12-01

    When we construct mathematical models to represent biological systems, there is always uncertainty with regards to the model specification, whether with respect to the parameters or to the formulation of model functions. Sometimes choosing two different functions with close shapes in a model can result in substantially different model predictions: a phenomenon known in the literature as structural sensitivity, which is a significant obstacle to improving the predictive power of biological models. In this paper, we revisit the general definition of structural sensitivity, compare several more specific definitions and discuss their usefulness for the construction and analysis of biological models. Then we propose a general approach to reveal structural sensitivity with regards to certain system properties, which considers infinite-dimensional neighbourhoods of the model functions: a far more powerful technique than the conventional approach of varying parameters for a fixed functional form. In particular, we suggest a rigorous method to unearth sensitivity with respect to the local stability of systems' equilibrium points. We present a method for specifying the neighbourhood of a general unknown function with [Formula: see text] inflection points in terms of a finite number of local function properties, and provide a rigorous proof of its completeness. Using this powerful result, we implement our method to explore sensitivity in several well-known multicomponent ecological models and demonstrate the existence of structural sensitivity in these models. Finally, we argue that structural sensitivity is an important intrinsic property of biological models, and a direct consequence of the complexity of the underlying real systems.

  19. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    Science.gov (United States)

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection thresholds for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance, and no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable in 3D space, for example for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics of stereo blindness. In this paper, a CRT screen was rotated clockwise and anti-clockwise to form the inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. According to the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled based on the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.

  20. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    Science.gov (United States)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.

  1. Precipitates/Salts Model Sensitivity Calculation

    Energy Technology Data Exchange (ETDEWEB)

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO{sub 2}) on the chemical evolution of water in the drift.

  2. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to the development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine whether the areas of secondary hyperalgesia were consistently robust enough to be useful for phenotyping subjects based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject variability was low compared to between-subject variability, and there was substantial strength of agreement between repeated induction sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small-area' (1st quartile [<25%]) and 'large-area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems to be a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.

  3. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.

    1992-01-01

    The work done on this project was focussed mainly on LAMPF experiment E969, known as the MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → e ν_μ ν_e] ∼ 10⁻¹³, is over two orders of magnitude better than previously reported values. The work done on MEGA during this period was divided between that done at Valparaiso University and that done at LAMPF. In addition, some contributions were made to a proposal to the LAMPF PAC to perform a precision measurement of the Michel ρ parameter, described below.

  4. Mass hierarchy sensitivity of medium baseline reactor neutrino experiments with multiple detectors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hong-Xin, E-mail: hxwang@iphy.me [Department of Physics, Nanjing University, Nanjing 210093 (China); Zhan, Liang; Li, Yu-Feng; Cao, Guo-Fu [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Chen, Shen-Jian [Department of Physics, Nanjing University, Nanjing 210093 (China)

    2017-05-15

    We report the neutrino mass hierarchy (MH) determination of medium baseline reactor neutrino experiments with multiple detectors, where the sensitivity of measuring the MH can be significantly improved by adding a near detector. Then the impact of the baseline and target mass of the near detector on the combined MH sensitivity has been studied thoroughly. The optimal selections of the baseline and target mass of the near detector are ∼12.5 km and ∼4 kton respectively for a far detector with the target mass of 20 kton and the baseline of 52.5 km. As typical examples of future medium baseline reactor neutrino experiments, the optimal location and target mass of the near detector are selected for the specific configurations of JUNO and RENO-50. Finally, we discuss distinct effects of the reactor antineutrino energy spectrum uncertainty for setups of a single detector and double detectors, which indicate that the spectrum uncertainty can be well constrained in the presence of the near detector.

  5. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Radiation sensitivity of mammalian cells

    International Nuclear Information System (INIS)

    Koch, C.J.

    1985-01-01

    The authors tested various aspects of the so-called "competition" model for radiation sensitization/protection. In this model, sensitizers and/or protectors react in first-order chemical reactions with radiation-induced target radicals in the cell, producing damage fixation or repair, respectively. It is only because of these parallel, first-order competing reactions that net amounts of damage may be assigned on the basis of the chemical reactivity of the sensitizers/protectors with the radicals. It might be expected that such a simple model could not explain all aspects of cellular radiosensitivity, and this has indeed been found to be true. However, the simple model makes it possible to pose quite specific questions and to obtain quantitative information on the agreement between experiment and theory. Many experiments by several investigators have found areas of disagreement with the competition theory, particularly with respect to the following items: 1) the role of cellular glutathione as the most important endogenous radiation protector; 2) characteristics of various sensitizers which cause them to behave differently from each other; 3) methods relating to the quantitative kinetic analysis of experimental results. This paper addresses these specific areas of disagreement on both an experimental and a theoretical basis
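
    The competition model described above reduces to a branching ratio between two parallel pseudo-first-order reactions acting on the same pool of target radicals. A minimal sketch (the function name, rate constants, and concentrations are illustrative values, not data from the paper):

```python
def fixed_fraction(k_f, conc_s, k_r, conc_p):
    """Fraction of radiation-induced target radicals fixed as damage when
    damage fixation by a sensitizer S (pseudo-first-order rate k_f*[S]) and
    chemical repair by a protector P, e.g. glutathione (rate k_r*[P]),
    compete as parallel first-order reactions."""
    return (k_f * conc_s) / (k_f * conc_s + k_r * conc_p)

# Equal fixation and repair rates split the radicals 50/50:
print(fixed_fraction(1.0, 1.0, 1.0, 1.0))  # 0.5
```

    Raising the protector concentration lowers the fixed fraction and raising the sensitizer concentration increases it, which is the competition the model's name refers to.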

  7. Is Convection Sensitive to Model Vertical Resolution and Why?

    Science.gov (United States)

    Xie, S.; Lin, W.; Zhang, G. J.

    2017-12-01

    Model sensitivity to horizontal resolution has been studied extensively, whereas model sensitivity to vertical resolution is much less explored. In this study, we use the US Department of Energy (DOE)'s Accelerated Climate Modeling for Energy (ACME) atmosphere model to examine the sensitivity of clouds and precipitation to an increase in the model's vertical resolution. We attempt to understand what causes the change in behavior (if any) of the convective processes represented by the unified shallow and turbulent scheme CLUBB (Cloud Layers Unified by Binormals) and the Zhang-McFarlane deep convection scheme in ACME. A short-term hindcast approach is used to isolate parameterization issues from the large-scale circulation. The analysis emphasizes how the change in vertical resolution could affect precipitation partitioning between the convective and grid scales, as well as the vertical profiles of convection-related quantities such as temperature, humidity, clouds, convective heating and drying, and entrainment and detrainment. The goal is to provide physical insight into potential issues with model convective processes associated with an increase in model vertical resolution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value; it is called partial because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationships of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of losses of crew life (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities for each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
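
    The PRCC computation described above can be sketched in a few lines of generic code. This is an illustrative implementation of the standard PRCC definition, not the IMM team's actual code; the toy model and sample sizes in the usage comment are invented for demonstration:

```python
import numpy as np

def rank(a):
    """Rank-transform a 1-D array (1 = smallest); tie handling omitted for brevity."""
    r = np.empty(a.size)
    r[np.argsort(a)] = np.arange(1, a.size + 1)
    return r

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each input column of X with output y.

    For input i, the linear effects of all *other* ranked inputs are regressed
    out of both rank(x_i) and rank(y); the PRCC is the correlation of the residuals.
    """
    Xr = np.column_stack([rank(c) for c in X.T])
    yr = rank(y)
    n, k = Xr.shape
    out = np.empty(k)
    for i in range(k):
        # Design matrix of all other ranked inputs plus an intercept
        Z = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out[i] = np.corrcoef(rx, ry)[0, 1]
    return out
```

    Because only ranks enter the calculation, a monotone but nonlinear input-output relationship still yields a PRCC near 1, which is why the method suits nonlinear simulators such as the IMM.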

  9. Monte Carlo modeling of a High-Sensitivity MOSFET dosimeter for low- and medium-energy photon sources

    International Nuclear Information System (INIS)

    Wang, Brian; Kim, C.-H.; Xu, X. George

    2004-01-01

    Metal-oxide-semiconductor field effect transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy and diagnostic radiology. While it is difficult to characterize the dosimeter responses for monoenergetic sources by experiments, this paper reports a detailed Monte Carlo simulation model of the High-Sensitivity MOSFET dosimeter using Monte Carlo N-Particle (MCNP) 4C. A dose estimator method was used to calculate the dose in the extremely thin sensitive volume. Efforts were made to validate the MCNP model using three experiments: (1) comparison of the simulated dose with measurements for a Cs-137 source, (2) comparison of the simulated dose with analytical values, and (3) comparison of the simulated energy dependence with theoretical values. Our simulation results show that the MOSFET dosimeter has a maximum response at a photon energy of about 40 keV. The energy dependence curve is also found to agree with the value predicted from theory within statistical uncertainties. The angular dependence study shows that the MOSFET dosimeter has a higher response (by about 8%) when photons come from the epoxy side, compared with the Kapton side, for the Cs-137 source

  10. An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.

    Science.gov (United States)

    Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C

    2013-08-01

    To simulate the consequences of management in dairy herds, the use of individual-based herd models is very useful and has become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built, mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions of reproductive performance, particularly concerning the calving-to-first-ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phase), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data from the database used to build the model and from the bibliography to validate the model. Despite comprising a whole series of reproductive steps, the model made it possible to simulate realistic global reproduction outputs. It was able to simulate well the overall reproductive performance observed in farms, in terms of both success rate (recalving rate) and reproduction delays (calving interval). This model is intended to be integrated into herd simulation models to usefully test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.
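
    A "succession of discrete stochastic events" of the kind described above can be sketched as follows. This is a toy example with invented probabilities and intervals, not the published model, which additionally modulates each step by MY and BCS:

```python
import random

def simulate_cow(conception_p=0.45, max_services=6, cycle_days=21, seed=None):
    """Toy discrete stochastic reproduction sequence (hypothetical parameters):
    after an assumed 50-day calving-to-first-service interval, each 21-day
    oestrous cycle offers one service with a fixed conception probability,
    until pregnancy or culling after max_services failures."""
    rng = random.Random(seed)
    days = 50  # assumed calving-to-first-service interval
    for service in range(1, max_services + 1):
        if rng.random() < conception_p:
            return days  # days from calving to conception
        days += cycle_days
    return None  # failed to conceive within max_services -> culled
```

    Running many such cows and recording the distribution of calving-to-conception intervals and the culling fraction mirrors, in miniature, how the individual-based model produces herd-level recalving rates and calving intervals.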

  11. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  12. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I. (University of California, San Diego); Winey, J. Michael (Washington State University); Gupta, Yogendra Mohan (Washington State University); Lane, J. Matthew D.; Ditmire, Todd (University of Texas at Austin); Quevedo, Hernan J. (University of Texas at Austin)

    2011-10-01

    Molecular dynamics (MD) simulation is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast-ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  13. Laryngeal sensitivity evaluation and dysphagia: Hospital Sírio-Libanês experience

    Directory of Open Access Journals (Sweden)

    Orlando Parise Junior

    CONTEXT: Laryngeal sensitivity is important in the coordination of swallowing and the avoidance of aspiration. OBJECTIVE: To briefly review the physiology of swallowing and report on our experience with laryngeal sensitivity evaluation among patients presenting dysphagia. TYPE OF STUDY: Prospective. SETTING: Endoscopy Department, Hospital Sírio-Libanês. METHODS: Clinical data, endoscopic findings from the larynx and laryngeal sensitivity, as assessed via the Flexible Endoscopic Evaluation of Swallowing with Sensory Testing (FEESST) protocol (using the Pentax AP4000 system), were prospectively studied. The chi-squared and Student t tests were used to compare differences, which were considered significant if p ≤ 0.05. RESULTS: The study included 111 patients. A direct association was observed for hyperplasia and hyperemia of the posterior commissure region in relation to globus (p = 0.01) and regurgitation (p = 0.04). Hyperemia of the posterior commissure region had a direct association with sialorrhea (p = 0.03) and an inverse association with xerostomia (p = 0.03). There was a direct association between severe laryngeal sensitivity deficit and previous radiotherapy of the head and neck (p = 0.001). DISCUSSION: These data emphasize the association between proximal gastroesophageal reflux and chronic posterior laryngitis, and suggest that decreased laryngeal sensitivity could be a side effect of radiotherapy. CONCLUSIONS: Even considering that these results are preliminary, the endoscopic findings from laryngoscopy seem to be important in the diagnosis of proximal gastroesophageal reflux. Study of laryngeal sensitivity may have the potential for improving the knowledge and clinical management of dysphagia.

  14. Computer models versus reality: how well do in silico models currently predict the sensitization potential of a substance.

    Science.gov (United States)

    Teubner, Wera; Mehling, Anette; Schuster, Paul Xaver; Guth, Katharina; Worth, Andrew; Burton, Julien; van Ravenzwaay, Bennard; Landsiedel, Robert

    2013-12-01

    National legislation for the assessment of the skin sensitization potential of chemicals is increasingly based on the Globally Harmonized System (GHS). In this study, experimental data on 55 non-sensitizing and 45 sensitizing chemicals were evaluated according to GHS criteria and used to test the performance of computer (in silico) models for the prediction of skin sensitization. Statistical models (Vega, Case Ultra, TOPKAT), mechanistic models (Toxtree, OECD (Q)SAR toolbox, DEREK) and a hybrid model (TIMES-SS) were evaluated. Between three and nine of the substances evaluated were found in the individual training sets of the various models. Mechanism-based models performed better than statistical models and gave better predictivities depending on the stringency of the domain definition. The best performance was achieved by TIMES-SS, with perfect prediction, although only 16% of the substances were within its reliability domain. Some models offer modules for potency; however, the predictions did not correlate well with the GHS sensitization subcategory derived from the experimental data. In conclusion, although mechanistic models can be used to a certain degree under well-defined conditions, at present the in silico models are not sufficiently accurate for broad application to predict skin sensitization potentials. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate the model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model, as compared to a conventional time series model, indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. In the modified model, however, the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6%, and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. Consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and also because the confidence level decreases as the projection is made further into the future. (author)

  16. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.

    1993-01-01

    The work done on this project was focused on two LAMPF experiments. The first is the MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → e ν_μ ν̄_e] ∼ 10⁻¹³, is over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → e ν_μ ν̄_e to test the V−A theory of weak interactions. The uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the presently reported value

  17. Experiment of solidifying photo sensitive polymer by using UV LED

    Science.gov (United States)

    Kang, Byoung Hun; Shin, Sung Yeol

    2008-11-01

    Nano/micro manufacturing technologies are developing rapidly, and investment in these areas is growing accordingly. Applications of nano/micro technologies are spreading to semiconductor production, biotechnology, environmental engineering, chemical engineering and aerospace. In particular, stereolithography (SLA), which manufactures 3D-shaped microstructures using a UV laser and a photosensitive polymer, is one of the most popular applications. To produce microstructures with the high accuracy and precision required by these diverse industrial fields, information on the interaction between the photosensitive resin and the light source is needed for further research. This paper presents experiments on solidifying a photosensitive polymer using a UV LED; the purpose of this study is to determine how the curing reaction of the resin depends on the wavelength and power of the light and on the exposure time.

  18. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig

  19. Data on the experiments of temperature-sensitive hydrogels for pH-sensitive drug release and the characterizations of materials

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2018-04-01

    This article contains experimental data on the strain sweep, the calibration curve of the drug (doxorubicin, DOX) and the characterization of materials. The data are related to the research article "Injectable and body temperature sensitive hydrogels based on chitosan and hyaluronic acid for pH sensitive drug release" (Zhang et al., 2017 [1]). The strain sweep experiments were performed on a rotational rheometer. The calibration curves were obtained by analyzing the absorbance of DOX solutions on a UV–vis–NIR spectrometer. The molecular weights (Mw) of the hyaluronic acid (HA) and chitosan (CS) were determined by gel permeation chromatography (GPC). The degree of deacetylation of CS was measured by acid-base titration.

  20. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model-building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models: one for the drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], which was extended by including the gas velocity as an extra input compared to our earlier work. β2 was found to be the most important factor for the single-particle model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...

  1. Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling

    National Research Council Canada - National Science Library

    Sakhanenko, Nikita A; Luger, George F

    2008-01-01

    We propose a context-sensitive probabilistic modeling system (COSMOS) that reasons about a complex, dynamic environment through a series of applications of smaller, knowledge-focused models representing contextually relevant information...

  2. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs, as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observations of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the temporal sensitivity of the RRM to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while

  3. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey of the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  4. Investigating the sensitivity of hurricane intensity and trajectory to sea surface temperatures using the regional model WRF

    Directory of Open Access Journals (Sweden)

    Cevahir Kilic

    2013-12-01

    The influence of sea surface temperature (SST) anomalies on hurricane characteristics is investigated in a set of sensitivity experiments employing the Weather Research and Forecasting (WRF) model. The idealised experiments are performed for the case of Hurricane Katrina in 2005. The first set of sensitivity experiments, with basin-wide changes of the SST magnitude, shows that the intensity goes along with changes in the SST, i.e., an increase in SST leads to an intensification of Katrina. Additionally, the trajectory is shifted to the west (east) with increasing (decreasing) SSTs. The main reason is a strengthening of the background flow. The second set of experiments investigates the influence of Loop Current eddies, idealised by localised SST anomalies. The intensity of Hurricane Katrina is enhanced with increasing SSTs close to the core of the tropical cyclone. Negative nearby SST anomalies reduce the intensity. The trajectory only changes if positive SST anomalies are located west or north of the hurricane centre. In this case the hurricane is attracted by the SST anomaly, which provides an additional moisture source and increased vertical winds.

  5. Structural development and web service based sensitivity analysis of the Biome-BGC MuSo model

    Science.gov (United States)

    Hidy, Dóra; Balogh, János; Churkina, Galina; Haszpra, László; Horváth, Ferenc; Ittzés, Péter; Ittzés, Dóra; Ma, Shaoxiu; Nagy, Zoltán; Pintér, Krisztina; Barcza, Zoltán

    2014-05-01

    -BGC with multi-soil layer). Within the frame of the BioVeL project (http://www.biovel.eu), an open-source and domain-independent scientific workflow management system (http://www.taverna.org.uk) is used to support 'in silico' experimentation and easy applicability of different models, including Biome-BGC MuSo. Workflows can be built upon functionally linked sets of web services, such as retrieval of meteorological datasets and other parameters; preparation of single-run or spatial-run model simulations; desktop-grid-technology-based Monte Carlo experiments with parallel processing; and model sensitivity analysis. The newly developed Monte Carlo based sensitivity analysis is described in this study, and results are presented on differences in the sensitivity of the original and the developed Biome-BGC model.

  6. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with the selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include the wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While the model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  7. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most recent applications of GSA use either regression-based methods, which require close-to-linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.), there is an interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system, including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBRs that takes biological and physical processes into account simultaneously. © 2013.
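
    Extended-FAST itself samples the factor space along space-filling search curves; as a minimal illustration of the variance-based idea it estimates, the first-order index S_i = Var(E[Y|X_i]) / Var(Y) can be approximated by simple conditional binning. This sketch uses an invented two-factor toy model, not the wastewater model of the paper:

```python
import numpy as np

def first_order_index(xi, y, bins=20):
    """Estimate the first-order variance-based sensitivity index
    S_i = Var(E[Y|X_i]) / Var(Y) by binning samples of factor X_i
    on its quantiles and comparing conditional means to the overall mean."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()
```

    For an additive model such as Y = X1 + 2*X2 with independent uniform factors, the indices are analytically S1 = 0.2 and S2 = 0.8, which the estimator recovers; interactions and non-additivity, as found in the MBR study, show up as first-order indices summing to less than one.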

  8. Diagnosis and Quantification of Climatic Sensitivity of Carbon Fluxes in Ensemble Global Ecosystem Models

    Science.gov (United States)

    Wang, W.; Hashimoto, H.; Milesi, C.; Nemani, R. R.; Myneni, R.

    2011-12-01

Terrestrial ecosystem models are primary scientific tools to extrapolate our understanding of ecosystem functioning from point observations to global scales, as well as from past climatic conditions into the future. However, no model is perfect, and considerable structural uncertainties often exist between different models. Ensemble model experiments have thus become a mainstream approach for evaluating the current status of the global carbon cycle and predicting its future changes. A key task in such applications is to quantify the sensitivity of the simulated carbon fluxes to climate variations and changes. Here we develop a systematic framework to address this question solely by analyzing the inputs and outputs of the models. The principle of our approach is to treat the long-term (~30 years) average of the inputs/outputs as a quasi-equilibrium of the climate-vegetation system, while treating the anomalies of carbon fluxes as responses to climatic disturbances. In this way, the corresponding relationships can be largely linearized and analyzed using conventional time-series techniques. This method is used to characterize three major aspects of the vegetation models that are most important to the global carbon cycle, namely primary production, biomass dynamics, and ecosystem respiration. We apply this analytical framework to quantify the climatic sensitivity of an ensemble of models including CASA, Biome-BGC and LPJ, as well as several other DGVMs from previous studies, all driven by the CRU-NCEP climate dataset. The detailed analysis results are reported in this study.

  9. Mass hierarchy sensitivity of medium baseline reactor neutrino experiments with multiple detectors

    Directory of Open Access Journals (Sweden)

    Hong-Xin Wang

    2017-05-01

Full Text Available We report on the neutrino mass hierarchy (MH) determination of medium baseline reactor neutrino experiments with multiple detectors, where the sensitivity of measuring the MH can be significantly improved by adding a near detector. The impact of the baseline and target mass of the near detector on the combined MH sensitivity has been studied thoroughly. The optimal baseline and target mass of the near detector are ∼12.5 km and ∼4 kton respectively, for a far detector with a target mass of 20 kton and a baseline of 52.5 km. As typical examples of future medium baseline reactor neutrino experiments, the optimal location and target mass of the near detector are selected for the specific configurations of JUNO and RENO-50. Finally, we discuss the distinct effects of the reactor antineutrino energy spectrum uncertainty for single-detector and double-detector setups, which indicate that the spectrum uncertainty can be well constrained in the presence of the near detector.

  10. A context-sensitive trust model for online social networking

    CSIR Research Space (South Africa)

    Danny, MN

    2016-11-01

    Full Text Available of privacy attacks. In the quest to address this problem, this paper proposes a context-sensitive trust model. The proposed trust model was designed using fuzzy logic theory and implemented using MATLAB. Contrary to existing trust models, the context...

  11. Reproducibility of the heat/capsaicin skin sensitization model in healthy volunteers

    Directory of Open Access Journals (Sweden)

    Cavallone LF

    2013-11-01

Full Text Available Laura F Cavallone,1 Karen Frey,1 Michael C Montana,1 Jeremy Joyal,1 Karen J Regina,1 Karin L Petersen,2 Robert W Gereau IV1; 1Department of Anesthesiology, Washington University in St Louis, School of Medicine, St Louis, MO, USA; 2California Pacific Medical Center Research Institute, San Francisco, CA, USA. Introduction: Heat/capsaicin skin sensitization is a well-characterized human experimental model to induce hyperalgesia and allodynia. Using this model, gabapentin, among other drugs, was shown to significantly reduce cutaneous hyperalgesia compared to placebo. Since the larger thermal probes used in the original studies to produce heat sensitization are now commercially unavailable, we decided to assess whether previous findings could be replicated with a currently available smaller probe (heated area 9 cm2 versus 12.5–15.7 cm2). Study design and methods: After Institutional Review Board approval, 15 adult healthy volunteers participated in two study sessions, scheduled 1 week apart (Part A). In both sessions, subjects were exposed to the heat/capsaicin cutaneous sensitization model. Areas of hypersensitivity to brush stroke and von Frey (VF) filament stimulation were measured at baseline and after rekindling of skin sensitization. Another group of 15 volunteers was exposed to an identical schedule and set of sensitization procedures, but, in each session, received either gabapentin or placebo (Part B). Results: Unlike previous reports, a similar reduction of areas of hyperalgesia was observed in all groups/sessions. Fading of areas of hyperalgesia over time was observed in Part A. In Part B, there was no difference in area reduction after gabapentin compared to placebo. Conclusion: When using smaller thermal probes than originally proposed, modifications of other parameters of sensitization and/or the rekindling process may be needed to allow the heat/capsaicin sensitization protocol to be used as initially intended. Standardization and validation of

  12. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

The present paper deals with a new method of global sensitivity analysis of nonlinear models. It is based on a measure of importance that calculates the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. Most emphasis is given to the development of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology that is flexible, accurate and informative, and that can be applied at reasonable computational cost.
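The 'total effect' index described here can be estimated with a pick-freeze sampling design. A minimal sketch using the commonly cited Saltelli-style first-order estimator and Jansen-style total-effect estimator on a toy model with a deliberately strong interaction (the model and all values are illustrative, not from the paper):

```python
import random

random.seed(7)

def f(x):
    # Toy model with a strong x1-x2 interaction; the interaction is exactly
    # what separates the total-effect index from the first-order index.
    return x[0] + x[1] + 5.0 * x[0] * x[1]

N, K = 50_000, 2
A = [[random.random() for _ in range(K)] for _ in range(N)]
B = [[random.random() for _ in range(K)] for _ in range(N)]
fA = [f(a) for a in A]
fB = [f(b) for b in B]

mean = sum(fA + fB) / (2 * N)
var = sum((y - mean) ** 2 for y in fA + fB) / (2 * N)

def indices(i):
    # AB_i: rows of A with column i taken from B (pick-freeze design)
    fAB = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    # First-order index S_i and total-effect index ST_i
    s_i = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fAB, fA)) / N / var
    st_i = sum((fa - fab) ** 2 for fa, fab in zip(fA, fAB)) / (2 * N) / var
    return s_i, st_i

s1, st1 = indices(0)
```

For this model the total-effect index exceeds the first-order index; the gap is exactly the synergetic (interaction) contribution the record describes.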

  13. Modelling small scale infiltration experiments into bore cores of crystalline rock and break-through curves

    International Nuclear Information System (INIS)

    Hadermann, J.; Jakob, A.

    1987-04-01

    Uranium infiltration experiments for small samples of crystalline rock have been used to model radionuclide transport. The theory, taking into account advection and dispersion in water conducting zones, matrix diffusion out of these, and sorption, contains four independent parameters. It turns out, that the physical variables extracted from those of the best-fit parameters are consistent with values from literature and independent measurements. Moreover, the model results seem to differentiate between various geometries for the water conducting zones. Alpha-autoradiographies corroborate this result. A sensitivity analysis allows for a judgement on parameter dependences. Finally some proposals for further experiments are made. (author)

  14. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step are given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
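Step (2), the parameter screening study, is commonly done with elementary-effects (Morris-style) screening: perturb one factor at a time from random base points and rank factors by the mean absolute effect. A generic sketch on a hypothetical three-parameter model (an illustration of the technique, not PSUADE's implementation):

```python
import random

random.seed(3)

def model(x):
    # Hypothetical response; coefficients chosen so importance is x1 > x2 > x3
    return 5.0 * x[0] + 1.0 * x[1] + 0.1 * x[2] ** 2

def mu_star(n_traj=20, delta=0.1, k=3):
    """Mean absolute elementary effect per factor (Morris-style screening)."""
    acc = [0.0] * k
    for _ in range(n_traj):
        base = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = model(base)
        for i in range(k):
            pert = list(base)
            pert[i] += delta                       # one-at-a-time step
            acc[i] += abs(model(pert) - y0) / delta
    return [a / n_traj for a in acc]

mu = mu_star()   # ranking: mu[0] > mu[1] > mu[2]
```

The factors with small mean effects are dropped before the (much more expensive) quantitative step (3) is run on the reduced set.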

  15. Sensitivity Analysis on LOCCW of Westinghouse typed Reactors Considering WOG2000 RCP Seal Leakage Model

    International Nuclear Information System (INIS)

    Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won

    2015-01-01

In this paper, we focus on risk insights of Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). As we reflected the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', instead of the old version, RCP seal integrity became even more important to Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and upgrade the full power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full power PSA models, we have tried to standardize the methodology of CCF (Common Cause Failure) and HRA (Human Reliability Analysis), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experiences, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Pressurized Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the method of the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW of the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree of LOCCW considering all scenarios of RCP seal failures. We also performed HRA based on the T/H analysis, using the leakage rates for each scenario. We found that HRA was a sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.

  16. Sensitivity Analysis on LOCCW of Westinghouse typed Reactors Considering WOG2000 RCP Seal Leakage Model

    Energy Technology Data Exchange (ETDEWEB)

    Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

In this paper, we focus on risk insights of Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). As we reflected the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', instead of the old version, RCP seal integrity became even more important to Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and upgrade the full power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full power PSA models, we have tried to standardize the methodology of CCF (Common Cause Failure) and HRA (Human Reliability Analysis), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experiences, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Pressurized Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the method of the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW of the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree of LOCCW considering all scenarios of RCP seal failures. We also performed HRA based on the T/H analysis, using the leakage rates for each scenario. We found that HRA was a sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.

  17. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally in eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit forms. • Parameter sensitivity is discussed for different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed using a one-dimensional (1D) modeling approach. The dynamic effects of double layer capacitance on the electrochemical domain and the dynamic effect of thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time constant expressions for the different dynamic phenomena in the SOFC have been given and discussed in detail. A parameter sensitivity study has also been performed and discussed using the statistical Multi Parameter Sensitivity Analysis (MPSA) method, in order to investigate the impact of parameters on the modeling accuracy.

  18. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    Energy Technology Data Exchange (ETDEWEB)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.

  19. Model Forecast Skill and Sensitivity to Initial Conditions in the Seasonal Sea Ice Outlook

    Science.gov (United States)

    Blanchard-Wrigglesworth, E.; Cullather, R. I.; Wang, W.; Zhang, J.; Bitz, C. M.

    2015-01-01

    We explore the skill of predictions of September Arctic sea ice extent from dynamical models participating in the Sea Ice Outlook (SIO). Forecasts submitted in August, at roughly 2 month lead times, are skillful. However, skill is lower in forecasts submitted to SIO, which began in 2008, than in hindcasts (retrospective forecasts) of the last few decades. The multimodel mean SIO predictions offer slightly higher skill than the single-model SIO predictions, but neither beats a damped persistence forecast at longer than 2 month lead times. The models are largely unsuccessful at predicting each other, indicating a large difference in model physics and/or initial conditions. Motivated by this, we perform an initial condition sensitivity experiment with four SIO models, applying a fixed -1 m perturbation to the initial sea ice thickness. The significant range of the response among the models suggests that different model physics make a significant contribution to forecast uncertainty.
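The damped persistence benchmark mentioned here simply shrinks the current anomaly toward climatology by the lagged autocorrelation at the forecast lead time. A sketch on a synthetic AR(1) series standing in for a sea ice extent anomaly record (the data and AR(1) parameter are assumptions for illustration, not SIO extents):

```python
import math
import random

random.seed(11)

# Synthetic AR(1) anomaly series as stand-in data
phi, n = 0.7, 400
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def lag_corr(series, lag):
    """Pearson correlation between the series and itself shifted by `lag`."""
    a, b = series[:-lag], series[lag:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

lead = 2                       # e.g. a 2 month lead time
r = lag_corr(x, lead)
anom = x[-1]
persistence = anom             # carry the current anomaly forward unchanged
damped = r * anom              # shrink toward climatology by the lag correlation
```

Because |r| ≤ 1, the damped forecast never projects a larger anomaly than plain persistence, which is what makes it a hard baseline to beat at longer leads.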

  20. Modelling survival: exposure pattern, species sensitivity and uncertainty.

    Science.gov (United States)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G

    2016-07-06

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
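The key GUTS feature the record highlights, handling time-variable exposure, can be illustrated with the stochastic-death special case (GUTS-SD): scaled damage tracks the external concentration with first-order kinetics, and hazard accrues whenever damage exceeds a threshold. The parameter names `kd`, `z`, `b` follow common GUTS notation, but the values and exposure profiles below are illustrative assumptions, not calibrated estimates:

```python
import math

def guts_sd_survival(conc, dt, kd=0.5, z=1.0, b=0.3):
    """Minimal GUTS-SD sketch under a time-variable exposure profile `conc`."""
    D, H = 0.0, 0.0                        # scaled damage, cumulative hazard
    surv = [1.0]
    for c in conc:
        D += dt * kd * (c - D)             # toxicokinetics (one compartment)
        H += dt * b * max(0.0, D - z)      # toxicodynamics: hazard above threshold
        surv.append(math.exp(-H))          # survival probability
    return surv

# Pulsed vs constant exposure with the same time-averaged concentration
dt = 0.1
pulsed = [4.0 if (t // 50) % 2 == 0 else 0.0 for t in range(1000)]
constant = [2.0] * 1000
s_pulsed = guts_sd_survival(pulsed, dt)
s_constant = guts_sd_survival(constant, dt)
```

Running both profiles shows how the same average dose can yield different survival depending on its pattern over time, the effect that drives the exposure-dependent sensitivity rankings discussed in the record.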

  1. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.

  2. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
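The derivative propagation that GRESS automates at the compiler level can be illustrated with forward-mode dual numbers: run the unmodified model arithmetic on value/derivative pairs and the chain rule is applied operation by operation. This is a conceptual sketch of the underlying calculus rules, not the GRESS system itself:

```python
class Dual:
    """Forward-mode dual number: carries a value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x):
    # Stands in for an existing model routine; running it on Dual
    # values yields df/dx alongside f(x) with no hand-written derivative.
    return x * x + 3 * x + 1

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = f(x)             # y.val = f(2) = 11, y.dot = f'(2) = 2*2 + 3 = 7
```

A source-to-source tool like GRESS achieves the same effect by emitting the derivative statements into the FORTRAN code itself rather than overloading operators at run time.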

  3. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.

  4. Comprehensive ecosystem model-experiment synthesis using multiple datasets at two temperate forest free-air CO2 enrichment experiments: model performance and compensating biases

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Anthony P [ORNL; Hanson, Paul J [ORNL; DeKauwe, Martin G [Macquarie University; Medlyn, Belinda [Macquarie University; Zaehle, S [Max Planck Institute for Biogeochemistry; Asao, Shinichi [Colorado State University, Fort Collins; Dietze, Michael [University of Illinois, Urbana-Champaign; Hickler, Thomas [Goethe University, Frankfurt, Germany; Huntinford, Chris [Centre for Ecology and Hydrology, Wallingford, United Kingdom; Iversen, Colleen M [ORNL; Jain, Atul [University of Illinois, Urbana-Champaign; Lomas, Mark [University of Sheffield; Luo, Yiqi [University of Oklahoma; McCarthy, Heather R [Duke University; Parton, William [Colorado State University, Fort Collins; Prentice, I. Collin [Macquarie University; Thornton, Peter E [ORNL; Wang, Shusen [Canada Centre for Remote Sensing (CCRS); Wang, Yingping [CSIRO Marine and Atmospheric Research; Warlind, David [Lund University, Sweden; Weng, Ensheng [University of Oklahoma, Norman; Warren, Jeffrey [ORNL; Woodward, F. Ian [University of Sheffield; Oren, Ram [Duke University; Norby, Richard J [ORNL

    2014-01-01

Free Air CO2 Enrichment (FACE) experiments provide a remarkable wealth of data to test the sensitivities of terrestrial ecosystem models (TEMs). In this study, a broad set of 11 TEMs were compared to 22 years of data from two contrasting FACE experiments in temperate forests of the south-eastern US: the evergreen Duke Forest and the deciduous Oak Ridge forest. We evaluated the models' ability to reproduce observed net primary productivity (NPP), transpiration and leaf area index (LAI) in ambient CO2 treatments. Encouragingly, many models simulated annual NPP and transpiration within observed uncertainty. Daily transpiration model errors were often related to errors in leaf area phenology and peak LAI. Our analysis demonstrates that the simulation of LAI often drives the simulation of transpiration, and hence there is a need to adopt the most appropriate hypothesis-driven methods to simulate and predict LAI. Of the three competing hypotheses determining peak LAI, (1) optimisation to maximise carbon export, (2) increasing SLA with canopy depth and (3) the pipe model, the pipe model produced LAI closest to the observations. Modelled phenology was either prescribed or based on broader empirical calibrations to climate. In some cases, simulation accuracy was achieved through compensating biases in component variables. For example, NPP accuracy was sometimes achieved with counter-balancing biases in nitrogen use efficiency and nitrogen uptake. Combined analysis of parallel measurements aids the identification of offsetting biases; without it, over-confidence in model abilities to predict ecosystem function may emerge, potentially leading to erroneous predictions of change under future climates.

  5. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over modelling," in site characterization planning to avoid "over collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed.

  6. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment.
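The derivative-based propagation idea behind a method like DUA reduces, at first order, to Var(y) ≈ Σ (∂y/∂x_i)² σ_i² for independent inputs. A sketch comparing that approximation against a Monte Carlo check on a toy model (the model, nominal values and standard deviations are assumptions for illustration):

```python
import random

random.seed(5)

def model(a, b, c):
    # Stand-in response; the derivative-propagation idea applies to any smooth code
    return a * b + c

nom = (2.0, 3.0, 1.0)     # nominal input values (assumed)
sig = (0.1, 0.1, 0.1)     # input standard deviations (assumed)

def grad(f, x, h=1e-6):
    """Central finite-difference gradient (stands in for compiler-generated derivatives)."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(*xp) - f(*xm)) / (2 * h))
    return g

# First-order propagation: Var(y) ≈ sum((dy/dx_i)**2 * sigma_i**2)
g = grad(model, nom)
var_dua = sum((gi * si) ** 2 for gi, si in zip(g, sig))

# Monte Carlo check with independent Gaussian inputs
ys = [model(*(random.gauss(m, s) for m, s in zip(nom, sig))) for _ in range(200_000)]
mean = sum(ys) / len(ys)
var_mc = sum((y - mean) ** 2 for y in ys) / len(ys)
```

For small input uncertainties the two variance estimates agree closely, at a tiny fraction of the Monte Carlo cost, which is the efficiency argument the record makes for the deterministic approach.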

  7. Dynamic experiments with high bisphenol-A concentrations modelled with an ASM model extended to include a separate XOC degrading microorganism

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Press-Kristensen, Kåre; Vanrolleghem, P.A.

    2009-01-01

The perspective of this work is to develop a model, which can be used to better understand and optimize wastewater treatment plants that are able to remove xenobiotic organic compounds (XOCs) in combination with removal of traditional pollutants. Results from dynamic experiments conducted with the endocrine disrupting XOC bisphenol-A (BPA) in an activated sludge process with real wastewater were used to hypothesize an ASM-based process model including aerobic growth of a specific BPA-degrading microorganism and sorption of BPA to sludge. A parameter estimation method was developed, which simultaneously utilizes steady-state background concentrations and dynamic step response data, as well as conceptual simplifications of the plant configuration. Validation results show that biodegradation of BPA is sensitive to operational conditions before and during the experiment and that the proposed model
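The hypothesized process structure, Monod growth of a specific BPA degrader plus sorption of BPA to sludge, can be sketched with a simple Euler integration. All parameter names and values below are illustrative assumptions, not the paper's calibrated estimates:

```python
def simulate(t_end=10.0, dt=0.01,
             mu_max=0.5, Ks=2.0, Y=0.4, Kd=0.2,
             S0=10.0, X0=1.0):
    """Euler sketch: soluble BPA `S` consumed by Monod growth of degrader
    biomass `X`, with a linear sorption term as a crude stand-in for
    partitioning onto sludge. Illustrative only."""
    S, X = S0, X0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S)          # Monod specific growth rate
        dS = -(mu / Y) * X                  # substrate consumed for growth
        dX = mu * X                         # biomass growth
        S = max(0.0, S + dt * dS)
        X = X + dt * dX
    sorbed = Kd * S * X                     # linear partitioning sketch
    return S, X, sorbed

S_end, X_end, sorbed = simulate()
```

Over the run the soluble BPA is drawn down while the specific degrader population grows, the qualitative behaviour the record's step-response experiments are designed to identify.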

  8. Ensemble of cell survival experiments after ion irradiation for validation of RBE models

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Thomas; Scholz, Uwe; Scholz, Michael [GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Durante, Marco [GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Institut fuer Festkoerperphysik, TU Darmstadt, Darmstadt (Germany)

    2012-07-01

There is persistent interest in understanding the systematics of the relative biological effectiveness (RBE). Models such as the Local Effect Model (LEM) or the Microdosimetric Kinetic Model aim to predict the RBE. For the validation of these models, a collection of many in-vitro cell survival experiments is most appropriate. The set-up of an ensemble of in-vitro cell survival data comprising about 850 survival experiments after both ion and photon irradiation is reported. The survival curves have been taken from publications. The experiments encompass survival curves obtained in different labs, using different ion species from protons to uranium, varying irradiation modalities (shaped or monoenergetic beam), various energies and linear energy transfers, and a whole variety of cell types (human or rodent; normal, mutagenic or tumor; radioresistant or -sensitive). Each cell survival curve has been parameterized by the linear-quadratic model. The photon parameters have been added to the database to allow calculation of the experimental RBE at any survival level. We report on experimental trends found within the data ensemble. The data will serve as a testing ground for RBE models such as the LEM. Finally, a roadmap for further validation and first model results using the database in combination with the LEM are presented.
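With each survival curve parameterized by the linear-quadratic model, S(D) = exp(-(αD + βD²)), the RBE at a chosen survival level is the ratio of the photon dose to the ion dose producing that level; the dose follows from inverting the quadratic. A sketch with illustrative LQ parameters (not values taken from the database):

```python
import math

def dose_at_survival(alpha, beta, surv):
    """Invert S = exp(-(alpha*D + beta*D**2)) for the dose D giving survival `surv`."""
    e = -math.log(surv)                    # alpha*D + beta*D**2 = e
    if beta == 0.0:
        return e / alpha
    # Positive root of beta*D**2 + alpha*D - e = 0
    return (-alpha + math.sqrt(alpha ** 2 + 4.0 * beta * e)) / (2.0 * beta)

# Illustrative LQ parameters (alpha in 1/Gy, beta in 1/Gy^2)
photon = (0.15, 0.05)
ion = (0.60, 0.05)      # steeper initial slope for the ion -> higher effectiveness

d_photon = dose_at_survival(*photon, 0.10)   # dose for 10% survival, photons
d_ion = dose_at_survival(*ion, 0.10)         # dose for 10% survival, ions
rbe_10 = d_photon / d_ion                    # RBE at the 10% survival level
```

Storing the photon α and β alongside each ion curve is what lets the database yield the experimental RBE at any survival level, as the record states.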

  9. An Animal Model of Trichloroethylene-Induced Skin Sensitization in BALB/c Mice.

    Science.gov (United States)

    Wang, Hui; Zhang, Jia-xiang; Li, Shu-long; Wang, Feng; Zha, Wan-sheng; Shen, Tong; Wu, Changhao; Zhu, Qi-xing

    2015-01-01

    Trichloroethylene (TCE) is a major occupational hazard and environmental contaminant that can cause multisystem disorders in the form of occupational medicamentosa-like dermatitis. Development of dermatitis involves several proinflammatory cytokines, but their role in TCE-mediated dermatitis has not been examined in a well-defined experimental model. In addition, few animal models of TCE sensitization are available, and the current guinea pig model has apparent limitations. This study aimed to establish a model of TCE-induced skin sensitization in BALB/c mice and to examine the role of several key inflammatory cytokines in TCE sensitization. The sensitization rate of the dorsal-painted group was 38.3%. Skin edema and erythema occurred in the TCE-sensitized groups, as in the 2,4-dinitrochlorobenzene (DNCB) positive control. The TCE sensitization-positive (dermatitis [+]) group exhibited increased epidermal thickness, inflammatory cell infiltration, and swelling and necrosis in the dermis and around hair follicles, whereas the ear-painted group did not show these histological changes. The concentrations of the serum proinflammatory cytokines tumor necrosis factor (TNF)-α, interferon (IFN)-γ, and interleukin (IL)-2 were significantly increased in the 24-, 48-, and 72-hour dermatitis [+] groups treated with TCE and peaked at 72 hours. Deposition of TNF-α, IFN-γ, and IL-2 in the skin tissue was also revealed by immunohistochemistry. We have established a new animal model of skin sensitization induced by repeated TCE stimulations, and we provide the first evidence that key proinflammatory cytokines including TNF-α, IFN-γ, and IL-2 play an important role in the process of TCE sensitization. © The Author(s) 2015.

  10. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and to execute the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors followed by quantitative assessment of variability using Sobol's sensitivity analysis. The two-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
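The screen-then-quantify workflow can be sketched in plain NumPy. The toy model and parameter count below are illustrative stand-ins for the GastroPlus simulation, which is not reproduced here; stage 1 is a radial one-at-a-time Morris screen, stage 2 a Saltelli-style first-order Sobol' estimate restricted to the screened factors:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in: strongly driven by x0 and x1, weakly by the rest.
    return x[..., 0] + 2.0 * x[..., 1] + 0.01 * x[..., 2:].sum(axis=-1)

def morris_mu_star(f, dim, r=50, delta=0.25):
    # Stage 1: mean absolute elementary effect from radial OAT perturbations.
    ee = np.zeros((r, dim))
    for i in range(r):
        base = rng.uniform(0.0, 1.0 - delta, dim)
        f0 = f(base)
        for j in range(dim):
            pert = base.copy()
            pert[j] += delta
            ee[i, j] = (f(pert) - f0) / delta
    return np.abs(ee).mean(axis=0)

def sobol_first_order(f, dim, active, n=4096):
    # Stage 2: Saltelli estimator of first-order indices for screened factors.
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = {}
    for j in active:
        ABj = A.copy()
        ABj[:, j] = B[:, j]          # B except column j taken from... A swapped
        S[j] = np.mean(fB * (f(ABj) - fA)) / var
    return S
```

Only the factors surviving the cheap screen (largest mu*) are carried into the variance-based stage, which is where the computational saving for a large model comes from.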

  11. Improving axion detection sensitivity in high purity germanium detector based experiments

    Science.gov (United States)

    Xu, Wenqin; Elliott, Steven

    2015-04-01

    Thanks to their excellent energy resolution and low energy threshold, high purity germanium (HPGe) crystals are widely used in low background experiments searching for neutrinoless double beta decay, e.g. the MAJORANA DEMONSTRATOR and the GERDA experiments, and low mass dark matter, e.g. the CDMS and the EDELWEISS experiments. A particularly interesting candidate for low mass dark matter is the axion, which arises from the Peccei-Quinn solution to the strong CP problem and has been searched for in many experiments. Due to axion-photon coupling, the postulated solar axions could coherently convert to photons via the Primakeoff effect in periodic crystal lattices, such as those found in HPGe crystals. The conversion rate depends on the angle between axions and crystal lattices, so the knowledge of HPGe crystal axis is important. In this talk, we will present our efforts to improve the HPGe experimental sensitivity to axions by considering the axis orientations in multiple HPGe crystals simultaneously. We acknowledge the support of the U.S. Department of Energy through the LANL/LDRD Program.

  12. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
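An input-tolerance study of this kind boils down to a one-at-a-time finite-difference loop over the model inputs. A minimal sketch, where the three-parameter runoff function is a hypothetical stand-in for the 46-input watershed models of the study:

```python
import numpy as np

def streamflow(params):
    # Hypothetical watershed response: runoff from precipitation P scaled by
    # a runoff coefficient c, minus evapotranspiration losses ET.
    P, c, ET = params
    return max(c * P - ET, 0.0)

def oat_sensitivity(f, nominal, rel_step=0.01):
    # Perturb each input by +1% and report the relative change in output,
    # a finite-difference estimate of the normalized local sensitivity.
    base = f(nominal)
    sens = []
    for j in range(len(nominal)):
        pert = list(nominal)
        pert[j] *= 1.0 + rel_step
        sens.append((f(pert) - base) / (base * rel_step))
    return sens
```

Inputs with large normalized sensitivities are the ones whose remote-sensing measurement accuracy constrains the achievable streamflow simulation accuracy.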

  13. Use of Data Denial Experiments to Evaluate ESA Forecast Sensitivity Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Zack, J; Natenberg, E J; Knowe, G V; Manobianco, J; Waight, K; Hanley, D; Kamath, C

    2011-09-13

    wind speed and vertical temperature difference. Ideally, the data assimilation scheme used in the experiments would have been based upon an ensemble Kalman filter (EnKF) that was similar to the ESA method used to diagnose the Mid-Columbia Basin sensitivity patterns in the previous studies. However, the use of an EnKF system at high resolution is impractical because of the very high computational cost. Thus, it was decided to use three-dimensional variational data assimilation, which is less computationally intensive and more economically practical for generating operational forecasts. There are two tasks in the current project effort designed to validate the ESA observational system deployment approach in order to move closer to the overall goal: (1) Perform an Observing System Experiment (OSE) using a data-denial approach, which is the focus of this task and report; and (2) Conduct a set of Observing System Simulation Experiments (OSSE) for the Mid-Columbia Basin region. The results of this task are presented in a separate report. The objective of the OSE task involves validating the ESA-MOOA results from the previous sensitivity studies for the Mid-Columbia Basin by testing the impact of existing meteorological tower measurements on the 0- to 6-hour ahead 80-m wind forecasts at the target locations. The testing of the ESA-MOOA method used a combination of data assimilation techniques and data denial experiments to accomplish the task objective.

  14. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
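Monte Carlo uncertainty propagation with Latin hypercube sampling, as used for each flux density model, can be sketched as follows. The flux function and input ranges here are illustrative placeholders, not the Regulatory Guide 3.64 model:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, dim):
    # One stratified sample per equal-probability bin along each dimension,
    # with independently shuffled bin orderings between dimensions.
    u = (rng.uniform(size=(n, dim)) + np.arange(n)[:, None]) / n
    for j in range(dim):
        rng.shuffle(u[:, j])
    return u

def radon_flux(x):
    # Hypothetical stand-in: flux grows with emanation coefficient E and
    # inventory I, and decays with cover thickness d.
    E, I, d = x[:, 0], x[:, 1], x[:, 2]
    return E * I * np.exp(-2.0 * d)

# Propagate input uncertainty: sample, evaluate, summarize.
n = 10000
u = latin_hypercube(n, 3)
x = np.column_stack([0.1 + 0.3 * u[:, 0],    # emanation coefficient
                     50.0 + 100.0 * u[:, 1],  # inventory (arbitrary units)
                     0.5 + 1.0 * u[:, 2]])    # cover thickness (m)
flux = radon_flux(x)
mean = flux.mean()
p5, p95 = np.percentile(flux, [5, 95])
```

The stratification guarantees every marginal bin is sampled exactly once, which stabilizes the estimated mean and percentiles compared with plain random sampling at the same cost.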

  15. Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    Jianbin Guo

    2014-07-01

    Full Text Available Global sensitivity is used to quantify the influence of uncertain model inputs on the output variability of static models in general. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned. The reason is that the static sensitivity may not reflect the complete sensitivity over the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol’ indices are employed as the time-dependent global sensitivity measure since they provide accurate information on the selected uncertain inputs. In order to compute Sobol’ indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Then Sobol’ indices can be calculated analytically as a postprocessing of the time-dependent PCE coefficients with almost no additional cost. A test case is used to show how to conduct the proposed method, then this approach is applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.
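The key step, reading Sobol’ indices directly off the PCE coefficients, can be sketched for a static two-input toy model. The model, basis degree, and sample size below are illustrative; the paper's MLS fitting of time-dependent coefficients is not reproduced:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

def degeneracy_model(x):
    # Hypothetical static stand-in with an interaction between the inputs.
    return x[:, 0] + 0.5 * x[:, 1] + x[:, 0] * x[:, 1]

def phi(k, x):
    # Legendre polynomial of degree k, orthonormal for U(-1, 1) inputs.
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2 * k + 1) * legendre.legval(x, c)

# Tensor-product basis up to total degree 2, fitted by least squares.
idx = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
X = rng.uniform(-1.0, 1.0, size=(400, 2))
A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in idx])
coef, *_ = np.linalg.lstsq(A, degeneracy_model(X), rcond=None)

# With an orthonormal basis, Sobol' indices are ratios of squared coefficients.
var = sum(c**2 for (i, j), c in zip(idx, coef) if (i, j) != (0, 0))
S1 = sum(c**2 for (i, j), c in zip(idx, coef) if i > 0 and j == 0) / var
S2 = sum(c**2 for (i, j), c in zip(idx, coef) if j > 0 and i == 0) / var
```

Because the indices are algebraic functions of the coefficients, refitting the coefficients at each time point (as the MLS scheme does) gives the whole time history of Sobol’ indices essentially for free.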

  16. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, in which the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)

  17. Sensitivity of the Boundary Plasma to the Plasma-Material Interface

    International Nuclear Information System (INIS)

    Canik, John M.; Tang, X.-Z.

    2017-01-01

    While the sensitivity of the scrape-off layer and divertor plasma to the highly uncertain cross-field transport assumptions is widely recognized, the plasma is also sensitive to the details of the plasma-material interface (PMI) models used as part of comprehensive predictive simulations. In this paper, these PMI sensitivities are studied by varying the relevant sub-models within the SOLPS plasma transport code. Two aspects are explored: the sheath model used as a boundary condition in SOLPS, and fast particle reflection rates for ions impinging on a material surface. Both of these have been the subject of recent high-fidelity simulation efforts aimed at improving the understanding and prediction of these phenomena. It is found that in both cases quantitative changes to the plasma solution result from modification of the PMI model, with a larger impact in the case of the reflection coefficient variation. This indicates the necessity of better quantifying the uncertainties within the PMI models themselves and of performing thorough sensitivity analyses to propagate these throughout the boundary model; this is especially important for validation against experiment, where the error in the simulation is a critical and less-studied piece of the code-experiment comparison.

  18. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    2008-02-01

    Full Text Available Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  19. Remote sensing of mineral dust aerosol using AERI during the UAE2: A modeling and sensitivity study

    Science.gov (United States)

    Hansell, R. A.; Liou, K. N.; Ou, S. C.; Tsay, S. C.; Ji, Q.; Reid, J. S.

    2008-09-01

    Numerical simulations and sensitivity studies have been performed to assess the potential for using brightness temperature spectra from a ground-based Atmospheric Emitted Radiance Interferometer (AERI) during the United Arab Emirates Unified Aerosol Experiment (UAE2) for detecting/retrieving mineral dust aerosol. A methodology for separating dust from clouds and retrieving the dust IR optical depths was developed by exploiting differences between their spectral absorptive powers in prescribed thermal IR window subbands. Dust microphysical models were constructed using in situ data from the UAE2 and prior field studies while composition was modeled using refractive index data sets for minerals commonly observed around the UAE region including quartz, kaolinite, and calcium carbonate. The T-matrix, finite difference time domain (FDTD), and Lorenz-Mie light scattering programs were employed to calculate the single scattering properties for three dust shapes: oblate spheroids, hexagonal plates, and spheres. We used the Code for High-resolution Accelerated Radiative Transfer with Scattering (CHARTS) radiative transfer program to investigate sensitivity of the modeled AERI spectra to key dust and atmospheric parameters. Sensitivity studies show that characterization of the thermodynamic boundary layer is crucial for accurate AERI dust detection/retrieval. Furthermore, AERI sensitivity to dust optical depth is manifested in the strong subband slope dependence of the window region. Two daytime UAE2 cases were examined to demonstrate the present detection/retrieval technique, and we show that the results compare reasonably well to collocated AERONET Sun photometer/MPLNET micropulse lidar measurements. Finally, sensitivity of the developed methodology to the AERI's estimated MgCdTe detector nonlinearity was evaluated.

  20. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conduct a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Multi-scale Modeling of the Impact Response of a Strain Rate Sensitive High-Manganese Austenitic Steel

    Directory of Open Access Journals (Sweden)

    Orkun Önal

    2014-09-01

    Full Text Available A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress – equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.

  2. Azimuthally sensitive Hanbury Brown-Twiss interferometry measured with the ALICE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Gramling, Johanna Lena

    2011-07-01

    Bose-Einstein correlations of identical pions emitted in high-energy particle collisions provide information about the size of the source region in space-time. If analyzed via HBT interferometry in several directions with respect to the reaction plane, the shape of the source can be extracted. Hence, HBT interferometry provides an excellent tool to probe the characteristics of the quark-gluon plasma possibly created in high-energy heavy-ion collisions. This thesis introduces the main theoretical concepts of particle physics, the quark-gluon plasma and the technique of HBT interferometry. The ALICE experiment at the CERN Large Hadron Collider (LHC) is explained and the first azimuthally integrated results measured in Pb-Pb collisions at √(s_NN) = 2.76 TeV with ALICE are presented. A detailed two-track resolution study leading to a global pair cut for HBT analyses has been performed, and a framework for the event plane determination has been developed. The results from azimuthally sensitive HBT interferometry are compared to theoretical models and previous measurements at lower energies. Oscillations of the transverse radii as a function of the pair emission angle are observed, consistent with a source that is extended out-of-plane.

  3. A reactive transport model for mercury fate in contaminated soil--sensitivity analysis.

    Science.gov (United States)

    Leterme, Bertrand; Jacques, Diederik

    2015-11-01

    We present a sensitivity analysis of a reactive transport model of mercury (Hg) fate in contaminated soil systems. The one-dimensional model, presented in Leterme et al. (2014), couples water flow in variably saturated conditions with Hg physico-chemical reactions. The sensitivity of Hg leaching and volatilisation to parameter uncertainty is examined using the elementary effect method. A test case is built using a hypothetical 1-m depth sandy soil and a 50-year time series of daily precipitation and evapotranspiration. Hg anthropogenic contamination is simulated in the topsoil by separately considering three different sources: cinnabar, non-aqueous phase liquid and aqueous mercuric chloride. The model sensitivity to a set of 13 input parameters is assessed, using three different model outputs (volatilized Hg, leached Hg, Hg still present in the contaminated soil horizon). Results show that dissolved organic matter (DOM) concentration in soil solution and the binding constant to DOM thiol groups are critical parameters, as well as parameters related to Hg sorption to humic and fulvic acids in solid organic matter. Initial Hg concentration is also identified as a sensitive parameter. The sensitivity analysis also brings out non-monotonic model behaviour for certain parameters.
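The elementary effect method used above can be sketched as follows. The three-input response is a hypothetical stand-in for the Hg reactive transport model; a deliberately nonmonotonic term shows how the spread σ of the effects flags the non-monotonic behaviour the authors report:

```python
import numpy as np

rng = np.random.default_rng(3)

def hg_response(x):
    # Hypothetical stand-in: linear in x0, nonmonotonic in x1, weak in x2.
    return 2.0 * x[0] + np.sin(np.pi * x[1]) + 0.05 * x[2]

def elementary_effects(f, dim, r=40, levels=8):
    # Morris trajectories: each one changes every factor once by delta.
    delta = levels / (2.0 * (levels - 1))
    ee = np.empty((r, dim))
    for t in range(r):
        x = rng.integers(0, levels // 2, dim) / (levels - 1.0)  # grid start
        fx = f(x)
        for j in rng.permutation(dim):
            x_new = x.copy()
            x_new[j] += delta
            f_new = f(x_new)
            ee[t, j] = (f_new - fx) / delta
            x, fx = x_new, f_new
    # mu* ranks overall influence; sigma flags nonlinearity or interactions.
    return np.abs(ee).mean(axis=0), ee.std(axis=0)
```

A factor with large mu* and near-zero σ acts linearly, while a large σ (as for x1 here) signals effects that change sign or magnitude across the input domain, exactly the non-monotonic behaviour reported for some of the Hg parameters.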

  4. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
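The meta-modeling step, replacing the CPU-expensive code with a cheap Gaussian-process predictor, can be sketched in plain NumPy. A scalar output is used for brevity; in the paper's setting each wavelet coefficient of the spatial map would be emulated this way, and the function and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_code(x):
    # Stand-in for a CPU-expensive simulator with two uncertain inputs.
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

def rbf(a, b, ell=0.4):
    # Squared-exponential covariance between two point sets.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / ell**2)

# Train on a small design; predictions then cost one matrix-vector product.
X_train = rng.uniform(0.0, 1.0, size=(60, 2))
y_train = expensive_code(X_train)
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def emulator(X_new):
    # GP posterior mean (zero prior mean, near-noise-free observations).
    return rbf(X_new, X_train) @ alpha

X_test = rng.uniform(0.0, 1.0, size=(500, 2))
abs_err = np.abs(emulator(X_test) - expensive_code(X_test))
```

Once the emulator is accurate, the thousands of evaluations needed for variance-based Sobol' estimation are run on it instead of the simulator, which is the source of the "minimal number of numerical model computations" claimed above.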

  5. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  6. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  7. Sensitivity of the ATLAS experiment to discover the decay H→ ττ →ll+4ν of the Standard Model Higgs Boson produced in vector boson fusion

    International Nuclear Information System (INIS)

    Schmitz, Martin

    2011-01-01

    A study of the expected sensitivity of the ATLAS experiment to discover the Standard Model Higgs boson produced via vector boson fusion (VBF) and its decay to H→ ττ→ ll+4ν is presented. The study is based on simulated proton-proton collisions at a centre-of-mass energy of 14 TeV. For the first time the discovery potential is evaluated in the presence of additional proton-proton interactions (pile-up) to the process of interest in a complete and consistent way. Special emphasis is placed on the development of background estimation techniques to extract the main background processes Z→ττ and t anti t production using data. The t anti t background is estimated using a control sample selected with the VBF analysis cuts and the inverted b-jet veto. The dominant background process Z→ττ is estimated using Z→μμ events. Replacing the muons of the Z→μμ event with simulated τ-leptons, Z→ττ events are modelled to high precision. For the replacement of the Z boson decay products a dedicated method based on tracks and calorimeter cells is developed. Without pile-up a discovery potential of 3σ to 3.4σ in the mass range 115 GeV H -1 . In the presence of pile-up the signal sensitivity decreases to 1.7σ to 1.9σ mainly caused by the worse resolution of the reconstructed missing transverse energy.

  8. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.

  9. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by...

  10. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively demonstrate the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over-modeling, in site characterization planning to avoid over-collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures

  11. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach to sensitivity and uncertainty assessment for waste management studies has been implemented. First, general contribution analysis is done through a regular interpretation of inventory and impact...

  12. Projected WIMP Sensitivity of the LUX-ZEPLIN (LZ) Dark Matter Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Akerib, D.S.; et al.

    2018-02-16

    LUX-ZEPLIN (LZ) is a next generation dark matter direct detection experiment that will operate 4850 feet underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota, USA. Using a two-phase xenon detector with an active mass of 7 tonnes, LZ will search primarily for low-energy interactions with Weakly Interacting Massive Particles (WIMPs), which are hypothesized to make up the dark matter in our galactic halo. In this paper, the projected WIMP sensitivity of LZ is presented based on the latest background estimates and simulations of the detector. For a 1000 live day run using a 5.6 tonne fiducial mass, LZ is projected to exclude at 90% confidence level spin-independent WIMP-nucleon cross sections above $1.6 \\times 10^{-48}$ cm$^{2}$ for a 40 $\\mathrm{GeV}/c^{2}$ mass WIMP. Additionally, a $5\\sigma$ discovery potential is projected reaching cross sections below the existing and projected exclusion limits of similar experiments that are currently operating. For spin-dependent WIMP-neutron(-proton) scattering, a sensitivity of $2.7 \\times 10^{-43}$ cm$^{2}$ ($8.1 \\times 10^{-42}$ cm$^{2}$) for a 40 $\\mathrm{GeV}/c^{2}$ mass WIMP is expected. With construction well underway, LZ is on track for underground installation at SURF in 2019 and will start collecting data in 2020.
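
    The scaling behind such projections can be illustrated with a toy background-free counting experiment: with zero observed events, the 90% CL Poisson upper limit is roughly 2.3 signal events, so the cross-section limit falls linearly with exposure. This is only a sketch under that assumption; the function names and the `rate_per_xsec` normalization are illustrative, not the LZ collaboration's profile-likelihood analysis.

    ```python
    import math

    def poisson_90cl_events():
        # Background-free counting experiment, zero events observed:
        # the 90% CL upper limit mu90 solves exp(-mu90) = 0.10.
        return -math.log(0.10)   # ~2.303 events

    def xsec_upper_limit(exposure_tonne_years, efficiency, rate_per_xsec):
        """90% CL cross-section upper limit (arbitrary units).

        rate_per_xsec lumps together target density, halo parameters and the
        recoil spectrum; a real analysis derives it from a detector model."""
        return poisson_90cl_events() / (exposure_tonne_years * efficiency * rate_per_xsec)

    # Doubling the fiducial exposure halves the (background-free) limit.
    base = xsec_upper_limit(5.6 * 1000 / 365.25, 0.5, 1.0)
    doubled = xsec_upper_limit(2 * 5.6 * 1000 / 365.25, 0.5, 1.0)
    ```

    In practice backgrounds, energy thresholds, and systematic uncertainties spoil this linear scaling, which is why the quoted LZ sensitivity rests on full background simulations.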

  13. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
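
    As a rough illustration of the "basis set expansion - Sobol' indices" pipeline, the sketch below uses a toy two-parameter displacement model and a fixed two-mode basis in place of PCA, and evaluates the model directly rather than through a projection pursuit meta-model; all of those substitutions are my assumptions, not the article's setup.

    ```python
    import math, random

    def landslide_series(x1, x2, n=40):
        # toy stand-in for a landslide displacement time series:
        # x1 drives a slow trend, x2 a seasonal oscillation
        return [x1 * t / n + 0.3 * x2 * math.sin(2 * math.pi * t / n) for t in range(n)]

    def modes(series):
        # fixed two-mode "basis set expansion" (a crude substitute for PCA):
        # mode 0 = net trend, mode 1 = amplitude of the detrended oscillation
        n = len(series)
        trend = series[-1] - series[0]
        detrended = [series[t] - trend * t / (n - 1) for t in range(n)]
        osc = 2.0 / n * sum(detrended[t] * math.sin(2 * math.pi * t / n) for t in range(n))
        return (trend, osc)

    def sobol_first_order(mode, param, n=4000, seed=0):
        # pick-freeze Monte Carlo estimator of the first-order Sobol' index
        # of one parameter for one output mode; parameters are U(0, 1)
        rng = random.Random(seed)
        y, y_frozen = [], []
        for _ in range(n):
            a = [rng.random(), rng.random()]
            b = [rng.random(), rng.random()]
            b[param] = a[param]                      # freeze the studied parameter
            y.append(modes(landslide_series(*a))[mode])
            y_frozen.append(modes(landslide_series(*b))[mode])
        m = sum(y) / n
        var = sum((v - m) ** 2 for v in y) / n
        cov = sum((v - m) * (w - m) for v, w in zip(y, y_frozen)) / n
        return cov / var
    ```

    Here the trend mode is dominated by x1 and the oscillation mode by x2, so the per-mode indices cleanly separate the two influences; in the article the modes come from PCA of actual simulations and the indices are computed on a surrogate instead of the long-running model.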

  14. Sensitive analysis and modifications to reflood-related constitutive models of RELAP5

    International Nuclear Information System (INIS)

    Li Dong; Liu Xiaojing; Yang Yanhua

    2014-01-01

    Previous system code calculations reveal that the cladding temperature is underestimated and the quench front appears too early during the reflood process. To find out which parameters have an important effect on the results, sensitivity analysis is performed on the parameters of the constitutive physical models. Based on phenomenological and theoretical analysis, four parameters are selected: the wall-to-vapor film boiling heat transfer coefficient, the wall-to-liquid film boiling heat transfer coefficient, the dry wall interfacial friction coefficient, and the minimum droplet diameter. In order to improve the reflood simulation capability of the RELAP5 code, the film boiling heat transfer model and the dry wall interfacial friction model, which are the models corresponding to those influential parameters, are studied. Modifications have been made and installed into the RELAP5 code. Six tests of FEBA are simulated by RELAP5 to study the predictability of the reflood-related physical models. A dispersed flow film boiling (DFFB) heat transfer model is applied when the void fraction is above 0.9, and a factor is applied to the post-CHF drag coefficient to better fit the experiments. Finally, the six FEBA tests are calculated again so as to assess the modifications. Better results are obtained, which proves the advantage of the modified models. (author)

  15. Optimized Experiment Design for Marine Systems Identification

    DEFF Research Database (Denmark)

    Blanke, M.; Knudsen, Morten

    1999-01-01

    Simulation of maneuvering and design of motion controls for marine systems require non-linear mathematical models, which often have more than one hundred parameters. Model identification is hence an extremely difficult task. This paper discusses experiment design for marine systems identification...... and proposes a sensitivity approach to solve the practical experiment design problem. The applicability of the sensitivity approach is demonstrated on a large non-linear model of surge, sway, roll and yaw of a ship. The use of the method is illustrated for a container ship where both model and full-scale tests...

  16. Meeting the Next Generation Science Standards Through "Rediscovered" Climate Model Experiments

    Science.gov (United States)

    Sohl, L. E.; Chandler, M. A.; Zhou, J.

    2013-12-01

    Since the Educational Global Climate Model (EdGCM) Project made its debut in January 2005, over 150 institutions have employed EdGCM software for a variety of uses ranging from short lab exercises to semester-long and year-long thesis projects. The vast majority of these EdGCM adoptees have been at the undergraduate and graduate levels, with few users at the K-12 level. The K-12 instructors who have worked with EdGCM in professional development settings have commented that, although EdGCM can be used to illustrate a number of the Disciplinary Core Ideas and connects to many of the Common Core State Standards across subjects and grade levels, significant hurdles preclude easy integration of EdGCM into their curricula. Time constraints, a scarcity of curriculum materials, and classroom technology are often mentioned as obstacles in providing experiences to younger grade levels in realistic climate modeling research. Given that the NGSS incorporates student performance expectations relating to Earth System Science, and to climate science and the human dimension in particular, we feel that a streamlined version of EdGCM -- one that eliminates the need to run the climate model on limited computing resources, and provides a more guided climate modeling experience -- would be highly beneficial for the K-12 community. This new tool currently under development, called EzGCM, functions through a browser interface, and presents "rediscovery experiments" that allow students to do their own exploration of model output from published climate experiments, or from sensitivity experiments designed to illustrate how climate models as well as the climate system work. The experiments include background information and sample questions, with more extensive notes for instructors so that the instructors can design their own reflection questions or follow-on activities relating to physical or human impacts, as they choose. 
An added benefit of the EzGCM tool is that, like EdGCM, it helps

  17. Stimulus Sensitivity of a Spiking Neural Network Model

    Science.gov (United States)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet almost critical for a range of biologically relevant parameters.
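
    The paper analyses age-dependent Hawkes processes through their mean-field limit; as a hands-on companion (my own illustrative sketch, not the paper's model), a classical exponential-kernel Hawkes process can be simulated with Ogata's thinning algorithm, and in the sub-critical regime α < β its mean spike rate approaches the stationary value μ/(1 − α/β).

    ```python
    import math, random

    def simulate_hawkes(mu, alpha, beta, T, seed=1):
        """Ogata thinning for a Hawkes process with intensity
        lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i))."""
        rng = random.Random(seed)
        events = []
        t, excite = 0.0, 0.0   # excite = current excitation from past spikes
        while True:
            lam_bar = mu + excite               # valid bound: excitation only decays
            w = rng.expovariate(lam_bar)        # candidate waiting time
            t += w
            if t >= T:
                break
            excite *= math.exp(-beta * w)       # decay excitation to the candidate time
            if rng.random() * lam_bar <= mu + excite:
                events.append(t)                # accepted spike...
                excite += alpha                 # ...boosts future intensity
        return events
    ```

    With μ = 1, α = 0.5, β = 1 the branching ratio is 0.5 and the stationary rate is 2 spikes per unit time, so a run of length 2000 yields about 4000 events; pushing α toward β drives the process toward the critical regime the abstract discusses.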

  18. Sensitivity of open-water ice growth and ice concentration evolution in a coupled atmosphere-ocean-sea ice model

    Science.gov (United States)

    Shi, Xiaoxu; Lohmann, Gerrit

    2017-09-01

    A coupled atmosphere-ocean-sea ice model is applied to investigate to what degree the area-thickness distribution of new ice formed in open water affects the ice and ocean properties. Two sensitivity experiments are performed which modify the horizontal-to-vertical aspect ratio of open-water ice growth. The resulting changes in the Arctic sea-ice concentration strongly affect the surface albedo, the ocean heat release to the atmosphere, and the sea-ice production. The changes are further amplified through a positive feedback mechanism among the Arctic sea ice, the Atlantic Meridional Overturning Circulation (AMOC), and the surface air temperature in the Arctic, as the Fram Strait sea ice import influences the freshwater budget in the North Atlantic Ocean. Anomalies in sea-ice transport lead to changes in sea surface properties of the North Atlantic and the strength of the AMOC. For the Southern Ocean, the most pronounced change is a warming along the Antarctic Circumpolar Current (ACC), owing to the interhemispheric bipolar seesaw linked to AMOC weakening. Another insight of this study lies in the improvement of our climate model. The ocean component FESOM is a newly developed ocean-sea ice model with an unstructured mesh and multi-resolution. We find that the subpolar sea-ice boundary in the Northern Hemisphere can be improved by tuning the process of open-water ice growth, which strongly influences the sea-ice concentration in the marginal ice zone, the North Atlantic circulation, salinity, and the Arctic sea-ice volume. Since the distribution of new ice on open water relies on many uncertain parameters and the knowledge of the detailed processes is currently too crude, it is a challenge to implement the processes realistically into models. Based on our sensitivity experiments, we conclude that there is a pronounced uncertainty related to open-water sea-ice growth which could significantly affect the climate system sensitivity.

  19. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
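
    A brute-force version of the underlying quantity (my sketch, independent of the author's decorrelation-permutation algorithm) is to estimate the first-order index Var(E[Y|X1]) / Var(Y) directly, sampling the remaining variable from its conditional distribution so the correlation structure is respected.

    ```python
    import math, random

    def first_order_index(rho, outer=400, inner=400, seed=0):
        """Var(E[Y|X1]) / Var(Y) for Y = X1 + X2, with (X1, X2) standard
        normal and corr(X1, X2) = rho, by double-loop Monte Carlo."""
        rng = random.Random(seed)
        cond_means, all_y = [], []
        for _ in range(outer):
            x1 = rng.gauss(0.0, 1.0)
            ys = []
            for _ in range(inner):
                # conditional law: X2 | X1 = x1  ~  N(rho * x1, 1 - rho^2)
                x2 = rho * x1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
                ys.append(x1 + x2)
            cond_means.append(sum(ys) / inner)
            all_y.extend(ys)
        m = sum(all_y) / len(all_y)
        var_y = sum((y - m) ** 2 for y in all_y) / len(all_y)
        var_cond = sum((c - m) ** 2 for c in cond_means) / len(cond_means)
        return var_cond / var_y
    ```

    For this linear model the exact index is (1 + ρ)/2, so the correlation visibly shifts importance toward X1; the article's permutations of decorrelation orderings address how such shared variance should be attributed in general models, which this toy deliberately sidesteps.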

  20. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics

  1. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    Science.gov (United States)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

    During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement compared to the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.

  2. Performance Modeling of Mimosa pudica Extract as a Sensitizer for Solar Energy Conversion

    Directory of Open Access Journals (Sweden)

    M. B. Shitta

    2016-01-01

    Full Text Available An organic material is proposed as a sustainable sensitizer and a replacement for the synthetic sensitizer in dye-sensitized solar cell technology. Using the liquid extract from the leaf of a plant called Mimosa pudica (M. pudica) as a sensitizer, the performance characteristics of the extract of M. pudica are investigated. The photo-anode of each of the solar cell samples is passivated with a self-assembled monolayer (SAM) from a set of four materials, including alumina, formic acid, gelatine, and oxidized starch. Three sets of five samples of an M. pudica–based solar cell are produced, with the fifth sample used as the control experiment. Each of the solar cell samples has an active area of 0.3848 cm2. A two-dimensional finite volume method (FVM) is used to model the transport of ions within the monolayer of the solar cell. The performance of the experimentally fabricated solar cells compares qualitatively with the ones obtained from the literature and the simulated solar cells. The highest efficiency of 3% is obtained from the use of the extract as a sensitizer. It is anticipated that the comparison of the performance characteristics with further research on the concentration of M. pudica extract will enhance the development of a reliable and competitive organic solar cell. It is also recommended that further research should be carried out on the concentration of the extract and the electrolyte used in this study for a possible improved performance of the cell.

  3. Probing flavor models with {sup 76}Ge-based experiments on neutrinoless double-β decay

    Energy Technology Data Exchange (ETDEWEB)

    Agostini, Matteo [Technische Universitaet Muenchen, Physik Department and Excellence Cluster Universe, Munich (Germany); Gran Sasso Science Institute (INFN), L' Aquila (Italy); Merle, Alexander [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Munich (Germany); Zuber, Kai [Technische Universitaet Dresden, Institute for Nuclear and Particle Physics, Dresden (Germany)

    2016-04-15

    The physics impact of a staged approach for double-β decay experiments based on {sup 76}Ge is studied. The scenario considered relies on realistic time schedules envisioned by the Gerda and the Majorana collaborations, which are jointly working towards the realization of a future larger scale {sup 76}Ge experiment. Intermediate stages of the experiments are conceived to perform quasi background-free measurements, and different data sets can be reliably combined to maximize the physics outcome. The sensitivity for such a global analysis is presented, with focus on how neutrino flavor models can be probed already with preliminary phases of the experiments. The synergy between theory and experiment yields strong benefits for both sides: the model predictions can be used to sensibly plan the experimental stages, and results from intermediate stages can be used to constrain whole groups of theoretical scenarios. This strategy clearly generates added value to the experimental efforts, while at the same time it allows valuable physics results to be achieved as early as possible. (orig.)

  4. Sensitivity experiments of a regional climate model to the different convective schemes over Central Africa

    Science.gov (United States)

    Armand J, K. M.

    2017-12-01

    In this study, version 4 of the regional climate model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC), and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind, and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, which is a region spanning the tropical rain forest and is characterised by a bimodal rain regime. We found that regardless of the period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of the regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that the use of BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.

  5. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO2 and mixed-oxide (MOX) powder systems. The study examined three PuO2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and to determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected.
The TSUNAMI analysis
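
    The similarity screening described above rests on a correlation coefficient between sensitivity profiles weighted by a cross-section covariance matrix. The sketch below is an illustrative reconstruction of that idea: TSUNAMI-IP's actual c_k works on energy-, nuclide- and reaction-resolved sensitivity profiles, whereas this toy uses plain vectors.

    ```python
    import math

    def ck_similarity(s_app, s_exp, cov):
        """Correlation coefficient between an application and a benchmark:
        c_k = (Sa C Se) / sqrt((Sa C Sa) * (Se C Se)),
        where Sa, Se are sensitivity vectors of k_eff to nuclear data and
        C is the shared cross-section covariance matrix."""
        def quad(u, v):
            # bilinear form u^T C v, written out for plain nested lists
            return sum(u[i] * cov[i][j] * v[j]
                       for i in range(len(u)) for j in range(len(v)))
        return quad(s_app, s_exp) / math.sqrt(quad(s_app, s_app) * quad(s_exp, s_exp))
    ```

    Values near 1 indicate that the benchmark's k_eff responds to nuclear-data perturbations much like the application's; screening thresholds around c_k ≥ 0.8-0.9 are commonly quoted for judging applicability.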

  6. Comparison of three labeled silica nanoparticles used as tracers in transport experiments in porous media. Part II: Transport experiments and modeling

    International Nuclear Information System (INIS)

    Vitorge, Elsa; Szenknect, Stéphanie; Martins, Jean M.-F.; Barthès, Véronique; Gaudet, Jean-Paul

    2014-01-01

    Three types of labeled silica nanoparticles were used in transport experiments in saturated sand. The goal of this study was to evaluate both the efficiency of labeling techniques (fluorescence (FITC), metal (Ag(0) core) and radioactivity (110mAg(0) core)) in realistic transport conditions and the reactive transport of silica nanocolloids of variable size and concentration in porous media. Experimental results obtained under contrasted experimental conditions revealed that deposition in sand is controlled by nanoparticle size and the ionic strength of the solution. A mathematical model is proposed to quantitatively describe colloid transport. Fluorescent labeling is widely used to study the fate of colloids in soils but was the least sensitive technique. Ag(0) labeling with ICP-MS detection was found to be very sensitive for measuring deposition profiles. Radiolabeled (110mAg(0)) nanoparticles permitted in situ detection. Results obtained with radiolabeled nanoparticles are wholly original and might be used for improving the modeling of deposition and release dynamics. -- Highlights: • Three kinds of labeled nanotracers were used in transport experiments in sand columns. • They were used as surrogates of silica nanoparticles or mineral colloids. • Deposition depending on colloid size and ionic strength was observed and modeled. • Fluorescence labeling had the worst detection limit but was the most convenient. • Radiolabeled nanotracers were detected in situ in a non-destructive way. -- Follow the kinetics of transport, deposition and release of silica nanoparticles with suitably labeled nanoparticles
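
    The deposition behaviour described here follows the familiar clean-bed filtration picture, which is easy to sketch. This is my simplification, first-order irreversible attachment in steady 1-D flow, and ignores the blocking and release dynamics the labeled tracers were used to observe.

    ```python
    import math

    def breakthrough_fraction(k_att, velocity, length):
        # steady 1-D advection with first-order attachment:
        # C(L) / C0 = exp(-k_att * L / v)
        return math.exp(-k_att * length / velocity)

    def deposition_profile(k_att, velocity, length, n_slices=10):
        # retained mass decays exponentially with depth -- the shape that
        # metal-core labeling with ICP-MS detection resolves well
        dx = length / n_slices
        return [math.exp(-k_att * (i + 0.5) * dx / velocity) for i in range(n_slices)]
    ```

    Larger colloids or higher ionic strength raise the attachment rate k_att, steepening the profile and lowering breakthrough, which is the qualitative trend the column experiments quantify.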

  7. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  8. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
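
    The water-through-a-borehole comparison can be mimicked with the standard borehole test function (my nominal parameter values and flat 1% uncertainties are illustrative): propagate parameter standard deviations through first derivatives and check the result against brute-force Monte Carlo, echoing the paper's point that the derivative route needs only a handful of model evaluations rather than many statistical samples.

    ```python
    import math, random

    def borehole(p):
        # standard borehole test function: water flow rate through a borehole
        rw, r, Tu, Hu, Tl, Hl, L, Kw = p
        lnr = math.log(r / rw)
        return (2 * math.pi * Tu * (Hu - Hl)) / (
            lnr * (1 + 2 * L * Tu / (lnr * rw**2 * Kw) + Tu / Tl))

    nominal = [0.10, 25000.0, 89335.0, 1050.0, 89.55, 760.0, 1400.0, 10950.0]
    sigmas  = [0.01 * x for x in nominal]   # illustrative 1% parameter uncertainties

    def dua_sigma(f, p, s, h=1e-6):
        # deterministic propagation: var(Q) ~ sum_i (dQ/dp_i)^2 * sigma_i^2,
        # with derivatives here approximated by forward finite differences
        var = 0.0
        for i in range(len(p)):
            q = list(p)
            q[i] += h * p[i]
            dfdp = (f(q) - f(p)) / (h * p[i])
            var += (dfdp * s[i]) ** 2
        return math.sqrt(var)

    def mc_sigma(f, p, s, n=20000, seed=0):
        # brute-force statistical propagation for comparison
        rng = random.Random(seed)
        vals = [f([rng.gauss(pi, si) for pi, si in zip(p, s)]) for _ in range(n)]
        m = sum(vals) / n
        return math.sqrt(sum((v - m) ** 2 for v in vals) / n)
    ```

    For small input uncertainties the model is nearly linear, so the derivative-based standard deviation matches the Monte Carlo one closely; a production system like GRESS or ADGEN obtains the derivatives exactly from the code itself instead of by finite differences.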

  9. Sensitivity Studies on the Influence of Aerosols on Cloud and Precipitation Development Using WRF Mesoscale Model Simulations

    Science.gov (United States)

    Thompson, G.; Eidhammer, T.; Rasmussen, R.

    2011-12-01

    Using the WRF model in simulations of shallow and deep precipitating cloud systems, we investigated the sensitivity to aerosols initiating as cloud condensation and ice nuclei. A global climatological dataset of sulfates, sea salts, and dust was used as input for a control experiment. Sensitivity experiments with significantly more polluted conditions were conducted to analyze the resulting impacts to cloud and precipitation formation. Simulations were performed using the WRF model with explicit treatment of aerosols added to the Thompson et al (2008) bulk microphysics scheme. The modified scheme achieves droplet formation using pre-tabulated CCN activation tables provided by a parcel model. The ice nucleation is parameterized as a function of dust aerosols as well as homogeneous freezing of deliquesced aerosols. The basic processes of aerosol activation and removal by wet scavenging are considered, but aerosol characteristic size or hygroscopicity does not change due to evaporating droplets. In other words, aerosol processing was ignored. Unique aspects of this study include the usage of one to four kilometer grid spacings and the direct parameterization of ice nucleation from aerosols rather than typical temperature and/or supersaturation relationships alone. Initial results from simulations of a deep winter cloud system and its interaction with significant orography show contrasting sensitivities in regions of warm rain versus mixed liquid and ice conditions. The classical view of higher precipitation amounts in relatively clean maritime clouds with fewer but larger droplets is confirmed for regions dominated by the warm-rain process. However, due to complex interactions with the ice phase and snow riming, the simulations revealed the reverse situation in high terrain areas dominated by snow reaching the surface. Results of other cloud systems will be summarized at the conference.

  10. [A high sensitivity search for mu → e gamma: The MEGA experiment at LAMPF]

    International Nuclear Information System (INIS)

    1990-01-01

    During the past 12-month period the Valparaiso University group has been active on LAMPF experiment 969, known as the MEGA experiment. This experiment is a search for the decay μ -> e γ, a decay which would violate lepton family number conservation and which is strictly forbidden by the standard model of electroweak interactions. Previous searches for this decay mode have set the present-day limit of 4.9 x 10^-11. The MEGA experiment is designed to test the standard model predictions to one part in 10^13

  11. Sensitivity of the SHiP experiment to a light scalar particle mixing with the Higgs

    CERN Document Server

    Lanfranchi, Gaia

    2017-01-01

    This conceptual study shows the ultimate sensitivity of the SHiP experiment for the search for a light scalar particle mixing with the Higgs, for a dataset corresponding to 5 years of SHiP operation at a nominal intensity of 4 x 10^13 protons on target per second. The sensitivity as a function of the length of the vessel and of its distance from the target, as well as of the background contamination, is also studied.

  12. Sensitivity of system stability to model structure

    Science.gov (United States)

    Hosack, G.R.; Li, H.W.; Rossignol, P.A.

    2009-01-01

    A community is stable, and resilient, if the levels of all community variables can return to the original steady state following a perturbation. The stability properties of a community depend on its structure, which is the network of direct effects (interactions) among the variables within the community. These direct effects form feedback cycles (loops) that determine community stability. Although feedback cycles have an intuitive interpretation, identifying how they form the feedback properties of a particular community can be intractable. Furthermore, determining the role that any specific direct effect plays in the stability of a system is even more daunting. Such information, however, would identify important direct effects for targeted experimental and management manipulation even in complex communities for which quantitative information is lacking. We therefore provide a method that determines the sensitivity of community stability to model structure, and identifies the relative role of particular direct effects, indirect effects, and feedback cycles in determining stability. Structural sensitivities summarize the degree to which each direct effect contributes to stabilizing feedback or destabilizing feedback or both. Structural sensitivities prove useful in identifying ecologically important feedback cycles within the community structure and for detecting direct effects that have strong, or weak, influences on community stability. The approach may guide the development of management intervention and research design. We demonstrate its value with two theoretical models and two empirical examples of different levels of complexity. © 2009 Elsevier B.V. All rights reserved.
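
    The flavour of the method can be sketched numerically (a finite-difference stand-in of my own; the paper derives structural sensitivities from the loop/feedback structure itself): perturb each direct effect a_ij of a community matrix and see how the stability margin, the largest real part of the eigenvalues, responds.

    ```python
    import cmath

    def max_re_eig(a):
        # eigenvalues of a 2x2 matrix from the characteristic quadratic
        tr = a[0][0] + a[1][1]
        det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
        disc = cmath.sqrt(tr * tr / 4 - det)
        return max((tr / 2 + disc).real, (tr / 2 - disc).real)

    def structural_sensitivity(a, eps=1e-6):
        """d(max Re lambda)/d a_ij by finite differences: how much each
        direct effect contributes to (de)stabilising feedback.  A positive
        entry means strengthening that link pushes toward instability."""
        base = max_re_eig(a)
        sens = [[0.0, 0.0], [0.0, 0.0]]
        for i in range(2):
            for j in range(2):
                pert = [row[:] for row in a]
                pert[i][j] += eps
                sens[i][j] = (max_re_eig(pert) - base) / eps
        return sens

    # toy predator(2)-prey(1) community: self-damped prey, predator gains from prey
    A = [[-1.0, -0.5],
         [ 0.5, -0.2]]
    ```

    For this oscillatory predator-prey pair the real part of the eigenvalues equals half the trace, so only the self-regulation terms a11 and a22 move the stability margin; the loop-based analysis in the paper makes exactly this kind of attribution transparent for larger communities.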

  13. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    Science.gov (United States)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models, which carry large uncertainties in their inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in the wheat-maize crop rotation planting system. The uncertainty analysis with the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics in the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively low and uniform compared with likelihood functions composed of an individual calibration criterion.
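
The GLUE-with-LHS procedure described in this record can be sketched in a few lines. This is a minimal illustration, not the paper's setup: the one-parameter linear model standing in for RZWQM2, the uniform prior, the exponential likelihood shape, and the 0.01 acceptance threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n):
    # one stratified draw per equal-probability bin of U(0, 1)
    return (np.arange(n) + rng.random(n)) / n

def model(theta, t):
    # hypothetical one-parameter stand-in for the simulator
    return theta * t

t = np.linspace(0, 1, 20)
obs = model(1.5, t) + rng.normal(0, 0.05, t.size)   # synthetic observations

theta = 3.0 * latin_hypercube(500)                  # LHS over an assumed U(0, 3) prior
sims = np.array([model(th, t) for th in theta])
sse = ((sims - obs) ** 2).sum(axis=1)               # error per parameter sample
like = np.exp(-sse / (2.0 * sse.min()))             # informal GLUE likelihood
behavioural = like > 0.01                           # acceptance threshold (assumed)
est = np.average(theta[behavioural], weights=like[behavioural])
```

The behavioural (accepted) parameter sets bracket the true value and the likelihood-weighted mean recovers it; prediction bounds would come from the weighted quantiles of `sims[behavioural]`.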

  14. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. As a practical application, the method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
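
The covariance term that correlated inputs contribute to the response variance can be made concrete with a toy linear model. The coefficients, standard deviations and correlation below are arbitrary illustrative choices; the paper's analytic method handles more general models.

```python
import numpy as np

# Toy model y = a1*x1 + a2*x2 with correlated Gaussian inputs (assumed values).
a1, a2 = 2.0, -1.0
s1, s2, rho = 1.0, 0.5, 0.6

# Analytic variance: the two independent terms plus a covariance term.
var_analytic = a1**2 * s1**2 + a2**2 * s2**2 + 2 * a1 * a2 * rho * s1 * s2

# Monte Carlo check with correlated samples.
rng = np.random.default_rng(0)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
y = a1 * x[:, 0] + a2 * x[:, 1]
var_mc = y.var()
```

Here the covariance term is negative (a1·a2 < 0, ρ > 0), so treating the inputs as independent would overestimate the output variance by exactly that term.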

  15. SOX sensitivity study

    Energy Technology Data Exchange (ETDEWEB)

    Martyn, Johann [Johannes Gutenberg-Universitaet, Mainz (Germany); Collaboration: BOREXINO-Collaboration

    2016-07-01

    To this day, most experimental results on neutrino oscillations can be explained within the standard three-neutrino model. There are, however, a few experiments that show anomalous behaviour at very short baselines. These anomalies can hypothetically be explained by the existence of one or more additional light neutrino states that do not take part in weak interactions and are thus called sterile. Although the anomalies only hint that such sterile neutrinos could exist, the prospect of physics beyond the standard model is a major motivation to investigate neutrino oscillations in new very-short-baseline experiments. The SOX (Short distance Oscillations in BoreXino) experiment will use the Borexino detector and a {sup 144}Ce source to search for sterile neutrinos via the occurrence of an oscillation pattern at a baseline of several meters. This talk examines the impact of the Borexino detector systematics on the experimental sensitivity of SOX.

  16. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described, and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed.

  17. The Coda of the Transient Response in a Sensitive Cochlea: A Computational Modeling Study.

    Directory of Open Access Journals (Sweden)

    Yizeng Li

    2016-07-01

    Full Text Available In a sensitive cochlea, the basilar membrane response to transient excitation of any kind - normal acoustic or artificial intracochlear excitation - consists of not only a primary impulse but also a coda of delayed secondary responses with varying amplitudes but similar spectral content around the characteristic frequency of the measurement location. The coda, sometimes referred to as echoes or ringing, has been described as a form of local, short-term memory which may influence the ability of the auditory system to detect gaps in an acoustic stimulus such as speech. Depending on the individual cochlea, the temporal gap between the primary impulse and the following coda ranges from once to thrice the group delay of the primary impulse (the group delay of the primary impulse is on the order of a few hundred microseconds). The coda is physiologically vulnerable, disappearing when the cochlea is compromised even slightly. The multicomponent sensitive response is not yet completely understood. We use a physiologically based mathematical model to investigate (i) the generation of the primary impulse response and the dependence of the group delay on the various stimulation methods, and (ii) the effect of spatial perturbations in the properties of mechanically sensitive ion channels on the generation and separation of delayed secondary responses. The model suggests that the presence of the secondary responses depends on the wavenumber content of a perturbation and the activity level of the cochlea. In addition, the model shows that the varying temporal gaps between adjacent coda seen in experiments depend on the individual profiles of perturbations. Implications for non-invasive cochlear diagnosis are also discussed.

  18. Modeling prescribed burning experiments and assessing the fire impacts on local to regional air quality

    Science.gov (United States)

    Zhou, L.; Baker, K. R.; Napelenok, S. L.; Elleman, R. A.; Urbanski, S. P.

    2016-12-01

    Biomass burning, including wildfires and prescribed burns, strongly impacts the global carbon cycle and is of increasing concern due to its potential impacts on ambient air quality. This modelling study focuses on the evolution of carbonaceous compounds during a prescribed burning experiment and assesses the impacts of burning on local to regional air quality. The Community Multiscale Air Quality (CMAQ) model is used to conduct 4 and 2 km grid resolution simulations of prescribed burning experiments in southeast Washington state and western Idaho in summer 2013. Ground and airborne measurements from the field experiment are used to evaluate the model's performance in capturing surface and aloft impacts from the burning events. Phase partitioning of organic compounds in the plume is studied, as it is a crucial step towards understanding the fate of carbonaceous compounds. Sensitivity analyses of ambient concentrations and deposition to emissions are conducted for organic carbon, elemental carbon and ozone to estimate the impacts of fire on air quality.

  19. Illustrating sensitivity in environmental fate models using partitioning maps - application to selected contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, T.; Wania, F. [Univ. of Toronto at Scarborough - DPES, Toronto (Canada)

    2004-09-15

    Generic environmental multimedia fate models are important tools in the assessment of the impact of organic pollutants. Because of limited possibilities to evaluate generic models by comparison with measured data, and the increasing regulatory use of such models, uncertainties of model input and output are of considerable concern. This has led to a demand for sensitivity and uncertainty analyses of the outputs of environmental fate models. Usually, variations in model predictions of the environmental fate of organic contaminants are analyzed for only one or at most a few selected chemicals, even though parameter sensitivity and contribution to uncertainty differ widely between chemicals. We recently presented a graphical method that allows for the comprehensive investigation of model sensitivity and uncertainty for all neutral organic chemicals simultaneously. This is achieved by defining a two-dimensional hypothetical ''chemical space'' as a function of the equilibrium partition coefficients between air, water, and octanol (K{sub OW}, K{sub AW}, K{sub OA}), and plotting the sensitivity and/or uncertainty of a specific model result to each input parameter as a function of this chemical space. Here we show how such sensitivity maps can be used to quickly identify the variables with the highest influence on the environmental fate of selected chlorobenzenes, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), hexachlorocyclohexanes (HCHs) and brominated flame retardants (BFRs).
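
A partitioning map of the kind described can be sketched numerically: grid the chemical space in log K{sub AW} and log K{sub OA} and compute, for each cell, the equilibrium mass fraction in each phase. The phase volumes and the fugacity-capacity-style weights below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Grid the two-dimensional "chemical space" of partition coefficients.
log_kaw = np.linspace(-6, 2, 81)     # air-water
log_koa = np.linspace(2, 12, 101)    # octanol-air
KAW, KOA = np.meshgrid(10.0**log_kaw, 10.0**log_koa)

# Assumed phase volumes of a hypothetical unit environment (illustrative only).
v_air, v_water, v_oct = 1e3, 1.0, 1e-4

# Fugacity-capacity-style weights: capacity of each phase relative to air.
cap_air = v_air * 1.0
cap_water = v_water / KAW
cap_oct = v_oct * KOA
total = cap_air + cap_water + cap_oct

# Equilibrium mass fraction in air for every chemical in the space at once.
frac_air = cap_air / total
```

Volatile, non-sorbing chemicals (high K{sub AW}, low K{sub OA}) end up almost entirely in air, while hydrophilic or strongly sorbing ones do not; plotting `frac_air` over the grid reproduces the map-style view the abstract describes.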

  20. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
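
The uncertainty-analysis step, propagating input distributions through the model and inspecting both the expected value and the low-probability, high-consequence tail, can be illustrated with a stylised stand-in for the farmland-conversion ABM. The conversion rule and the input distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Assumed input distributions for a toy land-conversion rule.
price = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # land-price factor
pressure = rng.beta(2, 5, size=n)                    # development pressure
converted = 100 * pressure * price                   # hectares converted (toy)

mean = converted.mean()                  # expected outcome, to validate against data
p99 = np.quantile(converted, 0.99)       # low-probability, high-consequence tail
```

The distribution of `converted` carries both pieces of information the abstract emphasises: the mean for validation against independent data, and the extreme quantiles that matter for robust policy design.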

  1. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  2. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    OpenAIRE

    Angelen, J. H.; Lenaerts, J. T. M.; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.; Broeke, M. R.; Meijgaard, E.; Smeets, C. J. P. P.

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6%) at the K-transect (west Greenland) for the period 2004–2009 is...

  3. The 'Model Omitron' proposed experiment

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-05-01

    The Model Omitron is a compact tokamak experiment designed by the Fusion Engineering Unit of ENEA and the CITIF CONSORTIUM. Building Model Omitron would allow for full testing of Omitron engineering, and partial testing of Omitron physics, at about 1/20 of the cost that has been estimated for the larger parent machine. In particular, due to the unusually large ohmic power densities (up to 100 times the nominal value in the Frascati FTU experiment), in Model Omitron the radial energy flux reaches values comparable to or higher than those envisaged for the larger ignition experiments Omitron, Ignitor and ITER. Consequently, conditions are expected to occur at the plasma border in the scrape-off layer of Model Omitron that are representative of the quoted larger experiments. Moreover, since all this will occur under ohmic heating alone, one will hopefully be able to derive an energy transport model for the ohmic heating regime that is valid over a range of plasma parameters (in particular, of the temperature parameter) wider than was possible before. Finally, by reducing the plasma current and/or the toroidal field down to, say, 1/3 or 1/4 of the nominal values, additional topics can be tackled in the Model Omitron experiment, such as: large safety-factor configurations (of interest for improving confinement), large aspect-ratio configurations (of interest for the investigation of advanced concepts in tokamaks), high beta (with RF heating, also of interest for the investigation of advanced concepts in tokamaks), and long pulse discharges (of interest for demonstrating stationary conditions in the current profile).

  4. Influence of Ethnic-Related Diversity Experiences on Intercultural Sensitivity of Students at a Public University in Malaysia

    Science.gov (United States)

    Tamam, Ezhar; Abdullah, Ain Nadzimah

    2012-01-01

    In this study, the authors examine the influence of ethnic-related diversity experiences on intercultural sensitivity among Malaysian students at a multiethnic, multicultural and multilingual Malaysian public university. Results reveal a significant differential level of ethnic-related diversity experiences (but not at the level of intercultural…

  5. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
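
The combination of principal components analysis with an ANOVA-style decomposition can be sketched for a toy dynamic model evaluated on a full factorial design. The model, the design, and the truncation to two components are assumptions for illustration; the generalised index computed at the end is a variance-weighted sum of per-component main-effect shares, in the spirit of the indices the abstract proposes.

```python
import numpy as np

# Toy dynamic model on a full factorial design of two parameters (assumed).
t = np.linspace(0, 1, 50)
levels = np.linspace(0.5, 1.5, 4)
design = [(p1, p2) for p1 in levels for p2 in levels]
Y = np.array([p1 * np.sin(2 * np.pi * t) + p2 * t for p1, p2 in design])

# Expand the dynamics onto principal components of the centred outputs.
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U * S                       # run-by-component scores
expl = S**2 / (S**2).sum()           # variance explained per component

def main_effect_share(factor, score):
    # ANOVA main-effect share of one factor for one component score.
    grand = score.mean()
    ss_main = sum(((score[factor == l].mean() - grand) ** 2) * (factor == l).sum()
                  for l in np.unique(factor))
    return ss_main / ((score - grand) ** 2).sum()

p1 = np.array([d[0] for d in design])
p2 = np.array([d[1] for d in design])

# Generalised index: variance-weighted main-effect shares over the components.
gsi1 = sum(expl[k] * main_effect_share(p1, scores[:, k]) for k in range(2))
gsi2 = sum(expl[k] * main_effect_share(p2, scores[:, k]) for k in range(2))
```

Because the toy model is additive and the design is balanced, the two generalised indices sum to one; interactions in a real crop model would leave a remainder.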

  6. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  7. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nicholas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Mueller, Donald E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Powers, Jeffrey J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-08-31

    Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2 ⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform design of FLiBe-based experiments using the salt from MSRE.

  8. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    International Nuclear Information System (INIS)

    Brown, Nicholas R.; Mueller, Donald E.; Patton, Bruce W.; Powers, Jeffrey J.

    2016-01-01

    Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2 ⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform design of FLiBe-based experiments using the salt from MSRE.

  9. Parametric sensitivity of a CFD model concerning the hydrodynamics of trickle-bed reactor (TBR)

    Directory of Open Access Journals (Sweden)

    Janecki Daniel

    2016-03-01

    Full Text Available The aim of the present study was to investigate the sensitivity of a multiphase Eulerian CFD model with respect to the relations defining drag forces between phases. The mean relative error as well as the standard deviation of experimental and computed values of pressure gradient and average liquid holdup were used as validation criteria for the model. The comparative basis for the simulations was our own database obtained in experiments carried out in a TBR operating with co-current downward gas and liquid flow. Estimated errors showed that the classical equations of Attou et al. (1999), defining the friction factors Fjk, approximate the experimental values of the hydrodynamic parameters with the best agreement. Taking this into account, one can recommend applying the chosen equations in the momentum balances of the TBR.

  10. Sensitivity of wildlife habitat models to uncertainties in GIS data

    Science.gov (United States)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  11. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  12. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
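
The point about OAT versus variance-based measures is easy to demonstrate: for y = x1·x2 with zero-mean inputs, one-factor-at-a-time moves from the nominal point (0, 0) show no effect at all, while pick-freeze estimators of the Sobol' indices reveal a pure interaction (first-order indices near 0, total-effect indices near 1). A minimal sketch, with the model and nominal point chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2):
    return x1 * x2  # purely interactive toy model

# OAT from the nominal point (0, 0): each single-factor move shows nothing.
base = model(0.0, 0.0)
oat_x1 = model(1.0, 0.0) - base
oat_x2 = model(0.0, 1.0) - base

# Variance-based indices via pick-freeze / total-effect estimators.
n = 200_000
a1, a2 = rng.standard_normal(n), rng.standard_normal(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)
y = model(a1, a2)
s1 = np.cov(y, model(a1, b2))[0, 1] / y.var()           # first-order index ~ 0
t1 = ((y - model(b1, a2)) ** 2).mean() / (2 * y.var())  # total index ~ 1
```

OAT reports both factors as inert; the variance-based total index correctly attributes all of the output variance to the x1-x2 interaction, which is exactly the linearity caveat the abstract raises.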

  13. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  14. Cough reflex sensitivity is increased in the guinea pig model of allergic rhinitis.

    Science.gov (United States)

    Brozmanova, M; Plevkova, J; Tatar, M; Kollarik, M

    2008-12-01

    Increased cough reflex sensitivity is found in patients with allergic rhinitis and may contribute to cough caused by rhinitis. We have reported that cough to citric acid is enhanced in the guinea pig model of allergic rhinitis. Here we address the hypothesis that the cough reflex sensitivity is increased in this model. The data from our previous studies were analyzed for the cough reflex sensitivity. The allergic inflammation in the nose was induced by repeated intranasal instillations of ovalbumin in the ovalbumin-sensitized guinea pigs. Cough was induced by inhalation of doubling concentrations of citric acid (0.05-1.6 M). Cough threshold was defined as the lowest concentration of citric acid causing two coughs (C2, expressed as geometric mean [95% confidence interval]). We found that the cough threshold was reduced in animals with allergic rhinitis. C2 was 0.5 M [0.36-0.71 M] and 0.15 M [0.1-0.23 M] prior to and after repeated intranasal instillations of ovalbumin, respectively, indicating increased cough reflex sensitivity. C2 was similarly reduced in animals with allergic rhinitis treated orally with vehicle (0.57 M [0.28-1.1 M] vs. 0.09 M [0.04-0.2 M]). We conclude that cough reflex sensitivity is increased in the guinea pig model of allergic rhinitis. Our results suggest that guinea pig is a suitable model for mechanistic studies of increased cough reflex sensitivity in rhinitis.

  15. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  16. A position sensitive silicon detector for AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy)

    CERN Multimedia

    Gligorova, A

    2014-01-01

    The AEgIS experiment (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is located at the Antiproton Decelerator (AD) at CERN and studies antimatter. The main goal of the AEgIS experiment is to carry out the first measurement of the gravitational acceleration for antimatter in Earth’s gravitational field to a 1% relative precision. Such a measurement would test the Weak Equivalence Principle (WEP) of Einstein’s General Relativity. The gravitational acceleration for antihydrogen will be determined using a set of gravity measurement gratings (Moiré deflectometer) and a position sensitive detector. The vertical shift due to gravity of the falling antihydrogen atoms will be detected with a silicon strip detector, where the annihilation of antihydrogen will take place. This poster presents part of the development process of this detector.

  17. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum level models for the so-called equation of state (constitutive model for the spherical part of the Cauchy tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  18. Sensitivity analysis of complex models: Coping with dynamic and static inputs

    International Nuclear Information System (INIS)

    Anstett-Collin, F.; Goffart, J.; Mara, T.; Denis-Vidal, L.

    2015-01-01

    In this paper, we address the issue of conducting a sensitivity analysis of complex models with both static and dynamic uncertain inputs. While several approaches have been proposed to compute the sensitivity indices of the static inputs (i.e. parameters), that of the dynamic inputs (i.e. stochastic fields) has rarely been addressed. For this purpose, we first treat each dynamic input as a Gaussian process. Then, the truncated Karhunen–Loève expansion of each dynamic input is performed. Such an expansion allows one to generate independent Gaussian processes from a finite number of independent random variables. Given that a dynamic input is represented by a finite number of random variables, its variance-based sensitivity index is defined by the sensitivity index of this group of variables. In addition, an efficient sampling-based strategy is described to estimate the first-order indices of all the input factors by using only two input samples. The approach is applied to a building energy model, in order to assess the impact of the uncertainties of the material properties (static inputs) and the weather data (dynamic inputs) on the energy performance of a real low energy consumption house. - Highlights: • Sensitivity analysis of models with uncertain static and dynamic inputs is performed. • Karhunen–Loève (KL) decomposition of the spatio-temporal inputs is performed. • The influence of the dynamic inputs is studied through the modes of the KL expansion. • The proposed approach is applied to a building energy model. • Impact of weather data and material properties on performance of a real house is given
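
    The expansion step can be sketched numerically (a minimal illustration with an assumed squared-exponential covariance and length scale, not the authors' building-energy code): the covariance matrix of the process on a grid is diagonalized, the leading eigenpairs are kept, and each realization of the dynamic input is then driven by a small vector of independent standard-normal variables.

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of a Gaussian process on a time grid.
# The covariance model and length scale below are illustrative assumptions.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
cov = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.2) ** 2)  # squared-exponential

eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort modes by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 5                                        # number of retained modes
xi = rng.standard_normal(k)                  # independent N(0, 1) coefficients
sample = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * xi)   # one process realization

explained = eigvals[:k].sum() / eigvals.sum()            # variance captured
print(sample.shape, round(float(explained), 4))
```

    In the variance-based analysis, the k coefficients xi are then treated exactly like the scalar static parameters, and the sensitivity index of the dynamic input is the index of that group of variables.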

  19. Sensitivity of subject-specific models to errors in musculo-skeletal geometry.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2012-09-21

    Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in musculo-skeletal geometry on subject-specific model results. We performed an extensive sensitivity analysis to quantify the effect of the perturbation of origin, insertion and via points of each of the 56 musculo-tendon parts contained in the model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by only the perturbed musculo-tendon parts and by all the remaining musculo-tendon parts, respectively, during a simulated gait cycle. Results indicated that, for each musculo-tendon part, only two points show a significant sensitivity: its origin, or pseudo-origin, point and its insertion, or pseudo-insertion, point. The most sensitive points belong to those musculo-tendon parts that act as prime movers in the walking movement (insertion point of the Achilles Tendon: LSI=15.56%, OSI=7.17%; origin points of the Rectus Femoris: LSI=13.89%, OSI=2.44%) and as hip stabilizers (insertion points of the Gluteus Medius Anterior: LSI=17.92%, OSI=2.79%; insertion point of the Gluteus Minimus: LSI=21.71%, OSI=2.41%). The proposed priority list provides quantitative information to improve the predictive accuracy of subject-specific musculo-skeletal models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Drought resilience across ecologically dominant species: An experiment-model integration approach

    Science.gov (United States)

    Felton, A. J.; Warren, J.; Ricciuto, D. M.; Smith, M. D.

    2017-12-01

    Poorly understood are the mechanisms contributing to variability in ecosystem recovery following drought. Grasslands of the central U.S. are ecologically and economically important ecosystems, yet are also highly sensitive to drought. Although characteristics of these ecosystems change across gradients of temperature and precipitation, a consistent feature among these systems is the presence of highly abundant, dominant grass species that control biomass production. As a result, the incorporation of these species' traits into terrestrial biosphere models may constrain predictions amid increases in climatic variability. Here we report the results of a modeling-experiment (MODEX) research approach. We investigated the physiological, morphological and growth responses of the dominant grass species from each of the four major grasslands of the central U.S. (ranging from tallgrass prairie to desert grassland) following severe drought. Despite significant differences in baseline values, full recovery in leaf physiological function was evident across species, and was consistently driven by the production of new leaves. Further, recovery in whole-plant carbon uptake tended to be driven by shifts in allocation from belowground to aboveground structures. However, there was clear variability among species in the magnitude of this dynamic as well as in the relative allocation to stem versus leaf production. As a result, all species harbored the physiological capacity to recover from drought, yet we posit that variability in the recovery of whole-plant carbon uptake is more strongly driven by variability in the sensitivity of species' morphology to soil moisture increases. The next step of this project will be to incorporate these and other existing data on these species and ecosystems into the Community Land Model in an effort to test the sensitivity of this model to these data.

  1. 3-D thermo-mechanical laboratory modeling of plate-tectonics: modeling scheme, technique and first experiments

    Directory of Open Access Journals (Sweden)

    D. Boutelier

    2011-05-01

    Full Text Available We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subductions, arc-continent or continental collisions. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is submitted to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which allows maintaining an unobstructed view of the model surface and the use of a high resolution optical strain monitoring technique (Particle Imaging Velocimetry. Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.

  2. Stability and Sensitive Analysis of a Model with Delay Quorum Sensing

    Directory of Open Access Journals (Sweden)

    Zhonghua Zhang

    2015-01-01

    Full Text Available This paper formulates a delay model characterizing the competition between bacteria and immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the growth rate of bacteria is the most sensitive parameter of the threshold parameter R0 and should be targeted in the controlling strategies.

  3. Spectral envelope sensitivity of musical instrument sounds.

    Science.gov (United States)

    Gunawan, David; Sen, D

    2008-01-01

    It is well known that the spectral envelope is a perceptually salient attribute in musical instrument timbre perception. While a number of studies have explored discrimination thresholds for changes to the spectral envelope, the question of how sensitivity varies as a function of center frequency and bandwidth for musical instruments has yet to be addressed. In this paper a two-alternative forced-choice experiment was conducted to observe perceptual sensitivity to modifications made on trumpet, clarinet and viola sounds. The experiment involved attenuating 14 frequency bands for each instrument in order to determine discrimination thresholds as a function of center frequency and bandwidth. The results indicate that perceptual sensitivity is governed by the first few harmonics and sensitivity does not improve when extending the bandwidth any higher. However, sensitivity was found to decrease if changes were made only to the higher frequencies and continued to decrease as the distorted bandwidth was widened. The results are analyzed and discussed with respect to two other spectral envelope discrimination studies in the literature as well as what is predicted from a psychoacoustic model.
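
    The stimulus manipulation can be illustrated with a generic harmonic tone (all parameter values below are assumptions for illustration, not the study's stimuli): the partials of a synthetic tone falling inside one frequency band are attenuated by a fixed amount, producing the kind of spectral-envelope change listeners had to detect.

```python
import numpy as np

# Synthetic harmonic tone with a simple 1/k spectral envelope; partials inside
# a chosen band are attenuated by a fixed number of dB (all values illustrative).
sr = 16_000
t = np.arange(0, 0.5, 1.0 / sr)
f0 = 262.0                                      # fundamental, roughly C4

def tone(attenuate_band=None, atten_db=-12.0):
    y = np.zeros_like(t)
    for k in range(1, 21):                      # 20 harmonics
        amp = 1.0 / k
        fk = k * f0
        if attenuate_band and attenuate_band[0] <= fk <= attenuate_band[1]:
            amp *= 10.0 ** (atten_db / 20.0)    # dB attenuation inside the band
        y += amp * np.sin(2.0 * np.pi * fk * t)
    return y

ref = tone()
mod = tone(attenuate_band=(500.0, 1000.0))      # attenuates harmonics 2 and 3
print(round(float(np.sum(ref ** 2) - np.sum(mod ** 2)), 2))  # energy removed
```

    Sweeping the band's center frequency and width over such pairs of reference and modified tones is one way to map discrimination thresholds of the kind reported above.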

  4. Decisions reduce sensitivity to subsequent information.

    Science.gov (United States)

    Bronfman, Zohar Z; Brezis, Noam; Moran, Rani; Tsetsos, Konstantinos; Donner, Tobias; Usher, Marius

    2015-07-07

    Behavioural studies over half a century indicate that making categorical choices alters beliefs about the state of the world. People seem biased to confirm previous choices, and to suppress contradicting information. These choice-dependent biases imply a fundamental bound of human rationality. However, it remains unclear whether these effects extend to lower-level decisions, and little is known about the computational mechanisms underlying them. Building on the framework of sequential-sampling models of decision-making, we developed novel psychophysical protocols that enable us to dissect quantitatively how choices affect the way decision-makers accumulate additional noisy evidence. We find robust choice-induced biases in the accumulation of abstract numerical (experiment 1) and low-level perceptual (experiment 2) evidence. These biases degrade estimates of the mean value of the numerical sequence (experiment 1) and reduce the likelihood of revising decisions (experiment 2). Computational modelling reveals that choices trigger a reduction of sensitivity to subsequent evidence via multiplicative gain modulation, rather than shifting the decision variable towards the chosen alternative in an additive fashion. Our results thus show that categorical choices alter the evidence accumulation mechanism itself, rather than just its outcome, rendering the decision-maker less sensitive to new information. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
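
    The modelling conclusion can be caricatured in a few lines (the gain value and evidence statistics are assumptions, not fitted values from the paper): after a commitment, each subsequent evidence sample enters the accumulator scaled by a gain below one, rather than the decision variable being shifted additively toward the chosen option.

```python
import numpy as np

# Evidence accumulation with multiplicative gain reduction after a choice.
rng = np.random.default_rng(7)

def accumulate(evidence, choice_at, gain=0.5):
    dv = 0.0
    for i, e in enumerate(evidence):
        dv += e if i < choice_at else gain * e   # reduced post-choice sensitivity
    return dv

evidence = rng.normal(loc=0.2, scale=1.0, size=20)
no_choice = accumulate(evidence, choice_at=20)   # never commits mid-stream
committed = accumulate(evidence, choice_at=10)   # commits after 10 samples
print(round(float(no_choice), 2), round(float(committed), 2))
```

    Under this multiplicative account, post-choice evidence still moves the decision variable in the correct direction but with diminished weight, which is what makes later revision less likely than an additive shift would predict.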

  5. Oral sensitization to food proteins: A Brown Norway rat model

    NARCIS (Netherlands)

    Knippels, L.M.J.; Penninks, A.H.; Spanhaak, S.; Houben, G.F.

    1998-01-01

    Background: Although several in vivo antigenicity assays using parenteral immunization are operational, no adequate enteral sensitization models are available to study food allergy and allergenicity of food proteins. Objective: This paper describes the development of an enteral model for food

  6. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations to variations of the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating the remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's LPτ sequences, an adaptive Monte Carlo algorithm, the plain Monte Carlo algorithm, as well as the eFAST and Sobol sensitivity approaches, both implemented in the SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by the variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
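
    A minimal pick-freeze estimator of Sobol's first-order indices can be sketched as follows (plain Monte Carlo on a toy linear model, not the UNI-DEM code or the MCA-MSS algorithm). For f = 4·x1 + 2·x2 with independent U(0,1) inputs, the analytical indices are 0.8 and 0.2.

```python
import numpy as np

# Pick-freeze Monte Carlo estimate of first-order Sobol' indices.
rng = np.random.default_rng(2)
n = 200_000

def f(x):
    return 4.0 * x[:, 0] + 2.0 * x[:, 1]

a, b = rng.uniform(size=(n, 2)), rng.uniform(size=(n, 2))
fa, fb = f(a), f(b)
var_y = fa.var()

s = []
for i in range(2):
    b_ai = b.copy()
    b_ai[:, i] = a[:, i]                 # share factor i, resample the rest
    s.append(float(np.mean(fa * (f(b_ai) - fb)) / var_y))

print([round(v, 2) for v in s])          # close to the analytical [0.8, 0.2]
```

    Each index needs only model evaluations on the two base samples plus one hybrid sample per factor, which is why quasi-random point sets such as LPτ sequences pay off: they reduce the integration error of exactly these averages.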

  7. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
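
    The correlation ratio ("main effect") has a simple sampling interpretation, sketched here with plain binning rather than McKay's replicated Latin hypercube scheme (a simplification for illustration): estimate Var(E[Y|Xi]) from conditional bin means and divide by Var(Y). For the additive toy model below the ratios approximate Sobol's first-order indices, about 0.8 and 0.2 analytically.

```python
import numpy as np

# Correlation-ratio ("main effect") estimate via conditional bin means.
rng = np.random.default_rng(3)
x = rng.uniform(size=(100_000, 2))
y = 4.0 * x[:, 0] + 2.0 * x[:, 1]

def main_effect(xi, y, bins=50):
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(bins)])
    return float(bin_means.var() / y.var())     # Var(E[Y|X_i]) / Var(Y)

print([round(main_effect(x[:, i], y), 2) for i in range(2)])
```

    The practical issues the abstract raises show up directly here: too few samples per bin inflate the conditional means' variance, which is the kind of error an adaptive accuracy-assessment procedure is meant to control.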

  8. Sensitivity of corneal biomechanical and optical behavior to material parameters using design of experiments method.

    Science.gov (United States)

    Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung

    2018-02-01

    The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.
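
    The design-of-experiments logic can be sketched generically (the response function below is a made-up surrogate, not the finite element cornea model): in a two-level full factorial design, the main effect of a factor is the mean response at its high level minus the mean at its low level, and a dominant factor such as matrix stiffness shows up directly in that contrast.

```python
import numpy as np
from itertools import product

# Two-level full factorial design in three factors and its main effects.
# The response is a hypothetical surrogate in which "matrix" dominates.
levels = np.array(list(product([-1.0, 1.0], repeat=3)))

def response(m, f, nl):
    return 3.0 * m + 1.0 * f + 0.5 * nl + 0.2 * m * f   # assumed surrogate

y = response(levels[:, 0], levels[:, 1], levels[:, 2])
effects = {
    name: round(float(y[levels[:, i] == 1.0].mean() - y[levels[:, i] == -1.0].mean()), 6)
    for i, name in enumerate(["matrix", "fiber", "nonlinearity"])
}
print(effects)   # → {'matrix': 6.0, 'fiber': 2.0, 'nonlinearity': 1.0}
```

    Because each factor's high and low runs are balanced over the other factors, the interaction term averages out of the main effects; estimating it requires the corresponding interaction contrast.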

  9. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone
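
    The sampling scheme itself is compact enough to sketch (a generic Latin hypercube sampler on the unit hypercube, not the study's code): each parameter's range is split into n equal-probability strata, one draw is taken per stratum, and the columns are permuted independently so every one-dimensional projection covers its range evenly.

```python
import numpy as np

# Minimal Latin hypercube sampler on the unit hypercube.
rng = np.random.default_rng(4)

def latin_hypercube(n, d):
    # one uniform draw inside each of the n strata, per dimension
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n
    for j in range(d):
        rng.shuffle(u[:, j])          # decouple the pairing across dimensions
    return u

sample = latin_hypercube(10, 3)
# each column places exactly one point in each interval [k/10, (k+1)/10)
print(np.sort((sample * 10).astype(int), axis=0)[:, 0])   # → [0 1 2 3 4 5 6 7 8 9]
```

    Mapping each column through a parameter's inverse CDF (e.g. a lognormal for hydraulic conductivity) then yields the stratified parameter sets used for the uncertainty runs.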

  10. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

    Through the last decade the experience economy has found solid ground and manifested itself as a parameter where business and organizations can differentiate from competitors. The fundamental premise is the one found in Pine & Gilmore's model from 1999 of 'the progression of economic value', where ... produced, designed or staged experience that gains the most profit or creates return of investment. It becomes more obvious that other parameters in the future can be a vital part of the experience economy and one of these is business model innovation. Business model innovation is about continuous ...

  11. Model of urban water management towards water sensitive city: a literature review

    Science.gov (United States)

    Maftuhah, D. I.; Anityasari, M.; Sholihah, M.

    2018-04-01

    Nowadays, many cities are faced with complex issues such as climate change and social, economic, cultural, and environmental problems, especially urban water. In other words, the city has to struggle with the challenge of ensuring its sustainability in all aspects. This research focuses on how to ensure city sustainability and resilience in urban water management. Much research has been conducted not only on urban water management but also on sustainability itself. Moreover, water sustainability is shifting from urban water management towards the water sensitive city. This transition requires attention to comprehensive aspects such as social and institutional dynamics, technical innovation, and local content. Some literature on models of urban water management and the transition towards water sensitivity has been reviewed in this study. This study proposes a discussion of models of urban water management and the transition towards the water sensitive city. Research findings suggest that many different models have been developed for urban water management, but they are not yet comprehensive, and only a few studies discuss the transition towards the water sensitive and resilient city. The drawbacks of previous research identify the gap that this study aims to fill. Therefore, the paper contributes a general framework for urban water management modelling studies.

  12. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
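
    One analysis of the kind such toolboxes automate, partial rank correlation, can be sketched directly (an illustrative from-scratch implementation, not SaSAT's own code): rank-transform the inputs and output, regress each input and the output on the remaining inputs, and correlate the residuals.

```python
import numpy as np

# Partial rank correlation coefficients (PRCC) from scratch.
rng = np.random.default_rng(5)

def ranks(a):
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(x, y):
    xr, yr = ranks(x), ranks(y)
    out = []
    for i in range(x.shape[1]):
        z = np.column_stack([np.ones(len(yr)), np.delete(xr, i, axis=1)])
        beta_x = np.linalg.lstsq(z, xr[:, i], rcond=None)[0]
        beta_y = np.linalg.lstsq(z, yr, rcond=None)[0]
        rx, ry = xr[:, i] - z @ beta_x, yr - z @ beta_y   # partial out the others
        out.append(float(np.corrcoef(rx, ry)[0, 1]))
    return out

x = rng.uniform(size=(5_000, 3))
y = 10.0 * x[:, 0] - 5.0 * x[:, 1] + rng.normal(scale=0.5, size=5_000)  # x3 inert
print([round(c, 2) for c in prcc(x, y)])
```

    The PRCC magnitudes are what a tornado plot typically displays: a strong positive bar for the first parameter, a strong negative bar for the second, and a near-zero bar for the inert third.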

  13. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation
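
    One reason rank-based techniques appear in such method comparisons can be shown in a few lines (an illustrative aside, not taken from the report): for a monotonic but strongly nonlinear model with a right-skewed output, the Pearson correlation understates the dependence that the Spearman (rank) correlation captures; strongly non-monotonic models defeat both, which is why variance-based measures are also considered.

```python
import numpy as np

# Pearson vs. Spearman correlation for a right-skewed, monotonic response.
rng = np.random.default_rng(8)
x = rng.uniform(size=(20_000,))
y = np.exp(8.0 * x)                      # heavily right-skewed output

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    r = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(r(a), r(b))[0, 1])

print(round(pearson(x, y), 2), round(spearman(x, y), 2))
```

    The rank transform makes the perfectly monotonic relation exactly linear in rank space, while the raw Pearson coefficient is dragged down by the skewed tail.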

  14. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  15. Metals Are Important Contact Sensitizers: An Experience from Lithuania

    Directory of Open Access Journals (Sweden)

    Kotryna Linauskienė

    2017-01-01

    Full Text Available Background. Metals are very frequent sensitizers causing contact allergy and allergic contact dermatitis worldwide; up-to-date data based on patch test results have proved useful for the identification of the problem. Objectives. In this retrospective study, the prevalence of contact allergy to metals (nickel, chromium, palladium, gold, cobalt, and titanium) in Lithuania is analysed. Patients/Methods. Clinical and patch test data of 546 patients patch tested in 2014–2016 at Vilnius University Hospital Santariskiu Klinikos were analysed and compared with previously published data. Results. Almost a third of tested patients (29.56%) were sensitized to nickel. Younger women were more often sensitized to nickel than older ones (36% versus 22.8%, p=0.0011). Women were significantly more often sensitized to nickel than men (33% versus 6.1%, p<0.0001). Younger patients were more often sensitized to cobalt (11.6% versus 5.7%, p=0.0183). Sensitization to cobalt was related to sensitization to nickel (p<0.0001). Face dermatitis and oral discomfort were related to gold allergy (28% versus 6.9% dermatitis of other parts, p<0.0001). Older patients were patch test positive to gold(I) sodium thiosulfate statistically significantly more often than younger ones (44.44% versus 21.21%, p=0.0281). Conclusions. Nickel, gold, cobalt, and chromium are the leading metal sensitizers in Lithuania. Cobalt sensitization is often accompanied by sensitization to nickel. The sensitization rates to palladium and nickel indicate possible cross-reactivity. No sensitization to titanium was found.

  16. Studying the physics potential of long-baseline experiments in terms of new sensitivity parameters

    International Nuclear Information System (INIS)

    Singh, Mandip

    2016-01-01

    We investigate physics opportunities to constrain the leptonic CP-violating phase δ_CP through numerical analysis of working neutrino oscillation probability parameters, in the context of long-baseline experiments. Numerical analysis of two parameters, the “transition probability δ_CP phase sensitivity parameter (A^M)” and the “CP-violation probability δ_CP phase sensitivity parameter (A^CP),” as functions of beam energy and/or baseline has been carried out. This is an elegant technique for broadly analysing different experiments to constrain the δ_CP phase and also for investigating the mass hierarchy in the leptonic sector. Positive and negative values of the parameter A^CP, corresponding to either hierarchy in specific beam energy ranges, could be a very promising way to explore the mass hierarchy and the δ_CP phase. The keys to more robust bounds on the δ_CP phase are improvements of the involved detection techniques to explore lower energies and relatively long baseline regions with better experimental accuracy.

  17. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). The inputs of ANFIS are the Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration subset (55%), a training subset (27%), and a test subset (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each of these is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid; it shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
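The "error cell" criterion used for model selection above is straightforward to compute. A minimal sketch in Python, with entirely synthetic head and ground-elevation grids standing in for the study's (50×50)-m interpolation (the arrays and numbers are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 40x40 grid: ground elevation and an interpolated head surface.
soil = 100.0 + rng.normal(0.0, 2.0, size=(40, 40))       # soil elevation (m)
head = soil - 1.5 + rng.normal(0.0, 1.0, size=(40, 40))  # interpolated head (m)

# "Error cells": cells where the interpolated hydraulic head sits above the
# ground surface, which is physically implausible for this snapshot campaign.
error_cells = int(np.sum(head > soil))
error_fraction = error_cells / head.size
```

Ranking several candidate interpolators by `error_fraction` would then mirror the paper's selection of the ANFIS model with the fewest error cells.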

  18. GTS-LCS, in-situ experiment 2. Modeling of tracer test 09-03

    International Nuclear Information System (INIS)

    Manette, M.; Saaltink, M.W.; Soler, J.M.

    2015-02-01

    Within the framework of the GTS-LCS project (Grimsel Test Site - Long-Term Cement Studies), an in-situ experiment lasting about 5 years was started in 2009 to study water-cement-rock interactions in a fractured granite. Prior to the experiment, a tracer test was performed to characterize the initial flow and transport properties of the rock around the experimental boreholes. This study reports on the model interpretation of tracer test 09-03. The calculations were performed by means of a two-dimensional model (homogeneous fracture plane including 3 boreholes) using the Retraso-CodeBright software package. In the tracer test, Grimsel groundwater containing the tracer (uranine) was circulated in the emplacement borehole for 43 days (zero injection flow rate); circulation continued without tracer afterwards. Water was extracted at the observation and extraction boreholes. A model sensitivity analysis comparing model results with measured tracer concentrations identified 3 cases in which the evolution of tracer concentrations in the 3 boreholes was reproduced satisfactorily. In these cases a low-permeability skin affected the emplacement and observation boreholes, while no skin appeared to affect the extraction borehole. The background hydraulic gradient seems to have no effect on the results of the tracer test. These results will be applied in the calculation of the initial flow field for the reactive transport phase of in-situ experiment 2 (interaction between pre-hardened cement and fractured granite at Grimsel). (orig.)

  19. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  20. First evidence that drugs of abuse produce behavioral sensitization and cross-sensitization in planarians

    Science.gov (United States)

    Rawls, Scott M.; Patil, Tavni; Yuvasheva, Ekaternia; Raffa, Robert B.

    2010-01-01

    Behavioral sensitization in mammals, including humans, is sensitive to factors such as administration route, testing environment, and pharmacokinetic confounds, unrelated to the drugs themselves, that are difficult to eliminate. Simpler animals less susceptible to these confounding influences may be advantageous substitutes for studying sensitization. We tested this hypothesis by determining if planarians display sensitization and cross-sensitization to cocaine and glutamate. Planarian hyperactivity was quantified as the number of C-like hyperkinesias during a 1-min drug exposure. Planarians exposed initially to cocaine (or glutamate) on day 1 were challenged with cocaine (or glutamate) after 2 or 6 days of abstinence. Acute cocaine or glutamate produced concentration-related hyperactivity. Cocaine or glutamate challenge after 2 and 6 days of abstinence enhanced the hyperactivity, indicating the substances produced planarian behavioral sensitization (pBS). Cross-sensitization experiments showed that cocaine produced greater hyperactivity in planarians previously exposed to glutamate than in glutamate-naïve planarians, and vice versa. Behavioral responses were pharmacologically selective because neither scopolamine nor caffeine produced pBS despite causing hyperactivity after initial administration, and acute GABA did not cause hyperactivity. Demonstration of pharmacologically-selective behavioral sensitization in planarians suggests these flatworms represent a sensitive in vivo model to study cocaine behavioral sensitization and to screen potential abuse-deterrent therapeutics. PMID:20512030

  1. How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?

    Science.gov (United States)

    Wen, Jessica; Koo, Soh Myoung; Lape, Nancy

    2018-02-01

    While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  2. Sensitivity of MENA Tropical Rainbelt to Dust Shortwave Absorption: A High Resolution AGCM Experiment

    KAUST Repository

    Bangalath, Hamza Kunhu; Stenchikov, Georgiy L.

    2016-01-01

    Shortwave absorption is one of the most important, but the most uncertain, components of direct radiative effect by mineral dust. It has a broad range of estimates from different observational and modeling studies and there is no consensus on the strength of absorption. To elucidate the sensitivity of the Middle East and North Africa (MENA) tropical summer rainbelt to a plausible range of uncertainty in dust shortwave absorption, AMIP-style global high resolution (25 km) simulations are conducted with and without dust, using the High-Resolution Atmospheric Model (HiRAM). Simulations with dust comprise three different cases by assuming dust as a very efficient, standard and inefficient absorber. Inter-comparison of these simulations shows that the response of the MENA tropical rainbelt is extremely sensitive to the strength of shortwave absorption. Further analyses reveal that the sensitivity of the rainbelt stems from the sensitivity of the multi-scale circulations that define the rainbelt. The maximum response and sensitivity are predicted over the northern edge of the rainbelt, geographically over Sahel. The sensitivity of the responses over the Sahel, especially that of precipitation, is comparable to the mean state. Locally, the response in precipitation reaches up to 50% of the mean, while dust is assumed to be a very efficient absorber. Taking into account that Sahel has a very high climate variability and is extremely vulnerable to changes in precipitation, the present study suggests the importance of reducing uncertainty in dust shortwave absorption for a better simulation and interpretation of the Sahel climate.

  4. Assessing the water balance in the Sahel : Impact of small scale rainfall variability on runoff. Part 2 : Idealized modeling of runoff sensitivity

    OpenAIRE

    Vischel, Théo; Lebel, Thierry

    2007-01-01

    As in many other semi-arid regions in the world, the Sahelian hydrological environment is characterized by a mosaic of small endoreic catchments with dry soil surface conditions producing mostly Hortonian runoff. Using an SCS-type event based rainfall-runoff model, an idealized modeling experiment of a Sahelian environment is set up to study the sensitivity of runoff to small scale rainfall variability. A set of 548 observed rain events is used to force the hydrological model to study the sen...

  5. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
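The core quantities in this record, a variance-based sensitivity index and a bootstrap confidence interval for it, can be sketched on a cheap toy model. The binned-conditional-mean estimator below is a generic stand-in for the meta-model surrogate described above (the model, bin count, and replicate count are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy "simulation model": x1 dominates the output, x3 is inert.
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.0 * x[:, 2]

n = 20_000
x = rng.uniform(0.0, 1.0, size=(n, 3))
y = model(x)

def first_order_index(xi, y, bins=50):
    # S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning X_i into
    # equal-count bins and averaging Y within each bin.
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

s1 = first_order_index(x[:, 0], y)

# Bootstrap confidence interval for the index (resample rows with replacement).
boot = []
for _ in range(200):
    sel = rng.integers(0, n, size=n)
    boot.append(first_order_index(x[sel, 0], y[sel]))
ci_low, ci_high = np.quantile(boot, [0.025, 0.975])
```

On this toy model `x1` carries most of the output variance, so `s1` lands near 0.94 with a narrow bootstrap interval; the bootstrap step is what quantifies the estimation error that the abstract notes most approaches ignore.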

  6. Sensitivity analysis of Repast computational ecology models with R/Repast.

    Science.gov (United States)

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on simulation output, and it should be a standard component of any work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.

  7. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example.

    Science.gov (United States)

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model sensitive parameters showed a significant linear correlation

  8. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Sensitivity study of reduced models of the activated sludge process, for the purposes of parameter estimation and process optimisation: Benchmark process with ASM1 and UCT reduced biological models. S du Plessis and R Tzoneva*. Department of Electrical Engineering, Cape Peninsula University of ...

  9. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  10. NRSF-dependent epigenetic mechanisms contribute to programming of stress-sensitive neurons by neonatal experience, promoting resilience.

    Science.gov (United States)

    Singh-Taylor, A; Molet, J; Jiang, S; Korosi, A; Bolton, J L; Noam, Y; Simeone, K; Cope, J; Chen, Y; Mortazavi, A; Baram, T Z

    2018-03-01

    Resilience to stress-related emotional disorders is governed in part by early-life experiences. Here we demonstrate experience-dependent re-programming of stress-sensitive hypothalamic neurons, which takes place through modification of neuronal gene expression via epigenetic mechanisms. Specifically, we found that augmented maternal care reduced glutamatergic synapses onto stress-sensitive hypothalamic neurons and repressed expression of the stress-responsive gene, Crh. In hypothalamus in vitro, reduced glutamatergic neurotransmission recapitulated the repressive effects of augmented maternal care on Crh, and this required recruitment of the transcriptional repressor, repressor element-1 silencing transcription factor/neuron restrictive silencing factor (NRSF). Increased NRSF binding to chromatin was accompanied by sequential repressive epigenetic changes which outlasted NRSF binding. Chromatin immunoprecipitation-seq (ChIP-seq) analyses of NRSF targets identified gene networks that, in addition to Crh, likely contributed to the augmented care-induced phenotype, including diminished depression-like and anxiety-like behaviors. Together, we believe these findings provide the first causal link between enriched neonatal experience, synaptic refinement and induction of epigenetic processes within specific neurons. They uncover a novel mechanistic pathway from neonatal environment to emotional resilience.

  11. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    Science.gov (United States)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a
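The contrast drawn above between local (linearized) indices and a global variance-based index can be made concrete with a toy model on which the two disagree. The pick-freeze scheme below is a standard Sobol first-order estimator, not the specific indices of this study; the model and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    # Toy outcome: linear in x1, strongly nonlinear (quadratic) in x2.
    return 2.0 * x1 + 5.0 * x2 ** 2

# Local sensitivity: one-at-a-time central finite differences at a nominal point.
x1_0, x2_0, h = 0.0, 0.0, 1e-6
local_s1 = (model(x1_0 + h, x2_0) - model(x1_0 - h, x2_0)) / (2 * h)
local_s2 = (model(x1_0, x2_0 + h) - model(x1_0, x2_0 - h)) / (2 * h)

# Global variance-based (Sobol) first-order indices via pick-freeze sampling.
n = 100_000
a = rng.uniform(-1, 1, size=(n, 2))
b = rng.uniform(-1, 1, size=(n, 2))

def sobol_first(i):
    # Share only column i between the two samples; the covariance of the
    # paired outputs estimates Var(E[Y | X_i]).
    mixed = b.copy()
    mixed[:, i] = a[:, i]
    ya = model(a[:, 0], a[:, 1])
    ymix = model(mixed[:, 0], mixed[:, 1])
    return np.cov(ya, ymix)[0, 1] / ya.var()

global_s1, global_s2 = sobol_first(0), sobol_first(1)
```

The local index at the nominal point reports zero sensitivity to `x2`, while the global index attributes most of the output variance to it, which is exactly why the abstract notes that local indices neglect nonlinear and joint effects.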

  12. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d×10³) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable the development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
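The emulator workflow described above can be sketched with a hand-rolled Gaussian-process posterior mean; a toy analytic function stands in for the 1D vascular model, and the kernel, lengthscale, and sample sizes are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def mechanistic_model(x):
    # Stand-in for an expensive simulator run (imagine minutes per evaluation).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel between two point sets.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Train the emulator on a small number of mechanistic runs (order O(d)).
x_train = rng.uniform(0.0, 1.0, size=(40, 2))
y_train = mechanistic_model(x_train)
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def emulate(x_new):
    # GP posterior mean with a zero prior mean: k(x*, X) K^{-1} y.
    return rbf(x_new, x_train) @ alpha

# The large Monte Carlo sample required for variance-based sensitivity
# analysis then hits the cheap emulator instead of the simulator.
x_big = rng.uniform(0.0, 1.0, size=(50_000, 2))
y_emu = emulate(x_big)
rmse = np.sqrt(np.mean((y_emu - mechanistic_model(x_big)) ** 2))
```

Here 40 training runs yield an emulator accurate enough that the 50,000-point sensitivity sample never touches the mechanistic model, which is the source of the large cost reduction reported above.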

  13. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  14. Tuning the climate sensitivity of a global model to match 20th Century warming

    Science.gov (United States)

    Mauritsen, T.; Roeckner, E.

    2015-12-01

    A climate model's ability to reproduce observed historical warming is sometimes viewed as a measure of quality. Yet, for practical reasons, historical warming cannot be considered a purely empirical result of the modelling efforts, because the desired result is known in advance and is therefore a potential target of tuning. Here we explain how the latest edition of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1.2) atmospheric model (ECHAM6.3), the MPI model to be used during CMIP6, had its climate sensitivity systematically tuned to about 3 K. This was deliberately done in order to improve the match to observed 20th Century warming over the previous model generation (MPI-ESM, ECHAM6.1), which warmed too much and had a sensitivity of 3.5 K. In the process we identified several controls on model cloud feedback that confirm recently proposed hypotheses concerning trade-wind cumulus and high-latitude mixed-phase clouds. We then evaluate the model's fidelity with centennial global warming and discuss the relative importance of climate sensitivity, forcing and ocean heat uptake efficiency in determining the response, as well as possible systematic biases. The activity of targeting historical warming during model development is polarizing the modeling community, with 35 percent of modelers stating that 20th Century warming was rated very important to decisive, whereas 30 percent would not consider it at all. Likewise, opinions diverge as to which measures are legitimate means for improving the model match to observed warming. These results are from a survey conducted in conjunction with the first WCRP Workshop on Model Tuning in fall 2014, answered by 23 modelers. We argue that tuning or constructing models to match observed warming to some extent is practically unavoidable, and as such, in many cases might as well be done explicitly. For modeling groups that have the capability to tune both their aerosol forcing and climate sensitivity there is now a unique

  15. Complete Sensitivity/Uncertainty Analysis of LR-0 Reactor Experiments with MSRE FLiBe Salt and Perform Comparison with Molten Salt Cooled and Molten Salt Fueled Reactor Models

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nicholas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Powers, Jeffrey J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mueller, Don [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    In September 2016, reactor physics measurements were conducted at Research Centre Rez (RC Rez) using the FLiBe (2 ⁷LiF + BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) in the LR-0 low power nuclear reactor. These experiments were intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems using FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL), in collaboration with RC Rez, performed sensitivity/uncertainty (S/U) analyses of these experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objectives of these analyses were (1) to identify potential sources of bias in fluoride salt-cooled and salt-fueled reactor simulations resulting from cross section uncertainties, and (2) to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a final report on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. In the future, these S/U analyses could be used to inform the design of additional FLiBe-based experiments using the salt from MSRE. The key finding of this work is that, for both solid and liquid fueled fluoride salt reactors, radiative capture in ⁷Li is the most significant contributor to potential bias in neutronics calculations within the FLiBe salt.

  16. INFLUENCE OF MODIFIED BIOFLAVONOIDS UPON EFFECTOR LYMPHOCYTES IN MURINE MODEL OF CONTACT SENSITIVITY

    Directory of Open Access Journals (Sweden)

    D. Z. Albegova

    2015-01-01

    Full Text Available Contact sensitivity reaction (CSR) to 2,4-dinitrofluorobenzene (DNFB) in mice is a model of the in vivo immune response and an experimental analogue of contact dermatitis in humans. The CSR sensitization phase begins after primary contact with antigen, lasting 10-15 days in humans and 5-7 days in mice. Repeated skin exposure to the sensitizing substance leads to its recognition and triggers immune inflammatory mechanisms involving DNFB-specific effector T lymphocytes. The CSR reaches its maximum 18-48 hours after re-exposure to the hapten. There is only scarce information in the literature about the effects of flavonoids on CSR, including both stimulatory and inhibitory effects; flavonoids have shown predominantly suppressive effects on CSR development. In our laboratory, a model of contact sensitivity was reproduced in CBA mice by cutaneous sensitization with 2,4-dinitrofluorobenzene. The aim of the study was to identify the mechanisms of the immunomodulatory action of quercetin dihydrate and modified bioflavonoids, using adoptive transfer of contact sensitivity by splenocytes and T-lymphocytes. As shown in our studies, a 30-min pre-treatment of splenocytes and T-lymphocytes from sensitized mice with modified bioflavonoids before cell transfer completely prevented the contact sensitivity reaction in syngeneic recipient mice. This effect was not associated with cell death induction due to apoptosis or cytotoxicity. Quercetin dihydrate caused only partial suppression of the activity of adoptively formed T-lymphocytes, the contact sensitivity effectors. The modified bioflavonoids suppressed adoptive transfer of contact sensitivity more strongly than quercetin dihydrate, without inducing apoptosis of effector cells. 
Thus, the modified bioflavonoid is a promising compound for further studies in a model of contact sensitivity, due to its higher ability to suppress transfer of CSR with

  17. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization

    Science.gov (United States)

    Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2016-01-01

    Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we developed reliable computational models and applied them in virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin sensitization responses for a set of 135 unique chemicals was low (R = 28-43%), although several chemical classes had high concordance. We developed predictive QSAR models of all available human data with an external correct classification rate (CCR) of 71%. A consensus model integrating concordant QSAR predictions and LLNA results afforded a higher CCR of 82%, but at the expense of reduced external dataset coverage (52%). We used the developed QSAR models for virtual screening of the CosIng database and identified 1061 putative skin sensitizers; for seventeen of these compounds, we found published evidence of skin sensitization effects. The models reported herein provide a more accurate alternative to LLNA testing for human skin sensitization assessment across diverse chemical data. In addition, they can be used to guide the structural optimization of toxic compounds to reduce their skin sensitization potential. PMID:28630595
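The consensus logic described above (predict only where the QSAR model and LLNA agree, trading coverage for accuracy) can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline; the labels and the balanced-accuracy definition of CCR are assumptions for demonstration.

```python
# Toy illustration (not the authors' pipeline): balanced-accuracy CCR and
# the coverage cost of a consensus scheme that predicts only on agreement.

def ccr(y_true, y_pred):
    """Correct classification rate as balanced accuracy:
    mean of sensitivity and specificity."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

def consensus(pred_a, pred_b):
    """Keep only the chemicals on which both predictors agree."""
    idx = [i for i, (a, b) in enumerate(zip(pred_a, pred_b)) if a == b]
    return idx, [pred_a[i] for i in idx]

# invented labels: 1 = sensitizer, 0 = non-sensitizer
truth = [1, 1, 0, 0, 1, 0, 1, 0]
qsar  = [1, 0, 0, 0, 1, 1, 1, 0]
llna  = [1, 1, 0, 1, 1, 1, 1, 0]

idx, cons_pred = consensus(qsar, llna)
coverage = len(idx) / len(truth)                  # fraction of chemicals retained
cons_ccr = ccr([truth[i] for i in idx], cons_pred)
```

On this invented data the consensus CCR exceeds the single-model CCR at the cost of reduced coverage, mirroring the qualitative trade-off (82% CCR at 52% coverage) reported in the abstract.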

  18. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    Directory of Open Access Journals (Sweden)

    C. Marrero

    2009-04-01

    Full Text Available In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained winds and intense gusts caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, definition or not of nested grids, horizontal resolution, and number of vertical levels). The Factor Separation Method was applied in order to identify the major model sensitivity parameters under this unusual meteorological situation. Results, expressed as percentage changes relative to a control run simulation, demonstrated that the boundary layer and surface layer schemes, horizontal resolution, the hydrostaticity option, and nesting grid activation were the model configuration parameters with the greatest impact on the simulated 48 h maximum 10 m horizontal wind speed.
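The Factor Separation Method mentioned above (due to Stein and Alpert) isolates the pure contribution of each switched factor, and their interaction, from a small set of simulations. A minimal two-factor sketch, with hypothetical wind-speed values rather than the paper's WRF output:

```python
# Minimal two-factor Stein-Alpert factor separation; the wind-speed values
# are hypothetical, not output from the Delta simulations.

def factor_separation(f0, f1, f2, f12):
    """f0: both factors off; f1, f2: exactly one factor on; f12: both on.
    Returns the pure contribution of each factor and their interaction."""
    pure1 = f1 - f0
    pure2 = f2 - f0
    interaction = f12 - f1 - f2 + f0
    return pure1, pure2, interaction

# e.g. 48 h maximum 10 m wind speed (m/s) from four runs toggling two options
p1, p2, p12 = factor_separation(f0=18.0, f1=22.0, f2=19.5, f12=25.0)
```

By construction the decomposition is exact: f0 + p1 + p2 + p12 recovers the both-factors-on run f12, so every change relative to the control run is attributed to a pure or interaction term.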

  19. Atmospheric statistical dynamic models. Climate experiments: albedo experiments with a zonal atmospheric model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of the upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a mechanism for the initial global cooling that initiates a glacial advance, thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts.

  20. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    Science.gov (United States)

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, as an important method for screening out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison of field measurement data with the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that strongly influence NPP. On this basis, we quantitatively estimated the sensitivity of the screened parameters, calculating the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result with a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interactions between parameters in the BIOME-BGC model. The most influential parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of other parameter interactions.

  1. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    Scatterometers provide all-day, large-scale wind field information, and their application, especially to wind retrieval, always attracts meteorologists. Various causes produce large wind-direction errors, so it is important to find where the error mainly comes from: does it result mainly from the background field, from the normalized radar cross-section (NRCS), or from the method of wind retrieval? First, based on SDP2.0, the simulated ‘true’ NRCS is calculated from the simulated ‘true’ wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated ‘true’ wind under a non-divergence constraint, and the simulated ‘measured’ NRCS is formed by adding noise to the simulated ‘true’ NRCS. Sensitivity experiments are then performed, and the new regularization method is used to improve ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than to noise in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially in the case of large background error. This work provides important information and a new method for wind retrieval with real data. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
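As a rough illustration of the Tikhonov idea used here, the sketch below solves a deliberately ill-conditioned two-parameter least-squares problem through the regularized normal equations. The matrix, data, and regularization parameter are invented for demonstration and are unrelated to the actual NSCAT2 retrieval.

```python
# Toy Tikhonov (ridge) regularization on a nearly singular 2-unknown
# least-squares problem; matrix, data and lambda are invented and unrelated
# to the actual wind-retrieval problem.

def tikhonov_2x2(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 via the regularized
    normal equations (A^T A + lam*I) x = A^T b, for 2 unknowns."""
    n = len(A)
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(n)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    x1 = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return [x0, x1]

A = [[1.0, 1.0], [1.0, 1.001]]  # nearly collinear columns: ill-conditioned
b = [2.0, 2.001]                # consistent with x = [1, 1]
x = tikhonov_2x2(A, b, lam=1e-3)
```

A small lam keeps the solution near [1, 1] while bounding the noise amplification that the nearly singular normal equations would otherwise produce; choosing lam well is the "appropriate regularization parameter" question the paper addresses.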

  2. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  3. Distinguishing bias from sensitivity effects in multialternative detection tasks.

    Science.gov (United States)

    Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I

    2014-08-21

    Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.
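For readers unfamiliar with the sensitivity/bias distinction that the paper generalizes, the classical two-alternative signal detection decomposition can be computed in a few lines. This is the textbook d′/criterion calculation, not the paper's multidimensional multialternative model.

```python
# Textbook two-alternative signal detection: d-prime (sensitivity) versus
# criterion c (response bias). Illustrates the distinction the paper
# generalizes to multialternative tasks; not the paper's model.
from statistics import NormalDist

z = NormalDist().inv_cdf   # probit transform

def dprime_and_bias(hit_rate, fa_rate):
    d = z(hit_rate) - z(fa_rate)           # perceptual sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias (0 = unbiased)
    return d, c

d, c = dprime_and_bias(hit_rate=0.84, fa_rate=0.16)
```

Symmetric hit and false-alarm rates give zero bias; the same hit rate with a higher false-alarm rate would lower d′ while shifting c, which is exactly the confound the paper's model is designed to untangle for more than two alternatives.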

  4. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibration efforts which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications which are in conformity with its purposes

  5. Design of Experiments : An Overview

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2008-01-01

    Design Of Experiments (DOE) is needed for experiments with real-life systems, and with either deterministic or random simulation models. This contribution discusses the different types of DOE for these three domains, but focusses on random simulation. DOE may have two goals: sensitivity analysis

  6. EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990–2010

    Directory of Open Access Journals (Sweden)

    A. Colette

    2017-09-01

    Full Text Available The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990–2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed with calculation results to at least one experiment tier, and five models have – to date – completed the full set of simulations (and 21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990–2010 time period, (iii) attribution analyses of the respective roles of driving factors (e.g. emissions, boundary conditions, meteorology), and (iv) a dataset based on a multi-model approach, to provide more robust model

  7. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA.

    Directory of Open Access Journals (Sweden)

    Martin T Schultz

    Full Text Available The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: (1) collection of a filtered water sample from the source; (2) extraction of DNA from the filter and isolation in a purified elution; (3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; (4) PCR; and (5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix), assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker.
It is also recommended that invasive species managers articulate concentration
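The five-step thinning structure described in the abstract suggests a simple closed-form sketch: if marker copies captured by a water sample are Poisson-distributed and each processing step retains only a fraction of them, the detection probability follows from the thinned Poisson rate. The volumes and retention fractions below are hypothetical placeholders, not the CAWS protocol values.

```python
# Hypothetical sketch: marker copies in a water sample are Poisson; DNA
# extraction and aliquoting thin the count; detection requires at least one
# copy in a PCR reaction. Parameter values are invented placeholders.
import math

def p_one_reaction(conc_per_L, vol_L, frac_extracted, frac_aliquot):
    """P(>= 1 marker copy reaches a single PCR reaction)."""
    lam = conc_per_L * vol_L * frac_extracted * frac_aliquot
    return 1.0 - math.exp(-lam)

def survey_sensitivity(conc_per_L, vol_L, frac_extracted, frac_aliquot,
                       n_samples, n_pcr):
    """P(detection in at least one reaction of the whole survey),
    assuming reactions and samples are independent."""
    p_rxn = p_one_reaction(conc_per_L, vol_L, frac_extracted, frac_aliquot)
    p_sample = 1.0 - (1.0 - p_rxn) ** n_pcr
    return 1.0 - (1.0 - p_sample) ** n_samples

base    = survey_sensitivity(1.0, 2.0, 0.5, 0.1, n_samples=1, n_pcr=1)
more    = survey_sensitivity(1.0, 2.0, 0.5, 0.1, n_samples=3, n_pcr=8)
big_vol = survey_sensitivity(1.0, 10.0, 0.5, 0.1, n_samples=1, n_pcr=1)
```

With these invented numbers the sketch reproduces the abstract's qualitative findings: sensitivity collapses at low marker concentration, adding samples and PCR replicates helps, and raising sample volume acts directly on the Poisson rate rather than on replicates of an already-weak sample.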

  8. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  9. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
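A toy version of the DELSA idea (derivative-based first-order sensitivity indices evaluated at many points, so parameter importance can be examined across the parameter space) might look as follows. The two-parameter reservoir-like model and the prior variances are invented for illustration, not taken from the paper.

```python
# Invented two-parameter "reservoir" model illustrating the DELSA idea:
# local derivative-based first-order indices computed at many sampled
# points, so importance can vary across the parameter space.
import math
import random

def model(k, s):
    return s * (1.0 - math.exp(-k))   # storage s, rate constant k

def delsa_indices(k, s, var_k, var_s, h=1e-6):
    """Fraction of local first-order variance attributed to each parameter."""
    d_k = (model(k + h, s) - model(k, s)) / h
    d_s = (model(k, s + h) - model(k, s)) / h
    contrib = (d_k ** 2 * var_k, d_s ** 2 * var_s)
    total = contrib[0] + contrib[1]
    return (contrib[0] / total, contrib[1] / total)

rng = random.Random(1)
points = [(rng.uniform(0.1, 3.0), rng.uniform(0.5, 2.0)) for _ in range(100)]
indices = [delsa_indices(k, s, var_k=0.25, var_s=0.25) for k, s in points]
```

At small k the rate constant dominates, while at large k the response saturates and the storage parameter takes over: precisely the importance-varies-across-parameter-space behaviour that DELSA is designed to expose at a fraction of the cost of a full Sobol' analysis.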

  10. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Full Text Available Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating the parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including the local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and the sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and that about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated the screening results and confirmed their consistency with the physical interpretation of the model parameters.
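Of the screening methods compared above, the Morris method is simple enough to sketch in pure Python: one-at-a-time perturbations along random trajectories yield elementary effects, summarized by μ* (overall importance) and σ (interaction/nonlinearity). The toy three-parameter function below is an assumption for demonstration, not a CoLM flux.

```python
# Pure-Python sketch of Morris elementary-effects screening for a toy
# 3-parameter function (invented for demonstration, not a CoLM output).
import random
import statistics

def g(x):
    return x[0] + 2.0 * x[1] ** 2 + 0.01 * x[2]

def morris(func, k, r=20, delta=0.5, seed=0):
    """r one-at-a-time trajectories on [0, 1]^k; returns mu_star (mean
    |elementary effect|, importance) and sigma (spread of effects)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = [rng.random() * (1.0 - delta) for _ in range(k)]  # keep x+delta in [0,1]
        for j in rng.sample(range(k), k):      # random perturbation order
            x_new = list(x)
            x_new[j] += delta
            effects[j].append((func(x_new) - func(x)) / delta)
            x = x_new
    mu_star = [statistics.mean(abs(e) for e in ee) for ee in effects]
    sigma = [statistics.stdev(ee) for ee in effects]
    return mu_star, sigma

mu_star, sigma = morris(g, k=3)
```

As expected, the quadratic parameter screens as most important with nonzero σ (its effect depends on where it is evaluated), the linear parameter has σ ≈ 0, and the third parameter is negligible; this is the cheap qualitative ranking that a quantitative Sobol' analysis would then refine.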

  11. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait

    NARCIS (Netherlands)

    Carbone, V.; Krogt, M.M. van der; Koopman, H.F.J.M.; Verdonschot, N.J.

    2016-01-01

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of

  12. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait

    NARCIS (Netherlands)

    Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph

    2016-01-01

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle–tendon (MT) model parameters for each of

  13. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    Science.gov (United States)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid-latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies, where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters through observational studies or by improving existing parameterizations in the sea ice model.

  14. Global sensitivity analysis of a model related to memory formation in synapses: Model reduction based on epistemic parameter uncertainties and related issues.

    Science.gov (United States)

    Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya

    2017-04-21

    We investigate the epistemic uncertainties of parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses, using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca2+ patterns at different frequencies as inputs. We explore the effects of parameters on the key outputs of the model to discover the most sensitive ones using GSA and the partial rank correlation coefficient (PRCC), and to understand, based on the biology of the problem, why they are sensitive and others are not. We also extend the model to add presynaptic neurotransmitter vesicle release, so that action potentials of different frequencies serve as inputs. We perform GSA on this extended model to show that the parameter sensitivities differ for the extended model, as shown by PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs, and that GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA, and hence for possible model reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
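A PRCC of the kind used above can be illustrated on an invented two-parameter model: rank-transform the inputs and output, then apply the first-order partial-correlation formula. The model y = 5x + 0.2z + noise is chosen so that x should dominate once z is controlled for, and vice versa only weakly.

```python
# Illustrative PRCC for an invented model y = 5x + 0.2z + noise:
# rank-transform, then use the first-order partial-correlation formula.
import random
import statistics

def ranks(v):
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def pearson(a, b):
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def prcc(x, ctrl, y):
    """Partial rank correlation of x with y, controlling for ctrl."""
    rx, rc, ry = ranks(x), ranks(ctrl), ranks(y)
    rxy, rxc, rcy = pearson(rx, ry), pearson(rx, rc), pearson(rc, ry)
    return (rxy - rxc * rcy) / ((1 - rxc ** 2) * (1 - rcy ** 2)) ** 0.5

rng = random.Random(42)
x = [rng.random() for _ in range(200)]
z = [rng.random() for _ in range(200)]
y = [5.0 * a + 0.2 * b + rng.gauss(0.0, 0.1) for a, b in zip(x, z)]
prcc_x = prcc(x, z, y)   # strong driver: expected near 1
prcc_z = prcc(z, x, y)   # weak driver, visible once x is controlled for
```

Note how controlling for the dominant parameter makes the weak one's monotone influence measurable at all; this is why PRCC landscapes, rather than raw correlations, are used to rank parameters.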

  15. Comparison of crop yield sensitivity to ozone between open-top chamber and free-air experiments.

    Science.gov (United States)

    Feng, Zhaozhong; Uddling, Johan; Tang, Haoye; Zhu, Jianguo; Kobayashi, Kazuhiko

    2018-02-02

    Assessments of the impacts of ozone (O3) on regional and global food production are currently based on results from experiments using open-top chambers (OTCs). However, there are concerns that these impact estimates might be biased due to the environmental artifacts imposed by this enclosure system. In this study, we collated O3 exposure and yield data for three major crop species (wheat, rice, and soybean) for which O3 experiments have been conducted with OTCs as well as with the ecologically more realistic free-air O3 elevation (O3-FACE) exposure system, both within the same cultivation region and country. For all three crops, we found that the sensitivity of crop yield to the O3 metric AOT40 (accumulated hourly O3 exposure above a cut-off threshold concentration of 40 ppb) significantly differed between OTC and O3-FACE experiments. In wheat and rice, O3 sensitivity was higher in O3-FACE than in OTC experiments, while the opposite was the case for soybean. In all three crops, these differences could be linked to factors influencing stomatal conductance (manipulation of water inputs, passive chamber warming, and cultivar differences in gas exchange). Our study thus highlights the importance of accounting for factors that control stomatal O3 flux when applying experimental data to assess O3 impacts on crops at large spatial scales. © 2018 John Wiley & Sons Ltd.
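The AOT40 metric referenced above has a simple definition: the accumulated excess of hourly ozone concentration over a 40 ppb cut-off. A minimal sketch with invented hourly values:

```python
# AOT40: accumulated hourly ozone exposure above a 40 ppb cut-off.
# The hourly concentrations below are invented for illustration.

def aot40(hourly_ppb, threshold=40.0):
    """Sum of (C - threshold) over hours with C > threshold; units: ppb h."""
    return sum(c - threshold for c in hourly_ppb if c > threshold)

day = [35, 42, 55, 61, 48, 38, 70]   # one toy run of daylight hours (ppb)
exposure = aot40(day)                # 2 + 15 + 21 + 8 + 30 = 76 ppb h
```

Because hours at or below the threshold contribute nothing, AOT40 weights peak episodes heavily, which is one reason yield-versus-AOT40 slopes are sensitive to the exposure system used, as the abstract reports.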

  16. Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    2017-11-01

    The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the assessment of a micropollutant (namely, sulfamethoxazole, SMX). The model also takes into account the interactions between the three components of the system: the sewer system (SS), the wastewater treatment plant (WWTP) and the receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by a different combination of sub-system uncertainties (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for sensitivity analysis in order to select the key factors affecting RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used to fix some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to a 95% share of the total variance for SSMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.

  17. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach in which distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates, or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example the geometry and volumetric distribution of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating the fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
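The FRI-to-probability step described in Methods invites a simple Monte Carlo sketch: sample applied load and bone strength from distributions and count how often FRI = load/strength exceeds 1. The normal distributions and the numbers below are hypothetical, not the BFxRM's calibrated inputs.

```python
# Hypothetical Monte Carlo sketch of the FRI-to-probability step:
# fracture is predicted when FRI = applied load / bone strength > 1.
# Distributions and values are invented, not BFxRM inputs.
import random

def fracture_probability(load_mean, load_sd, strength_mean, strength_sd,
                         n=100_000, seed=7):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        load = rng.gauss(load_mean, load_sd)              # applied load (N)
        strength = rng.gauss(strength_mean, strength_sd)  # failure load (N)
        if load / strength > 1.0:                         # FRI > 1 => fracture
            failures += 1
    return failures / n

p = fracture_probability(load_mean=2000.0, load_sd=300.0,
                         strength_mean=3000.0, strength_sd=400.0)
```

With these numbers load minus strength is approximately normal with mean −1000 and standard deviation 500, so the estimate should sit near Φ(−2) ≈ 0.023; widening either distribution (i.e., more parameter uncertainty) raises the predicted probability, which is why the study targets the largest parameter uncertainties first.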

  18. Rejection Sensitivity, Jealousy, and the Relationship to Interpersonal Aggression.

    Science.gov (United States)

    Murphy, Anna M; Russell, Gemma

    2018-07-01

    The development and maintenance of interpersonal relationships lead individuals to risk rejection in the pursuit of acceptance. Some individuals are predisposed to experience a hypersensitivity to rejection that is hypothesized to be related to jealous and aggressive reactions within interpersonal relationships. The current study used convenience sampling to recruit 247 young adults to evaluate the relationship between rejection sensitivity, jealousy, and aggression. A mediation model was used to test three hypotheses: Higher scores of rejection sensitivity would be positively correlated to higher scores of aggression (Hypothesis 1); higher scores of rejection sensitivity would be positively correlated to higher scores of jealousy (Hypothesis 2); jealousy would mediate the relationship between rejection sensitivity and aggression (Hypothesis 3). Study results suggest a tendency for individuals with high rejection sensitivity to experience higher levels of jealousy, and subsequently have a greater propensity for aggression, than individuals with low rejection sensitivity. Future research that substantiates a link between hypersensitivity to rejection, jealousy, and aggression may provide an avenue for prevention, education, or intervention in reducing aggression within interpersonal relationships.
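    The mediation model tested in the three hypotheses (rejection sensitivity → jealousy → aggression) is conventionally estimated with the product-of-coefficients approach. The sketch below uses simulated illustrative data, not the study's sample; only the sample size of 247 is taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 247  # sample size matching the study

# Simulated illustrative data: rejection sensitivity (x) raises jealousy (m),
# which in turn raises aggression (y). Effect sizes are assumptions.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)

def ols(design, target):
    """Least-squares coefficients for target ~ design (intercept first)."""
    A = np.column_stack([np.ones(len(design)), design])
    return np.linalg.lstsq(A, target, rcond=None)[0]

c_total = ols(x[:, None], y)[1]                     # total effect x -> y (H1)
a = ols(x[:, None], m)[1]                           # x -> m path (H2)
b, c_direct = ols(np.column_stack([m, x]), y)[1:]   # m -> y and direct x -> y
indirect = a * b                                    # mediated effect (H3)

print(f"total={c_total:.2f} direct={c_direct:.2f} indirect={indirect:.2f}")
```

For OLS fits on the same sample, the total effect decomposes exactly into direct plus indirect effects, which is the identity a mediation analysis exploits.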

  19. Sensitivity of the ATLAS experiment to discover the decay H → ττ → ll+4ν of the Standard Model Higgs Boson produced in vector boson fusion

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, Martin

    2011-05-17

    A study of the expected sensitivity of the ATLAS experiment to discover the Standard Model Higgs boson produced via vector boson fusion (VBF) and its decay to H → ττ → ll+4ν is presented. The study is based on simulated proton-proton collisions at a centre-of-mass energy of 14 TeV. For the first time the discovery potential is evaluated in the presence of additional proton-proton interactions (pile-up) to the process of interest in a complete and consistent way. Special emphasis is placed on the development of background estimation techniques to extract the main background processes Z → ττ and tt̄ production using data. The tt̄ background is estimated using a control sample selected with the VBF analysis cuts and the inverted b-jet veto. The dominant background process Z → ττ is estimated using Z → μμ events. Replacing the muons of the Z → μμ event with simulated τ-leptons, Z → ττ events are modelled to high precision. For the replacement of the Z boson decay products a dedicated method based on tracks and calorimeter cells is developed. Without pile-up, a discovery potential of 3σ to 3.4σ is found for Higgs boson masses in the considered range starting at 115 GeV; with pile-up, the sensitivity decreases to 1.7σ to 1.9σ, mainly caused by the worse resolution of the reconstructed missing transverse energy.

  20. Influence of selecting secondary settling tank sub-models on the calibration of WWTP models – A global sensitivity analysis using BSM2

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores Alsina, Xavier; Sin, Gürkan

    2014-01-01

    This study investigates the sensitivity of wastewater treatment plant (WWTP) model performance to the selection of one-dimensional secondary settling tank (1-D SST) models with first-order and second-order mathematical structures. We performed a global sensitivity analysis (GSA) on the benchmark simulation model No. 2 with the input uncertainty associated with the biokinetic parameters in the activated sludge model No. 1 (ASM1), a fractionation parameter in the primary clarifier, and the settling parameters in the SST model. Based on the parameter sensitivity rankings obtained in this study, the settling parameters were found to be as influential as the biokinetic parameters on the uncertainty of WWTP model predictions, particularly for biogas production and treated water quality. However, the sensitivity measures were found to be dependent on the 1-D SST models selected. Accordingly, we suggest...

  1. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
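    The third method listed, the sensitivity index, is typically computed by sweeping one input over its range while holding the others at baseline and normalizing the resulting output spread. The corrosion-rate function and parameter ranges below are simplified illustrative assumptions, not any of the nine published models:

```python
import numpy as np

# Hypothetical simplified corrosion-rate function (illustrative only): rate rises
# with chloride content and falls with concrete resistivity and elapsed time.
def corrosion_rate(resistivity_kohm_cm, chloride_pct, duration_yr):
    return 10.0 * chloride_pct / (resistivity_kohm_cm * np.sqrt(duration_yr))

# Assumed input ranges and baseline values (for illustration):
ranges = {
    "resistivity_kohm_cm": (5.0, 50.0),
    "chloride_pct": (0.1, 1.0),
    "duration_yr": (1.0, 50.0),
}
baseline = {"resistivity_kohm_cm": 20.0, "chloride_pct": 0.4, "duration_yr": 10.0}

def sensitivity_index(param):
    """SI = (y_max - y_min) / y_max, varying one parameter over its range."""
    lo, hi = ranges[param]
    ys = [corrosion_rate(**{**baseline, param: v}) for v in np.linspace(lo, hi, 101)]
    return (max(ys) - min(ys)) / max(ys)

for p in ranges:
    print(p, round(sensitivity_index(p), 3))
```

An index near 1 means the output sweeps through almost its full magnitude as that input varies, which is how inputs such as chloride content and resistivity end up ranked as highly influential.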

  2. Privacy Protection Method for Multiple Sensitive Attributes Based on Strong Rule

    Directory of Open Access Journals (Sweden)

    Tong Yi

    2015-01-01

    At present, most studies on data publishing consider only a single sensitive attribute, and works on multiple sensitive attributes are still few. Moreover, almost all existing studies on multiple sensitive attributes have not taken the inherent relationship between sensitive attributes into account, so an adversary can use background knowledge about this relationship to attack the privacy of users. This paper presents an attack model based on the association rules between sensitive attributes and, accordingly, presents a data publication method for multiple sensitive attributes. Through proof and analysis, the new model can prevent an adversary from using background knowledge about association rules to attack privacy, and it is able to release high-quality information. Finally, this paper verifies the above conclusions with experiments.

  3. An analysis, sensitivity and prediction of winter fog events using FASP model over Indo-Gangetic plains, India

    Science.gov (United States)

    Srivastava, S. K., Sr.; Sharma, D. A.; Sachdeva, K.

    2017-12-01

    The Indo-Gangetic plains of India experience severe fog conditions during the peak winter months of December and January every year. In this paper an attempt has been made to analyze the spatial and temporal variability of winter fog over the Indo-Gangetic plains. Further, an attempt has also been made to configure an efficient meso-scale numerical weather prediction model using different parameterization schemes and to develop a forecasting tool for prediction of fog during winter months over the Indo-Gangetic plains. The study revealed that an alarming increasing trend of fog frequency prevails over many locations of the IGP. Hot-spot and cluster analyses were conducted to identify the zones most prone to fog, using GIS and inferential statistical tools respectively. Hot spots on average experience fog on 68.27% of days, followed by moderate and cold spots with 48.03% and 21.79% respectively. The study proposes a new FASP (Fog Analysis, Sensitivity and Prediction) model for the overall analysis and prediction of fog at a particular location and period over the IGP. In the first phase of this model, long-term climatological fog data of a location are analyzed to determine their characteristics and prevailing trend using various advanced statistical techniques. In the second phase, a sensitivity test is conducted with different combinations of parameterization schemes to determine the most suitable combination for fog simulation over a particular location and period. In the third and final phase, an ARIMA model is first used to predict the number of fog days in the future; thereafter, the numerical model is used to predict the various meteorological parameters favourable for a fog forecast, and finally the hybrid model is used for the fog forecast over the study location. The results of the FASP model are validated against actual ground-based fog data using statistical tools.
    A forecast fog-gram generated using the hybrid model during January 2017 shows highly encouraging results for fog occurrence/non-occurrence between

  4. Sensitivity and Interaction Analysis Based on Sobol’ Method and Its Application in a Distributed Flood Forecasting Model

    Directory of Open Access Journals (Sweden)

    Hui Wan

    2015-06-01

    Sensitivity analysis is a fundamental approach to identifying the most significant and sensitive parameters, helping us to understand complex hydrological models, particularly time-consuming distributed flood forecasting models based on complicated theory with numerous parameters. Based on Sobol's method, this study compared the sensitivity and interactions of distributed flood forecasting model parameters with and without accounting for correlation. Four objective functions: (1) Nash–Sutcliffe efficiency (ENS); (2) water balance coefficient (WB); (3) peak discharge efficiency (EP); and (4) time to peak efficiency (ETP) were applied to the Liuxihe model with hourly rainfall-runoff data collected in the Nanhua Creek catchment, Pearl River, China. Contrastive results for the sensitivity and interaction analysis were also illustrated among small, medium, and large flood magnitudes. Results demonstrated that the choice of objective function had no effect on the sensitivity classification, while it had great influence on the sensitivity ranking for both uncorrelated and correlated cases. The Liuxihe model behaved and responded uniquely to various flood conditions. The results also indicated that pairwise parameter interactions made a non-negligible contribution to the model output variance. Parameters with high first-order or total-order sensitivity indices presented correspondingly high second-order sensitivity indices and correlation coefficients with other parameters. Without considering parameter correlations, the variance contributions of highly sensitive parameters might be underestimated and those of normally sensitive parameters might be overestimated. This research lays a foundation for improving the understanding of complex model behavior.
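    The first-order and total-order Sobol' indices discussed above are commonly estimated from two independent sample matrices plus "pick-and-freeze" recombinations. The sketch below applies standard Saltelli/Jansen estimators to a small stand-in function with an interaction term; the function is an illustrative assumption, not the Liuxihe model:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20_000, 3

# Stand-in model with an interaction between x1 and x3 (illustrative only):
def model(x):
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

# Two independent sample matrices on the unit cube (Saltelli-style sampling)
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # matrix A with column i taken from B
    fABi = model(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)        # first-order (Saltelli 2010)
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total-order (Jansen 1999)

for i in range(d):
    print(f"x{i+1}: S1={S1[i]:.2f} ST={ST[i]:.2f}")
```

A gap between a parameter's total-order and first-order index signals interaction effects, which is the "non-negligible contribution" of pairwise interactions the abstract reports.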

  5. Modeling of a Low-Background Spectroscopic Position-Sensitive Neutron Detector

    Energy Technology Data Exchange (ETDEWEB)

    Postovarova, Daria; Evsenin, Alexey; Gorshkov, Igor; Kuznetsov, Andrey; Osetrov, Oleg; Vakhtin, Dmitry; Yurmanov, Pavel [V.G. Khlopin Radium Institute, 194021, 28, 2nd Murinsky pr., Saint-Petersburg (Russian Federation)

    2011-12-13

    A new low-background spectroscopic direction-sensitive neutron detector that would allow one to reduce the neutron background component in passive and active neutron detection techniques is proposed. The detector is based on thermal neutron detectors surrounded by a fast neutron scintillation detector, which serves at the same time as a neutron moderator. Direction sensitivity is achieved by coincidence/anticoincidence analysis between different parts of the scintillator. Results of mathematical modeling of several detector configurations are presented.

  6. Modeling of a Low-Background Spectroscopic Position-Sensitive Neutron Detector

    International Nuclear Information System (INIS)

    Postovarova, Daria; Evsenin, Alexey; Gorshkov, Igor; Kuznetsov, Andrey; Osetrov, Oleg; Vakhtin, Dmitry; Yurmanov, Pavel

    2011-01-01

    A new low-background spectroscopic direction-sensitive neutron detector that would allow one to reduce the neutron background component in passive and active neutron detection techniques is proposed. The detector is based on thermal neutron detectors surrounded by a fast neutron scintillation detector, which serves at the same time as a neutron moderator. Direction sensitivity is achieved by coincidence/anticoincidence analysis between different parts of the scintillator. Results of mathematical modeling of several detector configurations are presented.

  7. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long time analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of containment of nuclear power plant. However, efficiency of the design process should be also taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on sensitivity of finite element model of prestressed concrete containment to changes in geometry, loads and other factors is presented. Importance of steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials as well as density of finite elements mesh is assessed in the main stages of life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases simplified modeling process can lead to significant reduction of work time without degradation of the results.

  8. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    effects and the one-at-a-time approach (O.A.T); and (ii), we applied Sobol's global sensitivity analysis method which is based on variance decompositions. Results illustrate that ψm (maximum sorption rate of mobile colloids), kdmc (solute desorption rate from mobile colloids), and Ks (saturated hydraulic conductivity) are the most sensitive parameters with respect to the contaminant travel time. The analyses indicate that this new module is able to simulate the colloid-facilitated contaminant transport. However, validations under laboratory conditions are needed to confirm the occurrence of the colloid transport phenomenon and to understand model prediction under non-saturated soil conditions. Future work will involve monitoring of the colloidal transport phenomenon through soil column experiments. The anticipated outcome will provide valuable information on the understanding of the dominant mechanisms responsible for colloidal transports, colloid-facilitated contaminant transport and, also, the colloid detachment/deposition processes impacts on soil hydraulic properties. References: Šimůnek, J., C. He, L. Pang, & S. A. Bradford, Colloid-Facilitated Solute Transport in Variably Saturated Porous Media: Numerical Model and Experimental Verification, Vadose Zone Journal, 2006, 5, 1035-1047 Šimůnek, J., M. Šejna, & M. Th. van Genuchten, The C-Ride Module for HYDRUS (2D/3D) Simulating Two-Dimensional Colloid-Facilitated Solute Transport in Variably-Saturated Porous Media, Version 1.0, PC Progress, Prague, Czech Republic, 45 pp., 2012.

  9. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and vertical distribution are crucial to the simulated concentration and deposition fields, as is also the choice of Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and the Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence was also evident over the Mediterranean, the North Sea and Baltic Sea, and some influence is seen over continental Europe, while this difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3 / OH oxidation mechanism.
    The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  10. Gamma ray induced sensitization in CaSO4:Dy and competing trap model

    International Nuclear Information System (INIS)

    Nagpal, J.S.; Kher, R.K.; Gangadharan, P.

    1979-01-01

    Gamma-ray-induced sensitization in CaSO4:Dy has been compared (by measurement of TL glow curves) for different temperatures during irradiation (25°, 120° and 250°C). Enhanced sensitization at elevated temperatures seems to support the competing trap model for supralinearity and sensitization in CaSO4:Dy. (author)

  11. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    OpenAIRE

    Marrero, C.; Jorba, O.; Cuevas, E.; Baldasano, J. M.

    2009-01-01

    In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained wind and intense gusts experienced caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting Model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, d...

  12. Sensitivity of tsunami evacuation modeling to direction and land cover assumptions

    Science.gov (United States)

    Schmidtlein, Mathew C.; Wood, Nathan J.

    2015-01-01

    Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, there has been insufficient attention paid to understanding model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations and use the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of modeling to the direction of movement by comparing standard safety-to-hazard evacuation times to hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times but the strong relationship to the hazard-to-safety evacuation times, slightly conservative bias, and shorter processing times of the safety-to-hazard approach make it the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations with evacuation times lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.
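    The second experiment above, a Monte Carlo sweep over land-cover speed conservation values (SCVs), can be sketched on a single evacuation path. The segment lengths, base walking speed, and SCV ranges below are illustrative assumptions, not Seward's data, and anisotropy (slope effects) is ignored for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical evacuation path crossing three land-cover types (metres):
segments = {"road": 400.0, "wetland": 150.0, "beach": 250.0}
base_speed = 1.22  # m/s, an assumed slow walking speed

def travel_time(scv):
    """Evacuation time = sum over segments of length / (base speed * SCV)."""
    return sum(length / (base_speed * scv[cover]) for cover, length in segments.items())

# Monte Carlo over SCVs (fractions of base speed retained on each cover type):
times = []
for _ in range(1000):
    scv = {"road": rng.uniform(0.8, 1.0),
           "wetland": rng.uniform(0.1, 0.5),
           "beach": rng.uniform(0.5, 0.9)}
    times.append(travel_time(scv))

times = np.array(times)
print(f"evacuation time: median={np.median(times):.0f} s "
      f"(5th-95th pct: {np.percentile(times, 5):.0f}-{np.percentile(times, 95):.0f} s)")
```

The spread of the resulting travel-time distribution shows why populations whose evacuation times sit just below the expected wave arrival time still warrant concern, especially on low-SCV cover such as wetlands or beach.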

  13. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of the two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1-D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  14. Sensitivity study of cloud/radiation interaction using a second order turbulence radiative-convective model

    International Nuclear Information System (INIS)

    Kao, C.Y.J.; Smith, W.S.

    1993-01-01

    A high-resolution one-dimensional version of a second-order turbulence convective/radiative model, developed at the Los Alamos National Laboratory, was used to conduct a sensitivity study of a stratocumulus cloud deck, based on data taken at San Nicolas Island during the intensive field observation marine stratocumulus phase of the First International Satellite Cloud Climatology Program (ISCCP) Regional Experiment (FIRE IFO), conducted during July 1987. Initial profiles for liquid water potential temperature and total water mixing ratio were abstracted from the FIRE data. The dependence of the diurnal behavior of liquid water content, cloud top height, and cloud base height was examined for variations in subsidence rate, sea surface temperature, and initial inversion strength. The modelled diurnal variation in the column-integrated liquid water agrees quite well with the observed data for the case of low subsidence. The modelled diurnal behavior of the cloud top and base heights shows qualitative agreement with the FIRE data, although the overall height of the cloud layer is about 200 meters too high.

  15. Sensitivity Analysis of an Agent-Based Model of Culture's Consequences for Trade

    NARCIS (Netherlands)

    Burgers, S.L.G.E.; Jonker, C.M.; Hofstede, G.J.; Verwaart, D.

    2010-01-01

    This paper describes the analysis of an agent-based model’s sensitivity to changes in parameters that describe the agents’ cultural background, relational parameters, and parameters of the decision functions. As agent-based models may be very sensitive to small changes in parameter values, it is of

  16. Sensitivity improvement for correlations involving arginine side-chain Nε/Hε resonances in multi-dimensional NMR experiments using broadband 15N 180° pulses

    International Nuclear Information System (INIS)

    Iwahara, Junji; Clore, G. Marius

    2006-01-01

    Due to practical limitations in available 15N rf field strength, imperfections in 15N 180° pulses arising from off-resonance effects can result in significant sensitivity loss, even if the chemical shift offset is relatively small. Indeed, in multi-dimensional NMR experiments optimized for protein backbone amide groups, cross-peaks arising from the Arg guanidino 15Nε (∼85 ppm) are highly attenuated by the presence of multiple INEPT transfer steps. To improve the sensitivity for correlations involving Arg Nε-Hε groups, we have incorporated 15N broadband 180° pulses into 3D 15N-separated NOE-HSQC and HNCACB experiments. Two 15N WURST pulses incorporated at the INEPT transfer steps of the 3D 15N-separated NOE-HSQC pulse sequence resulted in a ∼1.5-fold increase in sensitivity for the Arg Nε-Hε signals at 800 MHz. For the 3D HNCACB experiment, five 15N Abramovich-Vega pulses were incorporated for broadband inversion and refocusing, and the sensitivity of Arg 1Hε-15Nε-13Cγ/13Cδ correlation peaks was enhanced by a factor of ∼1.7 at 500 MHz. These experiments eliminate the necessity for additional experiments to assign Arg 1Hε and 15Nε resonances. In addition, the increased sensitivity afforded for the detection of NOE cross-peaks involving correlations with the 15Nε/1Hε of Arg in 3D 15N-separated NOE experiments should prove to be very useful for structural analysis of interactions involving Arg side-chains.

  17. Environmental sensitivity: equivocal illness in the context of place.

    Science.gov (United States)

    Fletcher, Christopher M

    2006-03-01

    This article presents a phenomenologically oriented description of the interaction of illness experience, social context, and place. This is used to explore an outbreak of environmental sensitivities in Nova Scotia, Canada. Environmental Sensitivity (ES) is a popular designation for bodily reactions to mundane environmental stimuli that are insignificant for most people. Mainstream medicine cannot support the popular models of this disease process and consequently illness experience is subject to ambiguity and contestation. As an 'equivocal illness', ES generates considerable social action around the nature, meaning and validity of suffering. Sense of place plays an important role in this process. In this case, the meanings that accrue to illness experience and that produce salient popular disease etiology are grounded in the experience and social construction of the Nova Scotian landscape over time. Shifting representations of place are reflected in illness experience and the meanings that arise around illness are emplaced in landscape.

  18. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    Science.gov (United States)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land-use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso- and global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all of these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to the road, the roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set.
    The calibrated parameters from this optimization experiment can be used for further model

  19. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite shared clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.

  20. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more reliable distribution of input influences and a more trustworthy interpretation of the mathematical model results.
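    The advantage of Sobol-sequence sampling over plain Monte Carlo for multidimensional integration can be illustrated on a small test integrand with a known value. The integrand and dimension below are illustrative assumptions, not the Unified Danish Eulerian Model setup:

```python
import numpy as np
from scipy.stats import qmc

# Test integrand over the 4-D unit cube; its exact integral is (e - 1)^4.
def f(x):
    return np.exp(x.sum(axis=1))

d, n = 4, 2 ** 12  # power-of-two sample size suits Sobol sequences
exact = (np.e - 1.0) ** d

rng = np.random.default_rng(4)
mc_est = f(rng.uniform(size=(n, d))).mean()   # plain Monte Carlo estimate

sobol = qmc.Sobol(d=d, scramble=True, seed=4)
qmc_est = f(sobol.random(n)).mean()           # quasi-Monte Carlo (Sobol sequence)

print(f"exact={exact:.4f}  MC err={abs(mc_est - exact):.4f}  "
      f"QMC err={abs(qmc_est - exact):.4f}")
```

With a smooth integrand, the quasi-Monte Carlo error typically shrinks close to O(1/n) rather than the O(1/√n) of plain Monte Carlo, which is what makes small-valued sensitivity indices tractable to estimate.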

  1. In silico modeling predicts drug sensitivity of patient-derived cancer cells.

    Science.gov (United States)

    Pingle, Sandeep C; Sultana, Zeba; Pastorino, Sandra; Jiang, Pengfei; Mukthavaram, Rajesh; Chao, Ying; Bharati, Ila Sri; Nomura, Natsuko; Makale, Milan; Abbasi, Taher; Kapoor, Shweta; Kumar, Ansu; Usmani, Shahabuddin; Agrawal, Ashish; Vali, Shireen; Kesari, Santosh

    2014-05-21

    Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling ("omics") data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.

  2. Neutron and gamma sensitivities of self-powered detectors: Monte Carlo modelling

    Energy Technology Data Exchange (ETDEWEB)

    Vermeeren, Ludo [SCK-CEN, Nuclear Research Centre, Boeretang 200, B-2400 Mol, (Belgium)

    2015-07-01

    This paper deals with the development of a detailed Monte Carlo approach for the calculation of the absolute neutron sensitivity of SPNDs, which makes use of the MCNP code. We will explain the calculation approach, including the activation and beta emission steps, the gamma-electron interactions, the charge deposition in the various detector parts and the effect of the space charge field in the insulator. The model can also be applied for the calculation of the gamma sensitivity of self-powered detectors and for the radiation-induced currents in signal cables. The model yields detailed information on the various contributions to the sensor currents, with distinct response times. Results for the neutron sensitivity of various types of SPNDs are in excellent agreement with experimental data obtained at the BR2 research reactor. For typical neutron to gamma flux ratios, the calculated gamma induced SPND currents are significantly lower than the neutron induced currents. The gamma sensitivity depends very strongly upon the immediate detector surroundings and on the gamma spectrum. Our calculation method opens the way to a reliable on-line determination of the absolute in-pile thermal neutron flux. (authors)

  3. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system.

    Science.gov (United States)

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
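The Morris screening step mentioned in this record can be sketched as follows. The three-parameter function below is a hypothetical stand-in for the expensive dose-response model, and the trajectory design is simplified to one-at-a-time perturbations from random base points; the elementary-effect statistic mu* ranks inputs by influence.

```python
import random

def model(x):
    # hypothetical stand-in for an expensive biologically based model
    return 3.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

random.seed(0)
k, r, delta = 3, 50, 0.1   # inputs, repetitions, perturbation size
ee = [[] for _ in range(k)]

for _ in range(r):
    x = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
    base = model(x)
    for i in range(k):
        xp = list(x)
        xp[i] += delta
        # elementary effect of input i at this base point
        ee[i].append((model(xp) - base) / delta)

# mu* = mean absolute elementary effect; larger means more influential
mu_star = [sum(abs(e) for e in es) / r for es in ee]
ranking = sorted(range(k), key=lambda i: -mu_star[i])
print(ranking)  # → [0, 1, 2]
```

Inputs that screen out as influential here (the first two) would then be passed to the more expensive Gaussian emulation step for full variance-based indices.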

  4. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    Directory of Open Access Journals (Sweden)

    Annie eLumen

    2015-05-01

    Full Text Available A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.

  5. Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions

    Directory of Open Access Journals (Sweden)

    Sebastian Bab

    2012-04-01

    Full Text Available The concept of dynamic coalitions (also virtual organizations) describes the temporary interconnection of autonomous agents, who share information or resources in order to achieve a common goal. Through modern technologies these coalitions may form across company, organization and system borders. Therefore, questions of access control and security are of vital significance for the architectures supporting these coalitions. In this paper, we present our first steps towards a formal framework for modeling and verifying the design of privacy-sensitive dynamic coalition infrastructures and their processes. In order to do so, we extend existing dynamic-coalition modeling approaches with an access-control concept, which manages access to information through policies. Furthermore, we consider the processes underlying these coalitions and present first work on formalizing them. As a result, we illustrate the usefulness of the Abstract State Machine (ASM) method for this task. We demonstrate a formal treatment of privacy-sensitive dynamic coalitions by two example ASMs which model certain access-control situations. A logical consideration of these ASMs can lead to a better understanding and a verification of the ASMs according to the aspired specification.

  6. Ice-sheet model sensitivities to environmental forcing and their use in projecting future sea level (the SeaRISE project)

    OpenAIRE

    Bindschadler, Robert A.; Nowicki, Sophie; Abe-Ouchi, Ayako; Aschwanden, Andy; Choi, Hyeungu; Fastook, Jim; Granzow, Glen; Greve, Ralf; Gutowski, Gail; Herzfeld, Ute; Jackson, Charles; Johnson, Jesse; Khroulev, Constantine; Levermann, Anders; Lipscomb, William H.

    2013-01-01

    Ten ice-sheet models are used to study sensitivity of the Greenland and Antarctic ice sheets to prescribed changes of surface mass balance, sub-ice-shelf melting and basal sliding. Results exhibit a large range in projected contributions to sea-level change. In most cases, the ice volume above flotation lost is linearly dependent on the strength of the forcing. Combinations of forcings can be closely approximated by linearly summing the contributions from single forcing experiments, suggesting...

  7. Application of perturbation theory to a two-channel model for sensitivity calculations in PWR cores

    International Nuclear Information System (INIS)

    Oliveira, A.C.J.G. de; Andrade Lima, F.R. de

    1989-01-01

    The present work is an application of perturbation theory (matricial formalism) to a simplified two-channel model for sensitivity calculations in PWR cores. Expressions for some sensitivity coefficients of thermohydraulic interest were developed from the proposed model. The code CASNUR.FOR was written in FORTRAN to evaluate these sensitivity coefficients. The comparison of results obtained from the matricial formalism of perturbation theory with those obtained directly from the two-channel model makes evident the efficiency and potential of this perturbation method for sensitivity calculations in nuclear reactor cores. (author) [pt

  8. Cross-section sensitivity and uncertainty analysis of the FNG copper benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kodeli, I., E-mail: ivan.kodeli@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Kondo, K. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany); Japan Atomic Energy Agency, Rokkasho-mura (Japan); Perel, R.L. [Racah Institute of Physics, Hebrew University of Jerusalem, IL-91904 Jerusalem (Israel); Fischer, U. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany)

    2016-11-01

    A neutronics benchmark experiment on a copper assembly was performed from late 2014 to early 2015 at the 14-MeV Frascati neutron generator (FNG) of ENEA Frascati, with the objective of providing the experimental database required for the validation of the copper nuclear data relevant for ITER design calculations, including the related uncertainties. The paper presents the pre- and post-analysis of the experiment performed using cross-section sensitivity and uncertainty codes, both deterministic (SUSD3D) and Monte Carlo (MCSEN5). Cumulative reaction rates and neutron flux spectra, their sensitivity to the cross sections, as well as the corresponding uncertainties were estimated for different selected detector positions up to ∼58 cm in the copper assembly. This permitted, in the pre-analysis phase, optimizing the geometry, the detector positions and the choice of activation reactions, and, in the post-analysis phase, interpreting the results of the measurements and the calculations, concluding on the quality of the relevant nuclear cross-section data, and estimating the uncertainties in the calculated nuclear responses and fluxes. Large uncertainties in the calculated reaction rates and neutron spectra of up to 50%, rarely observed at this level in benchmark analyses using today's nuclear data, were predicted, particularly for fast reactions. Observed C/E (dis)agreements with values as low as 0.5 partly confirm these predictions. The benchmark results are therefore expected to contribute to the improvement of both cross-section and covariance data evaluations.

  9. Climate forcings and climate sensitivities diagnosed from atmospheric global circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Bruce T. [Boston University, Department of Geography and Environment, Boston, MA (United States); Knight, Jeff R.; Ringer, Mark A. [Met Office Hadley Centre, Exeter (United Kingdom); Deser, Clara; Phillips, Adam S. [National Center for Atmospheric Research, Boulder, CO (United States); Yoon, Jin-Ho [University of Maryland, Cooperative Institute for Climate and Satellites, Earth System Science Interdisciplinary Center, College Park, MD (United States); Cherchi, Annalisa [Centro Euro-Mediterraneo per i Cambiamenti Climatici, and Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy)

    2010-12-15

    Understanding the historical and future response of the global climate system to anthropogenic emissions of radiatively active atmospheric constituents has become a timely and compelling concern. At present, however, there are uncertainties in: the total radiative forcing associated with changes in the chemical composition of the atmosphere; the effective forcing applied to the climate system resulting from a (temporary) reduction via ocean-heat uptake; and the strength of the climate feedbacks that subsequently modify this forcing. Here a set of analyses derived from atmospheric general circulation model simulations are used to estimate the effective and total radiative forcing of the observed climate system due to anthropogenic emissions over the last 50 years of the twentieth century. They are also used to estimate the sensitivity of the observed climate system to these emissions, as well as the expected change in global surface temperatures once the climate system returns to radiative equilibrium. Results indicate that estimates of the effective radiative forcing and total radiative forcing associated with historical anthropogenic emissions differ across models. In addition estimates of the historical sensitivity of the climate to these emissions differ across models. However, results suggest that the variations in climate sensitivity and total climate forcing are not independent, and that the two vary inversely with respect to one another. As such, expected equilibrium temperature changes, which are given by the product of the total radiative forcing and the climate sensitivity, are relatively constant between models, particularly in comparison to results in which the total radiative forcing is assumed constant. Implications of these results for projected future climate forcings and subsequent responses are also discussed. (orig.)

  10. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya

    2017-10-03

    Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.

  11. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya; Kalligiannaki, Evangelia; Tempone, Raul

    2017-01-01

    Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.
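The parametrized-SDE approach in the two records above can be illustrated with a minimal sketch. An Ornstein-Uhlenbeck process is used here as a generic stand-in for a fluctuating forecast-error model (the actual model family and likelihood in the paper are not reproduced); the mean-reversion parameter is recovered from the implied AR(1) regression, a simple proxy for the approximate maximum likelihood step on time-correlated data.

```python
import math
import random

random.seed(1)
theta, mu, sigma, dt, n = 1.5, 0.0, 0.4, 0.01, 50_000

# simulate an Ornstein-Uhlenbeck path with the Euler-Maruyama scheme:
# dX = theta * (mu - X) dt + sigma dW
x = [0.0]
for _ in range(n):
    x.append(x[-1] + theta * (mu - x[-1]) * dt
             + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))

# the discretized process is AR(1): X_{t+1} ≈ b * X_t + noise,
# with b ≈ 1 - theta * dt, so a regression slope recovers theta
xs, ys = x[:-1], x[1:]
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((a - mx) * (c - my) for a, c in zip(xs, ys))
     / sum((a - mx) ** 2 for a in xs))
theta_hat = (1.0 - b) / dt
print(f"theta_hat ≈ {theta_hat:.2f}")
```

Parameter sensitivity in this setting can then be probed by re-estimating `theta_hat` on perturbed or resampled paths and inspecting the spread of the estimates.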

  12. General classification and analysis of neutron β-decay experiments

    International Nuclear Information System (INIS)

    Gudkov, V.; Greene, G.L.; Calarco, J.R.

    2006-01-01

    A general analysis of the sensitivities of neutron β-decay experiments to manifestations of possible interaction beyond the standard model is carried out. In a consistent fashion, we take into account all known radiative and recoil corrections arising in the standard model. This provides a description of angular correlations in neutron decay in terms of one parameter, which is accurate to the level of ∼10 -5 . Based on this general expression, we present an analysis of the sensitivities to new physics for selected neutron decay experiments. We emphasize that the usual parametrization of experiments in terms of the tree-level coefficients a,A, and B is inadequate when the experimental sensitivities are at the same or higher level relative to the size of the corrections to the tree-level description

  13. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  14. On Parametric Sensitivity of Reynolds-Averaged Navier-Stokes SST Turbulence Model: 2D Hypersonic Shock-Wave Boundary Layer Interactions

    Science.gov (United States)

    Brown, James L.

    2014-01-01

    Examined is sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and Axisymmetric Hypersonic Shock Wave Turbulent Boundary Layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence approach.

  15. The Role of Sea Ice in 2 x CO2 Climate Model Sensitivity. Part 2; Hemispheric Dependencies

    Science.gov (United States)

    Rind, D.; Healy, R.; Parkinson, C.; Martinson, D.

    1997-01-01

    How sensitive are doubled CO2 simulations to GCM control-run sea ice thickness and extent? This issue is examined in a series of 10 control-run simulations with different sea ice and corresponding doubled CO2 simulations. Results show that with increased control-run sea ice coverage in the Southern Hemisphere, temperature sensitivity with climate change is enhanced, while there is little effect on temperature sensitivity of (reasonable) variations in control-run sea ice thickness. In the Northern Hemisphere the situation is reversed: sea ice thickness is the key parameter, while (reasonable) variations in control-run sea ice coverage are of less importance. In both cases, the quantity of sea ice that can be removed in the warmer climate is the determining factor. Overall, the Southern Hemisphere sea ice coverage change had a larger impact on global temperature, because Northern Hemisphere sea ice was sufficiently thick to limit its response to doubled CO2, and sea ice changes generally occurred at higher latitudes, reducing the sea ice-albedo feedback. In both these experiments and earlier ones in which sea ice was not allowed to change, the model displayed a sensitivity of -0.02 C global warming per percent change in Southern Hemisphere sea ice coverage.

  16. Searches for the Standard Model Higgs boson decay to $\tau$ lepton pairs at the CMS experiment

    CERN Document Server

    Pyskir, Andrzej

    2017-01-01

    We present results of searches for the Standard Model Higgs boson decaying to tau lepton pairs at the CMS experiment with data collected during the LHC Run 1. We also present some insight into the analysis with Run 2 data. CP sensitive variables are described and an experimental method of probing CP of the Higgs boson is presented.

  17. Sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)

    user

    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  18. Sensitivity analysis of efficiency thermal energy storage on selected rock mass and grout parameters using design of experiment method

    International Nuclear Information System (INIS)

    Wołoszyn, Jerzy; Gołaś, Andrzej

    2014-01-01

    Highlights: • The paper proposes a new methodology for sensitivity studies of underground thermal energy storage. • Using the MDF model and the DoE technique significantly shortens calculation time. • Calculation of one time step took approximately 57 s. • The sensitivity study covers five thermo-physical parameters. • The thermal conductivities of the rock mass and the grout material have a significant impact on efficiency. - Abstract: The aim of this study was to investigate the influence of selected parameters on the efficiency of underground thermal energy storage. In this paper, besides thermal conductivity, the effect of such parameters as the specific heat and density of the rock mass and the thermal conductivity and specific heat of the grout material was investigated. Implementation of this objective requires the use of an efficient computational method. The aim of the research was achieved by using a new numerical model, Multi Degree of Freedom (MDF), as developed by the authors, together with Design of Experiment (DoE) techniques with a response surface. The presented methodology can significantly reduce the time needed to determine the effect of various parameters on the efficiency of underground thermal energy storage. Preliminary results of the research confirmed that the thermal conductivity of the rock mass has the greatest impact on the efficiency of underground thermal energy storage, and that the other parameters also play quite significant roles.
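The DoE main-effect calculation used in studies like this one can be sketched with a two-level full factorial design. The efficiency function below is a hypothetical linear response standing in for the MDF thermal-storage simulations, with factors coded to -1/+1 levels; the main effect of each factor is the mean response at its high level minus the mean at its low level.

```python
from itertools import product

# hypothetical response: storage efficiency vs. three coded factors
# (rock conductivity, rock specific heat, grout conductivity)
def efficiency(k_rock, c_rock, k_grout):
    return 0.5 + 0.30 * k_rock + 0.05 * c_rock + 0.15 * k_grout

levels = [-1.0, 1.0]
runs = [(a, b, c, efficiency(a, b, c))
        for a, b, c in product(levels, repeat=3)]  # 2^3 = 8 runs

# main effect of factor idx: mean(response | high) - mean(response | low)
def main_effect(idx):
    hi = [r[3] for r in runs if r[idx] > 0]
    lo = [r[3] for r in runs if r[idx] < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [round(main_effect(i), 2) for i in range(3)]
print(effects)  # → [0.6, 0.1, 0.3]: rock conductivity dominates
```

In the actual study the responses come from MDF simulation runs rather than a closed-form function, and a response surface is fitted over the design points; the ranking logic is the same.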

  19. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results; RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for the AnnAGNPS model's parameter selection and calibration. The runoff simulation results also prove that the perturbation-based sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrology simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
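The perturbation method used in this record can be sketched as a relative sensitivity index S = (ΔO/O)/(ΔP/P): perturb one parameter by a small relative step and measure the relative change in the output. The toy runoff function and parameter names below reuse the record's symbols (CN, LS) purely as illustrative stand-ins, not the actual AnnAGNPS equations.

```python
def sensitivity_index(model, params, name, rel_step=0.1):
    """Relative sensitivity via a one-at-a-time perturbation:
    S = (delta_output / output) / (delta_param / param)."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] *= (1.0 + rel_step)
    return ((model(perturbed) - base) / base) / rel_step

# hypothetical toy runoff model: quadratic in CN, linear in LS
toy = lambda p: p["CN"] ** 2 * p["LS"]
p0 = {"CN": 70.0, "LS": 1.2}

s_cn = sensitivity_index(toy, p0, "CN")
s_ls = sensitivity_index(toy, p0, "LS")
print(round(s_cn, 2), round(s_ls, 2))  # ≈ 2.1 1.0
```

A parameter with |S| near or above 1, like CN here, would be classified as sensitive and prioritized during calibration; values near 0 mark insensitive parameters.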

  20. Modeling Lake Storage Dynamics to support Arctic Boreal Vulnerability Experiment (ABoVE)

    Science.gov (United States)

    Vimal, S.; Lettenmaier, D. P.; Smith, L. C.; Smith, S.; Bowling, L. C.; Pavelsky, T.

    2017-12-01

    The Arctic and Boreal Zone (ABZ) of Canada and Alaska includes vast areas of permafrost, lakes, and wetlands. Permafrost thawing in this area is expected to increase due to the projected rise of temperature caused by climate change. Over the long term, this may reduce overall surface water area, but in the near term, the opposite is being observed, with rising paludification (lake/wetland expansion). One element of NASA's ABoVE field experiment is observations of lake and wetland extent and water-surface elevations using NASA's AirSWOT airborne interferometric radar, accompanied by a high-resolution camera. One use of the water-surface elevation (WSE) retrievals will be to constrain model estimates of lake storage dynamics. Here, we compare predictions using the lake dynamics algorithm within the Variable Infiltration Capacity (VIC) land surface scheme. The VIC lake algorithm includes representation of sub-grid topography, where the depth and area of seasonally flooded areas are modeled as a function of topographic wetness index, basin area, and slope. The topography data used are from a new global digital elevation model, MERIT-DEM. We initially set up VIC at sites with varying permafrost conditions (i.e., no permafrost, discontinuous, continuous) in Saskatoon and Yellowknife, Canada, and Toolik Lake, Alaska. We constrained the uncalibrated model with the WSE at the time of the first ABoVE flight, and quantified the model's ability to predict WSE and ΔWSE during the time of the second flight. Finally, we evaluated the sensitivity of the VIC-lakes model and compared the three permafrost conditions. Our results quantify the sensitivity of surface water to permafrost state across the target sites. Furthermore, our evaluation of the lake modeling framework contributes to the modeling and mapping framework for lake and reservoir storage change evaluation globally as part of the SWOT mission, planned for launch in 2021.

  1. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 Enrichment experiment.

    Science.gov (United States)

    De Kauwe, Martin G; Medlyn, Belinda E; Walker, Anthony P; Zaehle, Sönke; Asao, Shinichi; Guenet, Bertrand; Harper, Anna B; Hickler, Thomas; Jain, Atul K; Luo, Yiqi; Lu, Xingjie; Luus, Kristina; Parton, William J; Shu, Shijie; Wang, Ying-Ping; Werner, Christian; Xia, Jianyang; Pendall, Elise; Morgan, Jack A; Ryan, Edmund M; Carrillo, Yolima; Dijkstra, Feike A; Zelikova, Tamara J; Norby, Richard J

    2017-09-01

    Multifactor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date, such models have only been tested against single-factor experiments. We applied 10 TBMs to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m-2 yr-1). Comparison with data highlighted model failures, particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the N cycle models, in N availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change. © 2017 John Wiley & Sons Ltd.

  2. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    Science.gov (United States)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more

  3. On sensitivity of gamma families to the model of nuclear interaction

    International Nuclear Information System (INIS)

    Krys, A.; Tomaszewski, A.; Wrotniak, J.A.

    1980-01-01

    Five different models of nuclear interaction have been used in a Monte Carlo simulation of nuclear and electromagnetic showers in the atmosphere. The gamma families obtained from this simulation were processed in a way analogous to that employed in the analysis of Pamir experimental results. The sensitivity of the observed pattern to the nuclear interaction model assumptions was investigated. Such sensitivity, though not a strong one, was found. In the case of longitudinal (or energetical) family characteristics, the changes in nuclear interaction would have to be really large if they were to be reflected in the experimental data, given all the possibilities of methodical error. The transverse characteristics of gamma families are more sensitive to the assumed transverse momentum distribution, but they feel the longitudinal features of nuclear interaction as well. Additionally, the dependence of the observed family pattern on some methodical effects (resolving power of X-ray film, radial cut-off, and energy underestimation) was tested. (author)

  4. Response to the eruption of Mount Pinatubo in relation to climate sensitivity in the CMIP3 models

    Energy Technology Data Exchange (ETDEWEB)

    Bender, Frida A.M.; Ekman, Annica M.L.; Rodhe, Henning [Stockholm University, Department of Meteorology, Stockholm (Sweden)

    2010-10-15

    The radiative flux perturbations and subsequent temperature responses in relation to the eruption of Mount Pinatubo in 1991 are studied in the ten general circulation models incorporated in the Coupled Model Intercomparison Project, phase 3 (CMIP3), that include a parameterization of volcanic aerosol. Models and observations show decreases in global mean temperature of up to 0.5 K, in response to radiative perturbations of up to 10 W m⁻², averaged over the tropics. The time scale representing the delay between radiative perturbation and temperature response is determined by the slow ocean response, and is estimated to be centered around 4 months in the models. Although the magnitude of the temperature response to a volcanic eruption has previously been used as an indicator of equilibrium climate sensitivity in models, we find these two quantities to be only weakly correlated. This may partly be due to the fact that the size of the volcano-induced radiative perturbation varies among the models. It is found that the magnitude of the modelled radiative perturbation increases with decreasing climate sensitivity, with the exception of one outlying model. Therefore, we scale the temperature perturbation by the radiative perturbation in each model, and use the ratio between the integrated temperature perturbation and the integrated radiative perturbation as a measure of sensitivity to volcanic forcing. This ratio is found to be well correlated with the model climate sensitivity, more sensitive models having a larger ratio. Further, if this correspondence between "volcanic sensitivity" and sensitivity to CO₂ forcing is a feature not only among the models, but also of the real climate system, the alleged linear relation can be used to estimate the real climate sensitivity. The observational value of the ratio signifying volcanic sensitivity is hereby estimated to correspond to an equilibrium climate sensitivity, i.e. equilibrium temperature
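    The integrated-ratio diagnostic described above can be illustrated with a toy zero-dimensional energy-balance model; everything below (forcing shape, parameter values) is an invented sketch, not the paper's setup. For C dT/dt = F(t) - lam*T, the ratio of the time-integrated temperature response to the time-integrated forcing approaches 1/lam once the system has relaxed, which is why the ratio tracks equilibrium sensitivity.

    ```python
    import numpy as np

    def integrated_ratio(lam, C=8.0, f0=-4.0, tau=1.0, dt=1e-3, t_end=30.0):
        """Ratio of time-integrated T response to time-integrated forcing."""
        n = int(t_end / dt)
        t = np.arange(n) * dt
        F = f0 * np.exp(-t / tau)          # decaying volcanic forcing, W m^-2
        T = np.zeros(n)
        for i in range(n - 1):             # explicit Euler step of C dT/dt = F - lam*T
            T[i + 1] = T[i] + dt * (F[i] - lam * T[i]) / C
        return T.sum() / F.sum()           # dt cancels in the ratio of integrals

    ratios = {lam: integrated_ratio(lam) for lam in (0.8, 1.2, 1.6)}
    for lam, r in ratios.items():          # smaller lam = more sensitive = larger ratio
        print(lam, round(r, 3))
    ```

    The ratio for the least sensitive case (lam = 1.6 W m⁻² K⁻¹) comes out close to 1/lam; more sensitive models give systematically larger ratios, mirroring the correlation reported in the abstract.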

  5. Sensitivity analysis for reactivity parameter change of the CREOLE experiment caused by the differences between ENDF-BVII and JENDL neutron cross section evaluations

    International Nuclear Information System (INIS)

    Boulaich, Y.; Bardouni, C.; Elyounoussi, C.; Elbakkari, H.; Boukhal, H.; Erradi, L.; Nacir, B.

    2011-01-01

    In this work, we present our analysis of the reactivity parameter of the CREOLE experiment by using the three-dimensional continuous-energy Monte Carlo code MCNP5 and the latest updated nuclear data evaluations. This experiment, performed in the EOLE critical facility located at CEA-Cadarache, was dedicated to studies of both UO2 and UO2-PuO2 PWR type lattices covering the whole temperature range from 20 °C to 300 °C. We have developed an accurate model of the EOLE reactor to be used by the MCNP5 Monte Carlo code. This model guarantees a high level of fidelity in the description of the different configurations at various temperatures, taking into account their consequences for neutron cross section data and all thermal expansion effects. In this case, the remaining error between calculation and experiment can be attributed mainly to uncertainties in nuclear data. Our own cross section library was constructed by using the NJOY99.259 code with point-wise nuclear data based on the ENDF-BVII, JEFF3.1, JENDL3.3 and JENDL4 evaluation files. The MCNP model was validated through axial and radial fission rate measurements at room and hot temperatures. Calculation-experiment discrepancies in the reactivity parameter were analyzed, and the results show that the JENDL evaluations give the most consistent values. In order to identify the source of the relatively large difference between experiment and calculation due to the ENDF-BVII nuclear data evaluation, the discrepancy in reactivity between the ENDF-BVII and JENDL evaluations was decomposed using a sensitivity and uncertainty analysis technique

  6. Evaluating Weather Research and Forecasting Model Sensitivity to Land and Soil Conditions Representative of Karst Landscapes

    Science.gov (United States)

    Johnson, Christopher M.; Fan, Xingang; Mahmood, Rezaul; Groves, Chris; Polk, Jason S.; Yan, Jun

    2018-03-01

    Due to their particular physiographic and geomorphic characteristics, soil cover, and complex surface-subsurface hydrologic conditions, karst regions produce distinct land-atmosphere interactions. It has been found that floods and droughts over karst regions can be more pronounced than those in non-karst regions following a given rainfall event. Five convective weather events are simulated using the Weather Research and Forecasting model to explore the potential impacts of land-surface conditions on weather simulations over karst regions. Since no existing weather or climate model has the ability to represent karst landscapes, the simulation experiments in this exploratory study consist of a control (default land-cover/soil types) and three land-surface conditions, including barren ground, forest, and sandy soils over the karst areas, which mimic certain karst characteristics. Results from the sensitivity experiments are compared with the control simulation, as well as with the National Centers for Environmental Prediction multi-sensor precipitation analysis Stage-IV data and near-surface atmospheric observations. Mesoscale features of surface energy partition, surface water and energy exchange, the resulting surface-air temperature and humidity, and low-level instability and convective energy are analyzed to investigate the potential land-surface impact on weather over karst regions. We conclude that: (1) barren ground used over karst regions has a pronounced effect on the overall simulation of precipitation. Barren ground provides the overall lowest root-mean-square errors and bias scores in precipitation over the peak-rain periods. Contingency table-based equitable threat and frequency bias scores suggest that the barren and forest experiments are more successful in simulating light to moderate rainfall. Variables dependent on local surface conditions show stronger contrasts between karst and non-karst regions than variables dominated by large-scale synoptic systems; (2) significant
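    The contingency-table scores named above (equitable threat score and frequency bias) have standard definitions in terms of the 2x2 table of forecast/observed event counts; the counts below are made-up example numbers, not the study's data.

    ```python
    def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
        """Gilbert skill score: hits corrected for those expected by chance."""
        total = hits + misses + false_alarms + correct_negatives
        hits_random = (hits + misses) * (hits + false_alarms) / total
        return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

    def frequency_bias(hits, misses, false_alarms):
        """Ratio of forecast event frequency to observed event frequency."""
        return (hits + false_alarms) / (hits + misses)

    ets = equitable_threat_score(50, 20, 30, 900)   # 1 = perfect, <= 0 = no skill
    fb = frequency_bias(50, 20, 30)                 # > 1 means the event is over-forecast
    print(round(ets, 3), round(fb, 3))
    ```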

  7. Sleep fragmentation exacerbates mechanical hypersensitivity and alters subsequent sleep-wake behavior in a mouse model of musculoskeletal sensitization.

    Science.gov (United States)

    Sutton, Blair C; Opp, Mark R

    2014-03-01

    Sleep deprivation, or sleep disruption, enhances pain in human subjects. Chronic musculoskeletal pain is prevalent in our society, and constitutes a tremendous public health burden. Although preclinical models of neuropathic and inflammatory pain demonstrate effects on sleep, few studies focus on musculoskeletal pain. We reported elsewhere in this issue of SLEEP that musculoskeletal sensitization alters sleep of mice. In this study we hypothesize that sleep fragmentation during the development of musculoskeletal sensitization will exacerbate subsequent pain responses and alter sleep-wake behavior of mice. This is a preclinical study using C57BL/6J mice to determine the effect on behavioral outcomes of sleep fragmentation combined with musculoskeletal sensitization. Musculoskeletal sensitization, a model of chronic muscle pain, was induced using two unilateral injections of acidified saline (pH 4.0) into the gastrocnemius muscle, spaced 5 days apart. Musculoskeletal sensitization manifests as mechanical hypersensitivity determined by von Frey filament testing at the hindpaws. Sleep fragmentation took place during the consecutive 12-h light periods of the 5 days between intramuscular injections. Electroencephalogram (EEG) and body temperature were recorded from some mice at baseline and for 3 weeks after musculoskeletal sensitization. Mechanical hypersensitivity was determined at preinjection baseline and on days 1, 3, 7, 14, and 21 after sensitization. Two additional experiments were conducted to determine the independent effects of sleep fragmentation or musculoskeletal sensitization on mechanical hypersensitivity. Five days of sleep fragmentation alone did not induce mechanical hypersensitivity, whereas sleep fragmentation combined with musculoskeletal sensitization resulted in prolonged and exacerbated mechanical hypersensitivity. 
Sleep fragmentation combined with musculoskeletal sensitization had an effect on subsequent sleep of mice as demonstrated by increased

  8. Sensitivity analysis of model output - a step towards robust safety indicators?

    International Nuclear Information System (INIS)

    Broed, R.; Pereira, A.; Moberg, L.

    2004-01-01

    The protection of the environment from ionising radiation challenges the radioecological community with the issue of harmonising disparate safety indicators. These indicators should preferably cover the whole spectrum of model predictions of the chemo-toxic and radiation impact of contaminants. In question is not only the protection of man and biota but also of abiotic systems. In many cases modelling will constitute the basis for an evaluation of potential impact. It is recognised that uncertainty and sensitivity analysis of model output will play an important role in the 'construction' of safety indicators that are robust, reliable and easy to explain to all groups of stakeholders, including the general public. However, environmental models of radionuclide transport have some extreme characteristics: (a) they are complex; (b) they are non-linear; (c) they include a huge number of input parameters; (d) input parameters are associated with large or very large uncertainties; (e) parameters are often correlated with each other; (f) uncertainties other than parameter-driven ones may be present in the modelling system; (g) space variability and time-dependence of parameters are present; (h) model predictions may cover geological time scales. Consequently, uncertainty and sensitivity analysis are non-trivial tasks, challenging the decision-maker when it comes to the interpretation of safety indicators or the application of regulatory criteria. In this work we use the IAEA model ISAM to make a set of Monte Carlo calculations. The ISAM model includes several nuclides and decay chains, many compartments, and variable parameters covering the range of nuclide migration pathways from the near field to the biosphere. The goal of our calculations is to make a global sensitivity analysis. After identifying the non-influential parameters, the Monte Carlo calculations are repeated with those parameters frozen. 
    Reducing the number of parameters to a few will simplify the interpretation of the results and the use
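    The screen-then-freeze workflow described above can be sketched on a toy model. The model, the screening measure (squared correlation of each input with the output), and all numbers below are illustrative assumptions, not the ISAM setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a transport model: the output depends strongly on a few
    # of many uncertain parameters, the situation described in the abstract.
    def model(p):
        return 5.0 * p[:, 0] + 2.0 * p[:, 1] ** 2 + 0.1 * p[:, 2:].sum(axis=1)

    n, k = 5000, 10
    P = rng.uniform(0.0, 1.0, size=(n, k))   # Monte Carlo parameter sample
    Y = model(P)

    # Screening: squared correlation of each parameter with the output,
    # one simple global-sensitivity measure (the paper does not specify theirs).
    r2 = np.array([np.corrcoef(P[:, j], Y)[0, 1] ** 2 for j in range(k)])
    influential = np.argsort(r2)[::-1][:2]

    # "Freeze" the non-influential parameters at their midpoint and rerun.
    P_frozen = np.full_like(P, 0.5)
    P_frozen[:, influential] = P[:, influential]
    Y_frozen = model(P_frozen)
    print(influential, Y.var(), Y_frozen.var())
    ```

    With only the two influential parameters left free, nearly all of the output variance is retained, which is the justification for freezing the rest.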

  9. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice, the long-term variability of the model ice cover, and its sensitivity to some parameters of the model, as well as to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness by 0.05 m – 0.1 m, although at individual spatial points the thickness decreases by up to 0.5 m. Decreasing the albedo of clear sea ice and of snow by 0.05 relative to the basic variant reduces the seasonally varying average ice thickness by 0.2 m – 0.6 m, with the maximum change occurring during the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean the ice thickness decreased by up to 1 m, although there is also an area of some increase, mostly up to 0.2 m (Beaufort Sea). The 0.05 decrease of sea ice snow albedo leads to a reduction of average ice thickness by approximately 0.2 m, and this value depends only slightly on season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/m² in comparison with the base variant. The average ice thickness then decreases by 0.2 m – 0.35 m, with small seasonal variation. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters

  10. Testing the Nanoparticle-Allostatic Cross Adaptation-Sensitization Model for Homeopathic Remedy Effects

    Science.gov (United States)

    Bell, Iris R.; Koithan, Mary; Brooks, Audrey J.

    2012-01-01

    Key concepts of the Nanoparticle-Allostatic Cross-Adaptation-Sensitization (NPCAS) Model for the action of homeopathic remedies in living systems include source nanoparticles as low-level environmental stressors, heterotypic hormesis, cross-adaptation, allostasis (stress response network), time-dependent sensitization with endogenous amplification and bidirectional change, and self-organizing complex adaptive systems. The model accommodates the requirement for measurable physical agents in the remedy (source nanoparticles and/or source adsorbed to silica nanoparticles); hormetic adaptive responses in the organism, triggered by nanoparticles, with bipolar, metaplastic change dependent on the history of the organism; clinical matching of the patient's symptom picture, including modalities, to the symptom pattern that the source material can cause (cross-adaptation and cross-sensitization); and evidence for nanoparticle-related quantum macro-entanglement in homeopathic pathogenetic trials. This paper examines research implications of the model, discussing the following hypotheses: variability in nanoparticle size, morphology, and aggregation affects remedy properties and reproducibility of findings; homeopathic remedies modulate adaptive allostatic responses, with multiple dynamic short- and long-term effects; and simillimum remedy nanoparticles, as novel mild stressors corresponding to the organism's dysfunction, initiate time-dependent cross-sensitization, reversing the direction of dysfunctional reactivity to environmental stressors. The NPCAS model suggests a way forward for systematic research on homeopathy. The central proposition is that homeopathic treatment is a form of nanomedicine acting by modulation of endogenous adaptation and metaplastic amplification processes in the organism to enhance long-term systemic resilience and health. PMID:23290882

  11. Modeling of laser-driven hydrodynamics experiments

    Science.gov (United States)

    di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul

    2017-10-01

    Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively steady shock conditions, in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition.

  12. Developing cultural sensitivity

    DEFF Research Database (Denmark)

    Ruddock, Heidi; Turner, deSalle

    2007-01-01

    Title. Developing cultural sensitivity: nursing students' experiences of a study abroad programme. Aim. This paper is a report of a study to explore whether having an international learning experience as part of a nursing education programme promoted cultural sensitivity in nursing students. Background. Many countries are becoming culturally diverse, but healthcare systems and nursing education often remain mono-cultural and focused on the norms and needs of the majority culture. To meet the needs of all members of multicultural societies, nurses need to develop cultural sensitivity and incorporate this into caregiving. Method. A Gadamerian hermeneutic phenomenological approach was adopted. Data were collected in 2004 by using in-depth conversational interviews and analysed using the Turner method. Findings. Developing cultural sensitivity involves a complex interplay between becoming...

  13. Modeling of Yb3+-sensitized Er3+-doped silica waveguide amplifiers

    DEFF Research Database (Denmark)

    Lester, Christian; Bjarklev, Anders Overgaard; Rasmussen, Thomas

    1995-01-01

    A model for Yb3+-sensitized Er3+-doped silica waveguide amplifiers is described and numerically investigated in the small-signal regime. The amplified spontaneous emission in the ytterbium-band and the quenching process between excited erbium ions are included in the model. For pump wavelengths...

  14. Application of Best Estimate Approach for Modelling of QUENCH-03 and QUENCH-06 Experiments

    Directory of Open Access Journals (Sweden)

    Tadas Kaliatka

    2016-04-01

    In this article, the QUENCH-03 and QUENCH-06 experiments are modelled using the ASTEC and RELAP/SCDAPSIM codes. For the uncertainty and sensitivity analysis, the SUSA3.5 and SUNSET tools were used. The article demonstrates that, by applying the best estimate approach, it is possible to develop a basic QUENCH input deck and two sets of input parameters covering the maximal and minimal ranges of uncertainties. These allow simulating different (but of the same nature) tests, yielding calculation results with an evaluated range of uncertainties.
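    The abstract does not state how many code runs underlie the uncertainty ranges; best-estimate analyses with SUSA-style tools conventionally size the Monte Carlo sample with Wilks' order-statistics formula, sketched below. For a one-sided first-order tolerance limit, the smallest sample size n satisfies 1 - gamma**n >= beta.

    ```python
    def wilks_n(coverage=0.95, confidence=0.95):
        """Smallest n such that the sample maximum bounds the `coverage`
        quantile of the output with probability `confidence` (Wilks, first order)."""
        n = 1
        while 1.0 - coverage ** n < confidence:
            n += 1
        return n

    print(wilks_n(0.95, 0.95))   # 59 code runs for a 95%/95% one-sided limit
    print(wilks_n(0.95, 0.99))   # 90 runs if 99% confidence is demanded
    ```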

  15. Extracting Models in Single Molecule Experiments

    Science.gov (United States)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a pre-conceived model.

  16. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial, and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm to m and beyond, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to those data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a “two-way road” as far as the accuracy of the data is concerned, requiring close interactions of the AMO and plasma modeling communities.

  17. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", focusing on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may be challenging because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. 
    Once the processes that exert the major influence in
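    A process-level sensitivity analysis of the kind proposed above can be sketched by scaling the output of each component process with a common multiplier and measuring the normalized response of the model output. The three-process "ecosystem" below is invented purely for illustration.

    ```python
    import numpy as np

    # Toy model built from three named processes; `scale` perturbs each
    # process as a whole rather than its individual parameters.
    def model(scale):
        photosynthesis = scale[0] * 10.0                 # process 1: C uptake
        respiration = scale[1] * (0.4 * photosynthesis)  # process 2: C loss
        allocation = scale[2] * 0.8                      # process 3: growth fraction
        return allocation * (photosynthesis - respiration)

    base = model(np.ones(3))
    eps = 0.01
    sens = {}
    for j, name in enumerate(["photosynthesis", "respiration", "allocation"]):
        scale = np.ones(3)
        scale[j] += eps                                  # perturb one process
        sens[name] = (model(scale) - base) / base / eps  # normalized sensitivity
    print({k: round(v, 2) for k, v in sens.items()})
    ```

    A sensitivity of 1 means a 1% change in the process produces a 1% change in the output; the sign shows the direction of the response.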

  18. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli.
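    One common way to compute such a sensitivity map (a sketch of the general idea, not necessarily the authors' exact formulation) is the mean squared derivative of a trained kernel model's decision function with respect to each input feature. The example below trains an RBF kernel ridge classifier on synthetic data in which only the first two of five features carry class information; all data and parameter values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n, d, sigma, lam = 200, 5, 2.0, 1e-2
    y = np.repeat([-1.0, 1.0], n // 2)
    X = rng.normal(size=(n, d))
    X[:, :2] += y[:, None]                      # informative features: 0 and 1

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # Kernel ridge fit: f(x) = sum_i alpha_i k(x, x_i)
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(n), y)

    # Gradient of f at each sample: df/dx = sum_i alpha_i k(x, x_i)(x_i - x)/sigma^2
    grads = (alpha[None, :, None] * K[:, :, None]
             * (X[None, :, :] - X[:, None, :])).sum(axis=1) / sigma ** 2
    sens_map = (grads ** 2).mean(axis=0)        # one score per input feature
    print(np.argsort(sens_map)[::-1])           # informative features should rank first
    ```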

  19. [Application of Fourier amplitude sensitivity test in Chinese healthy volunteer population pharmacokinetic model of tacrolimus].

    Science.gov (United States)

    Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei

    2010-07-01

    In this study, we evaluated the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). In addition, we estimated the sensitivity index over the whole blood-sampling course, designed different sampling schedules, and evaluated the quality of the parameter estimates and the efficiency of prediction. It was observed that, besides CL1/F, the sensitivity indices of all of the other four parameters (V1/F, V2/F, CL2/F and ka) in the tacrolimus PopPK model were relatively high and changed rapidly over time. With increasing variance of ka, its sensitivity index increased markedly, associated with a significant decrease in the sensitivity indices of the other parameters and an obvious change in peak time as well. According to NONMEM simulations and the comparison of different fitting results, we found that the sampling time points designed according to FAST outperformed the other time points. This suggests that FAST can assess the sensitivity of model parameters effectively and assist the design of clinical sampling times and the construction of PopPK models.
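    A minimal FAST implementation on a toy two-parameter model (not the tacrolimus PopPK model) shows the mechanics: each parameter is driven along a periodic search curve with its own integer frequency, and the variance it contributes is read off the Fourier spectrum of the output at that frequency and its harmonics. All numbers below are illustrative.

    ```python
    import numpy as np

    omega = [11, 21]            # interference-free driver frequencies
    M = 4                       # harmonics retained per parameter
    N = 2001
    s = np.linspace(-np.pi, np.pi, N, endpoint=False)
    # Classic search curve: x_i(s) uniform in [0, 1]
    X = 0.5 + np.arcsin(np.sin(np.outer(omega, s))) / np.pi

    Y = 2.0 * X[0] + 1.0 * X[1]         # toy model; analytically S1 = 0.8, S2 = 0.2

    D_total = Y.var()
    S = []
    for w in omega:
        D_i = 0.0
        for p in range(1, M + 1):       # power at the parameter's harmonics
            A = 2.0 / N * (Y * np.cos(p * w * s)).sum()
            B = 2.0 / N * (Y * np.sin(p * w * s)).sum()
            D_i += (A ** 2 + B ** 2) / 2.0
        S.append(D_i / D_total)         # first-order sensitivity index
    print([round(v, 2) for v in S])
    ```

    For the linear toy model the indices recover the analytic variance shares; real applications use library implementations (e.g., SALib's FAST module) rather than hand-rolled spectra.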

  20. Probabilistic Sensitivity Amplification Control for Lower Extremity Exoskeleton

    Directory of Open Access Journals (Sweden)

    Likun Wang

    2018-03-01

    To achieve ideal force control of a functional autonomous exoskeleton, sensitivity amplification control is widely used in human strength augmentation applications. The original sensitivity amplification control aims to increase the closed-loop control system sensitivity through positive feedback, without any sensors between the pilot and the exoskeleton. Thus, the measurement system can be greatly simplified. Nevertheless, the controller lacks the ability to reject disturbances and has little robustness to parameter variation. Consequently, a relatively precise dynamic model of the exoskeleton system is required. Moreover, the human-robot interaction (HRI) cannot be interpreted merely as a particular part of the driven torque quantitatively. Therefore, a novel control methodology, so-called probabilistic sensitivity amplification control, is presented in this paper. The innovation of the proposed control algorithm is two-fold: distributed hidden-state identification based on sensor observations, and evolving learning of sensitivity factors to deal with the variational HRI. Compared to other state-of-the-art algorithms, we verify the feasibility of probabilistic sensitivity amplification control with several experiments, i.e., distributed identification model learning and walking with a human subject. The experimental results show potential application feasibility.

  1. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    Science.gov (United States)

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for those parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
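    The normalised sensitivity coefficient used above, S(i,j) = (dy_i/dx_j)(x_j/y_i), is easy to estimate by central finite differences. The Monod-type growth-rate expression below is a generic stand-in example, not the BioWin model itself.

    ```python
    # Toy output: Monod growth rate mu = mu_max * S / (Ks + S)
    def mu(mu_max, Ks, S_sub):
        return mu_max * S_sub / (Ks + S_sub)

    def normalised_sensitivity(f, params, j, rel=1e-4):
        """Central-difference estimate of (df/dx_j) * (x_j / f)."""
        up = list(params); dn = list(params)
        up[j] *= 1 + rel
        dn[j] *= 1 - rel
        dydx = (f(*up) - f(*dn)) / (2 * rel * params[j])
        return dydx * params[j] / f(*params)

    params = (6.0, 20.0, 5.0)   # mu_max, Ks, substrate concentration (illustrative)
    results = {}
    for j, name in enumerate(["mu_max", "Ks", "S"]):
        results[name] = normalised_sensitivity(mu, params, j)
        print(name, round(results[name], 3))
    ```

    The analytic values here are 1.0 for mu_max and ±Ks/(Ks+S) = ±0.8 for Ks and S, so the finite-difference estimates can be checked directly.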

  2. Short ensembles: An Efficient Method for Discerning Climate-relevant Sensitivities in Atmospheric General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun

    2014-09-08

This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
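The short-ensemble idea can be illustrated on a toy fast process. The AR(1) model and every number below are assumptions for illustration only, not CAM5; matched random seeds are used so the noise cancels in the parameter-perturbation difference:

```python
import random, statistics

def simulate(phi, forcing, n_days, seed):
    """Toy AR(1) stand-in for a fast process: x[t+1] = phi*x[t] + forcing + noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_days):
        x = phi * x + forcing + rng.gauss(0, 1)
        out.append(x)
    return out

def long_run_mean(phi, forcing, years=30):
    """Traditional serial-in-time integration."""
    return statistics.fmean(simulate(phi, forcing, 365 * years, seed=0))

def short_ensemble_mean(phi, forcing, members=120, days=10):
    """Mean over many short, independent members (each could run in parallel)."""
    return statistics.fmean(
        statistics.fmean(simulate(phi, forcing, days, seed=m)) for m in range(members)
    )

# Sensitivity of the mean state to a perturbed parameter, estimated both ways:
d_long = long_run_mean(0.5, 1.2) - long_run_mean(0.5, 1.0)
d_short = short_ensemble_mean(0.5, 1.2) - short_ensemble_mean(0.5, 1.0)
```

The short ensemble recovers the sign and approximate magnitude of the long-run sensitivity (here about 0.36 versus 0.40) while each 10-day member is independent, mirroring the paper's parallelism argument.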

  3. Climate Change Sensitivity of Multi-Species Afforestation in Semi-Arid Benin

    Directory of Open Access Journals (Sweden)

    Florent Noulèkoun

    2018-06-01

Full Text Available The early growth stage is critical in the response of trees to climate change and variability. It is not clear, however, which climate metrics are best to define the early-growth sensitivity in assessing adaptation strategies of young forests to climate change. Using a combination of field experiments and modelling, we assessed the climate sensitivity of two promising afforestation species, Jatropha curcas L. and Moringa oleifera Lam., by analyzing their predicted climate–growth relationships in the initial two years after planting on degraded cropland in the semi-arid zone of Benin. The process-based WaNuLCAS model (version 4.3, World Agroforestry Centre, Bogor, Indonesia) was used to simulate aboveground biomass growth for each year in the climate record (1981–2016), either as the first or as the second year of tree growth. Linear mixed models related the annual biomass growth to climate indicators, and climate sensitivity indices quantified climate–growth relationships. In the first year, the length of dry spells had the strongest effect on tree growth. In the following year, the annual water deficit and length of dry season became the strongest predictors. Simulated rooting depths greater than those observed in the experiments enhanced biomass growth under extreme dry conditions and reduced sapling sensitivity to drought. Projected increases in aridity implied significant growth reduction, but a multi-species approach to afforestation using species that are able to develop deep-penetrating roots should increase the resilience of young forests to climate change. The results illustrate that process-based modelling, combined with field experiments, can be effective in assessing the climate–growth relationships of tree species.

  4. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...

  5. Commensurate comparisons of models with energy budget observations reveal consistent climate sensitivities

    Science.gov (United States)

    Armour, K.

    2017-12-01

    Global energy budget observations have been widely used to constrain the effective, or instantaneous climate sensitivity (ICS), producing median estimates around 2°C (Otto et al. 2013; Lewis & Curry 2015). A key question is whether the comprehensive climate models used to project future warming are consistent with these energy budget estimates of ICS. Yet, performing such comparisons has proven challenging. Within models, values of ICS robustly vary over time, as surface temperature patterns evolve with transient warming, and are generally smaller than the values of equilibrium climate sensitivity (ECS). Naively comparing values of ECS in CMIP5 models (median of about 3.4°C) to observation-based values of ICS has led to the suggestion that models are overly sensitive. This apparent discrepancy can partially be resolved by (i) comparing observation-based values of ICS to model values of ICS relevant for historical warming (Armour 2017; Proistosescu & Huybers 2017); (ii) taking into account the "efficacies" of non-CO2 radiative forcing agents (Marvel et al. 2015); and (iii) accounting for the sparseness of historical temperature observations and differences in sea-surface temperature and near-surface air temperature over the oceans (Richardson et al. 2016). Another potential source of discrepancy is a mismatch between observed and simulated surface temperature patterns over recent decades, due to either natural variability or model deficiencies in simulating historical warming patterns. The nature of the mismatch is such that simulated patterns can lead to more positive radiative feedbacks (higher ICS) relative to those engendered by observed patterns. The magnitude of this effect has not yet been addressed. Here we outline an approach to perform fully commensurate comparisons of climate models with global energy budget observations that take all of the above effects into account. We find that when apples-to-apples comparisons are made, values of ICS in models are
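The energy-budget estimator behind the quoted ICS values is a one-line formula (Otto et al. 2013). The numbers in the example call are illustrative magnitudes for the historical period, not values from any specific study:

```python
def instantaneous_climate_sensitivity(dT, dF, dN, F2x=3.71):
    """Energy-budget estimate of effective climate sensitivity:
        ICS = F2x * dT / (dF - dN), in K per CO2 doubling.
    dT: observed warming (K); dF: change in radiative forcing (W/m2);
    dN: change in top-of-atmosphere energy imbalance (W/m2);
    F2x: forcing for doubled CO2 (W/m2, commonly taken as ~3.7)."""
    return F2x * dT / (dF - dN)

# Illustrative inputs of roughly historical-period magnitude:
ics = instantaneous_climate_sensitivity(dT=0.75, dF=1.95, dN=0.58)
```

With inputs of this size the estimator returns about 2 K, which is why energy-budget studies cluster near the 2°C median quoted above even though CMIP5 equilibrium sensitivities are higher.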

  6. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method (LPM) is therefore used to prevent the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modelling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with the LPM are carried out. (Author)

  7. First results from a second generation galactic axion experiment

    CERN Document Server

    Hagmann, C A; Stoeffl, W; Van Bibber, K; Daw, E J; McBride, J; Peng, H; Rosenberg, L J; Xin, H; La Veigne, J D; Sikivie, P; Sullivan, N; Tanner, D B; Moltz, D M; Nezrick, F A; Turner, M; Golubev, N A; Kravchuk, L V

    1996-01-01

    We report first results from a large scale search for dark matter axions. The experiment probes axion masses of 1.3-13 micro-eV at a sensitivity which is about 50 times higher than previous pilot experiments. We have already scanned part of this mass range at a sensitivity better than required to see at least one generic axion model, the KSVZ axion. Data taking at full sensitivity commenced in February 1996 and scanning the proposed mass range will require three years.

  8. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation, and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The
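A minimal sketch of the archive-statistics idea, assuming a hypothetical two-parameter model (not the reactor model of the paper) in which parameter a is influential and b is nearly inert:

```python
import random, statistics

random.seed(1)
xs = [i / 10 for i in range(1, 21)]
data = [2.0 * x + 0.01 * 5.0 for x in xs]      # synthetic data: true a=2.0, b=5.0

def sse(ind):
    """Sum of squared errors; parameter b enters only weakly (factor 0.01)."""
    a, b = ind
    return sum((a * x + 0.01 * b - d) ** 2 for x, d in zip(xs, data))

def evolve(pop, elite=10, mut=0.1):
    """One generation: keep the elite, breed the rest by gene-wise crossover."""
    pop = sorted(pop, key=sse)
    parents = pop[:elite]
    children = []
    while len(children) < len(pop) - elite:
        p1, p2 = random.sample(parents, 2)
        children.append([random.choice(g) + random.gauss(0, mut) for g in zip(p1, p2)])
    return parents + children

pop = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(40)]
spread = []                                    # archive statistics per generation
for gen in range(60):
    pop = evolve(pop)
    archive = sorted(pop, key=sse)[:10]        # best solutions of this generation
    spread.append([statistics.pstdev(ind[i] for ind in archive) for i in (0, 1)])

best = min(pop, key=sse)
# The archive spread of the influential parameter a collapses quickly, while
# the barely-influential b keeps drifting: a qualitative importance ranking.
```

Tracking the per-parameter spread of the elite archive across generations is exactly the "statistics of the archive" analysis described in the abstract.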

  9. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  10. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Science.gov (United States)

    Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda

    2016-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102

  11. Mixed-time parallel evolution in multiple quantum NMR experiments: sensitivity and resolution enhancement in heteronuclear NMR

    International Nuclear Information System (INIS)

    Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad

    2007-01-01

A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t1, is of a non-decaying constant-time nature for a duration that depends on the length of t2, and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1H-15N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of HN-HN NOEs in KcsA in SDS micelles at 50 °C was found to increase the experimental sensitivity by a factor of 1.7±0.3, with a concomitant resolution increase in the indirectly detected 1H dimension. The method is also demonstrated for a situation in which homonuclear 13C-13C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer

  12. A sensitivity analysis of the WIPP disposal room model: Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Labreche, D.A.; Beikmann, M.A. [RE/SPEC, Inc., Albuquerque, NM (United States); Osnes, J.D. [RE/SPEC, Inc., Rapid City, SD (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States)

    1995-07-01

The WIPP Disposal Room Model (DRM) is a numerical model with three major components -- constitutive models of TRU waste, crushed salt backfill, and intact halite -- and several secondary components, including air gap elements, slidelines, and assumptions on symmetry and geometry. A sensitivity analysis of the Disposal Room Model was initiated on two of the three major components (waste and backfill models) and on several secondary components as a group. The immediate goal of this component sensitivity analysis (Phase I) was to sort (rank) model parameters in terms of their relative importance to model response so that a Monte Carlo analysis on a reduced set of DRM parameters could be performed under Phase II. The goal of the Phase II analysis will be to develop a probabilistic definition of a disposal room porosity surface (porosity, gas volume, time) that could be used in WIPP Performance Assessment analyses. This report documents a literature survey which quantifies the relative importance of the secondary room components to room closure, a differential analysis of the creep consolidation model and definition of a follow-up Monte Carlo analysis of the model, and an analysis and refitting of the waste component data on which a volumetric plasticity model of TRU drum waste is based. A summary, evaluation of progress, and recommendations for future work conclude the report.

  13. Experience modulates both aromatase activity and the sensitivity of agonistic behaviour to testosterone in black-headed gulls

    NARCIS (Netherlands)

    Ros, Albert F. H.; Franco, Aldina M. A.; Groothuis, Ton G. G.

    2009-01-01

    In young black-headed gulls (Larus ridibundus), exposure to testosterone increases the sensitivity of agonistic behaviour to a subsequent exposure to this hormone. The aim of this paper is twofold: to analyze whether social experience, gained during testosterone exposure, mediates this increase in

  14. Explicit modelling of SOA formation from α-pinene photooxidation: sensitivity to vapour pressure estimation

    Directory of Open Access Journals (Sweden)

    R. Valorso

    2011-07-01

Full Text Available The sensitivity of the formation of secondary organic aerosol (SOA) to the estimated vapour pressures of the condensable oxidation products is explored. A highly detailed reaction scheme was generated for α-pinene photooxidation using the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A). Vapour pressures (Pvap) were estimated with three commonly used structure–activity relationships. The values of Pvap were compared for the set of secondary species generated by GECKO-A to describe α-pinene oxidation. Discrepancies in the predicted vapour pressures were found to increase with the number of functional groups borne by the species. For semi-volatile organic compounds (i.e. organic species of interest for SOA formation), differences in the predicted Pvap range from a factor of 5 to 200 on average. The simulated SOA concentrations were compared to SOA observations in the Caltech chamber during three experiments performed under a range of NOx conditions. While the model captures the qualitative features of SOA formation for the chamber experiments, SOA concentrations are systematically overestimated. For the conditions simulated, the modelled SOA speciation appears to be rather insensitive to the Pvap estimation method.
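Why a factor 5–200 spread in Pvap matters for SOA can be seen from standard absorptive partitioning theory. The molecular weight and organic aerosol loading below are assumed round numbers, not values from the study:

```python
R = 8.314  # gas constant, J mol-1 K-1

def c_star(p_vap_pa, mol_weight=200.0, temp_k=298.0):
    """Saturation concentration C* (ug m-3) from vapour pressure (Pa):
    C* = Pvap * M * 1e6 / (R * T). M = 200 g/mol is an assumed value."""
    return p_vap_pa * mol_weight * 1e6 / (R * temp_k)

def particle_fraction(p_vap_pa, c_oa=10.0):
    """Equilibrium particle-phase fraction under absorptive partitioning:
    F = 1 / (1 + C*/C_OA), with C_OA the absorbing organic mass (ug m-3)."""
    return 1.0 / (1.0 + c_star(p_vap_pa) / c_oa)

# A factor-200 disagreement between structure-activity relationships (the
# upper end of the spread reported for multifunctional products) can flip a
# species from mostly particle-phase to mostly gas-phase:
f_low = particle_fraction(1e-6)          # lower Pvap estimate (Pa)
f_high = particle_fraction(200 * 1e-6)   # estimate 200x higher
```

Under these assumptions the low estimate puts essentially all of the species in the particle phase while the high estimate leaves most of it in the gas phase, which is why SOA mass predictions are sensitive to the Pvap estimation method.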

  15. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    Science.gov (United States)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve the simulation of the MJO in climate models, we propose that this delay of circulation in response to convection needs to be corrected in the cumulus parameterization scheme.

  16. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.
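The failure mode of linearization described above, interactions showing up as cross terms, can be demonstrated with standardized regression coefficients (SRC) on a hypothetical model with a strong interaction (not FRAPCON):

```python
import random, statistics

random.seed(7)

def model(x1, x2):
    # additive part plus a strong interaction: the kind of cross term
    # that defeats regression-based sensitivity rankings
    return x1 + x2 + 5.0 * x1 * x2

n = 20000
x1s = [random.gauss(0, 1) for _ in range(n)]
x2s = [random.gauss(0, 1) for _ in range(n)]
ys = [model(a, b) for a, b in zip(x1s, x2s)]

def src(xs, ys):
    """Standardised regression coefficient for one independent input."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = statistics.fmean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# For independent inputs, R^2 of the linear surrogate = sum of SRC^2.
r_squared = src(x1s, ys) ** 2 + src(x2s, ys) ** 2
# Var(y) = 1 + 1 + 25 = 27, so the linear part explains only ~2/27 of the
# variance; a low R^2 is the standard warning that SRC rankings are
# unreliable and variance- or moment-based methods are needed instead.
```

This is the diagnostic the comparison relies on: when R squared of the linear fit is high, even the simplest methods suffice; when it is low, interaction-aware methods are required.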

  17. Investigations of sensitivity and resolution of ECG and MCG in a realistically shaped thorax model

    International Nuclear Information System (INIS)

    Mäntynen, Ville; Konttila, Teijo; Stenroos, Matti

    2014-01-01

    Solving the inverse problem of electrocardiography (ECG) and magnetocardiography (MCG) is often referred to as cardiac source imaging. Spatial properties of ECG and MCG as imaging systems are, however, not well known. In this modelling study, we investigate the sensitivity and point-spread function (PSF) of ECG, MCG, and combined ECG+MCG as a function of source position and orientation, globally around the ventricles: signal topographies are modelled using a realistically-shaped volume conductor model, and the inverse problem is solved using a distributed source model and linear source estimation with minimal use of prior information. The results show that the sensitivity depends not only on the modality but also on the location and orientation of the source and that the sensitivity distribution is clearly reflected in the PSF. MCG can better characterize tangential anterior sources (with respect to the heart surface), while ECG excels with normally-oriented and posterior sources. Compared to either modality used alone, the sensitivity of combined ECG+MCG is less dependent on source orientation per source location, leading to better source estimates. Thus, for maximal sensitivity and optimal source estimation, the electric and magnetic measurements should be combined. (paper)

  18. Sensitivity analysis in the WWTP modelling community – new opportunities and applications

    DEFF Research Database (Denmark)

    Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.

    2010-01-01

A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity... Two case studies are presented: (i) ...design (BSM1 plant layout) using Standardized Regression Coefficients (SRC) and (ii) applying sensitivity analysis to help fine-tune a fuzzy controller for a BNPR plant using Morris Screening. The results obtained from each case study are then critically discussed in view of practical applications...
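Morris Screening, the method named for the second case study, ranks factors by elementary effects. The sketch below uses a hypothetical three-parameter plant response and a simplified one-at-a-time design (a full Morris design uses trajectories on a grid):

```python
import random, statistics

def model(theta):
    """Hypothetical plant response: k1 dominant, k2 weak, k3 only via interaction."""
    k1, k2, k3 = theta
    return k1 ** 2 + 0.1 * k2 + 0.01 * k2 * k3

def morris(model, lo, hi, r=50, delta=0.1, seed=3):
    """Elementary effects: at r random base points, perturb one factor at a
    time by a fraction `delta` of its range and record the scaled effect."""
    rng = random.Random(seed)
    k = len(lo)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        base = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        y0 = model(base)
        for j in range(k):
            pert = list(base)
            pert[j] += delta * (hi[j] - lo[j])
            effects[j].append((model(pert) - y0) / delta)
    mu_star = [statistics.fmean(map(abs, e)) for e in effects]   # influence
    sigma = [statistics.pstdev(e) for e in effects]              # nonlinearity
    return mu_star, sigma

mu_star, sigma = morris(model, lo=[0, 0, 0], hi=[1, 1, 1])
```

mu* ranks overall influence (here k1 > k2 > k3) while sigma flags nonlinearity and interactions, which is what makes the method useful for screening controller tuning parameters.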

  19. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
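A minimal sketch of the FIM-based selection idea under an assumed unit-noise model. The decay-type output model and the 1% threshold are illustrative assumptions, not the head position-tracking model of the paper:

```python
import math

ts = [0.1 * i for i in range(50)]

def model(theta):
    """Hypothetical impulse response: dominant decay plus a nearly flat drift."""
    a, b, c = theta
    return [a * math.exp(-b * t) + 0.001 * c * t for t in ts]

def jacobian(model, theta, rel=1e-6):
    """Output sensitivities by forward finite differences; J[j][i] = dy_i/dtheta_j."""
    y0 = model(theta)
    J = []
    for j, x in enumerate(theta):
        h = rel * max(abs(x), 1.0)
        th = list(theta)
        th[j] = x + h
        J.append([(u - v) / h for u, v in zip(model(th), y0)])
    return J

def fim_scores(model, theta):
    """Diagonal of the scaled Fisher information matrix, assuming unit
    measurement noise: F_jj = theta_j^2 * sum_i (dy_i/dtheta_j)^2."""
    J = jacobian(model, theta)
    return [theta[j] ** 2 * sum(d * d for d in J[j]) for j in range(len(theta))]

scores = fim_scores(model, [2.0, 1.5, 1.0])
# Estimate only the parameters whose score clears a threshold; fix the rest
# to their preliminary estimates (the subset-selection step of the paper):
selected = [j for j, s in enumerate(scores) if s > 0.01 * max(scores)]
```

Here the drift parameter falls far below the threshold, so only the two identifiable parameters would be estimated, shrinking confidence intervals in the same spirit as the paper's FIM method.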

  20. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent

    2010-01-01

We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable... with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle... is bootstrapped with a grasp density formed from visual cues. We show that the robot effectively applies its experience by downweighting poor grasp solutions, which results in increased success rates at subsequent learning cycles. We also present success rates in a practical scenario where a robot needs...

  1. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM experience at LOTUS

    International Nuclear Information System (INIS)

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1989-01-01

In recent years, the LOTUS fusion blanket facility at IGA-EPF in Lausanne provided a series of irradiation experiments with the Lithium Blanket Module (LBM). The LBM has a realistic fusion blanket configuration and materials. It is approximately an 80-cm cube, and the breeding material is Li2O. Using as the D-T neutron source the Haefely Neutron Generator (HNG), with an intensity of about 5·10^12 n/s, a series of experiments with the bare LBM as well as with the LBM preceded by Pb, Be and ThO2 multipliers were carried out. In a recent joint Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference Sn-transport code ONEDANT, the two-dimensional finite element Sn-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. For the nucleonic transport calculations, three 187-neutron-group libraries are presently available: MATXS8A and MATXS8F based on ENDF/B-V evaluations, and MAT187 based on JEF/EFF evaluations. COVFILS-2, a 74-group library of neutron cross-sections, scattering matrices and covariances, is the data source for SENSIBL; the 74-group structure of COVFILS-2 is a subset of the Los Alamos 187-group structure. Within the framework of the present work, a complete set of forward and adjoint two-dimensional TRISM calculations was performed for the bare as well as the Pb- and Be-preceded LBM using MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed

  2. An analysis of sensitivity and uncertainty associated with the use of the HSPF model for EIA applications

    Energy Technology Data Exchange (ETDEWEB)

    Biftu, G.F.; Beersing, A.; Wu, S.; Ade, F. [Golder Associates, Calgary, AB (Canada)

    2005-07-01

An outline of a new approach to assessing the sensitivity and uncertainty associated with surface water modelling results using the Hydrological Simulation Program-Fortran (HSPF) was presented, as well as the results of a sensitivity and uncertainty analysis. The HSPF model is often used to characterize the hydrological processes in watersheds within the oil sands region. Typical applications of HSPF include calibration of the model parameters using data from gauged watersheds, as well as validation of calibrated models with data sets. Additionally, simulations are often conducted to make flow predictions to support the environmental impact assessment (EIA) process. However, a key aspect of the modelling components of the EIA process is the sensitivity and uncertainty of the modelling results with respect to the model parameters. Many of the variations in the HSPF model's outputs are caused by a small number of model parameters. A sensitivity analysis was performed to identify and focus on the key parameters and assumptions that have the most influence on the model's outputs. The analysis entailed varying each parameter in turn, within a range, and examining the resulting relative changes in the model outputs. The uncertainty analysis consisted of the selection of probability distributions to characterize the uncertainty in the model's key sensitive parameters, as well as the use of Monte Carlo and HSPF simulation to determine the uncertainty in model outputs. tabs, figs.
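The two-step workflow described, one-at-a-time perturbation to find the sensitive parameters, then Monte Carlo sampling of only those, can be sketched as below. INFILT, UZSN and LZSN are real HSPF parameter names, but the linear runoff response, coefficients and distributions are invented for illustration:

```python
import random

def runoff(params):
    """Hypothetical linear stand-in for an HSPF rainfall-runoff response.
    The coefficients and 450 mm annual rainfall are assumed, not HSPF's."""
    r = 450.0 - 120.0 * params["INFILT"] - 8.0 * params["UZSN"] - 2.0 * params["LZSN"]
    return max(r, 0.0)

base = {"INFILT": 1.5, "UZSN": 1.2, "LZSN": 6.0}

# Step 1: one-at-a-time sensitivity, varying each parameter +/-20% in turn
oat = {}
for name in base:
    hi = dict(base, **{name: base[name] * 1.2})
    lo = dict(base, **{name: base[name] * 0.8})
    oat[name] = (runoff(hi) - runoff(lo)) / runoff(base)   # relative output change

# Step 2: Monte Carlo on the parameters flagged as sensitive, with assumed
# uniform distributions, to obtain an output uncertainty band
rng = random.Random(0)
samples = sorted(
    runoff(dict(base, INFILT=rng.uniform(1.0, 2.0), UZSN=rng.uniform(0.8, 1.6)))
    for _ in range(5000)
)
p05, p95 = samples[250], samples[4750]   # empirical 5th/95th percentiles
```

Ranking the OAT results identifies INFILT as the dominant parameter here, and the Monte Carlo percentiles give the kind of prediction band used to support the EIA process.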

  3. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Vinicius M. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Muratov, Eugene [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Laboratory of Theoretical Chemistry, A.V. Bogatsky Physical-Chemical Institute NAS of Ukraine, Odessa 65080 (Ukraine); Fourches, Denis [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Strickland, Judy; Kleinstreuer, Nicole [ILS/Contractor Supporting the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), P.O. Box 13501, Research Triangle Park, NC 27709 (United States); Andrade, Carolina H. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Tropsha, Alexander, E-mail: alex_tropsha@unc.edu [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States)

    2015-04-15

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, to use these data to generate rigorously validated QSAR models for skin sensitization, and to employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation, implemented in our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds, as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative
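    The Correct Classification Rate quoted above is the balanced average of the per-class rates, so it is not inflated by class imbalance. A minimal sketch of how CCR and the positive/negative predicted rates can be computed for a binary sensitizer (1) / non-sensitizer (0) task; the labels below are toy values, not the paper's data:

    ```python
    def classification_rates(y_true, y_pred):
        """CCR plus positive/negative predicted rates for binary labels,
        where 1 = sensitizer and 0 = non-sensitizer."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        pos = sum(y_true)               # number of true sensitizers
        neg = len(y_true) - pos         # number of true non-sensitizers
        ppr = tp / pos                  # fraction of sensitizers predicted correctly
        npr = tn / neg                  # fraction of non-sensitizers predicted correctly
        return {"CCR": 0.5 * (ppr + npr), "PPR": ppr, "NPR": npr}

    # Toy external validation set: 4 sensitizers, 4 non-sensitizers
    rates = classification_rates([1, 1, 1, 1, 0, 0, 0, 0],
                                 [1, 1, 1, 0, 0, 0, 0, 1])
    ```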

  4. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    International Nuclear Information System (INIS)

    Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2015-01-01

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, to use these data to generate rigorously validated QSAR models for skin sensitization, and to employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation, implemented in our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds, as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative

  5. Shielding benchmark experiments and sensitivity studies in progress at some European laboratories

    International Nuclear Information System (INIS)

    Hehn, G.; Mattes, M.; Matthes, W.; Nicks, R.; Rief, H.

    1975-01-01

    A 100 group standard library based on ENDF/B3 has been prepared by IKE and JRC. This library is used for the analysis of the current European and Japanese iron benchmark experiments. Further measurements are planned for checking the data sets for graphite, sodium and water. In a cooperation between the IKE and JRC groups coupled neutron-photon cross section sets will be produced. Point data are processed at IKE by the modular program system RSYST (CDC 6600) for elaborating the ENDFB data, whereas the JRC group, apart from using standard codes such as SUPERTOG 3, GAMLEG etc., has developed a series of auxiliary programs (IBM 360) for handling the DLC 2D and POPOP libraries and for producing the combined neutron-plus gamma library EL4 (119 groups). Sensitivity studies (in progress at IKE) make possible improvements in methods and optimization of calculation efforts for establishing group data. A tentative sensitivity study for a 3 dimensional MC approach is in progress at Ispra. As for nuclear data evaluation, the JRC group is calculating barium cross sections and their associated gamma spectra. 6 figures

  6. Microsystem with integrated capillary leak to mass spectrometer for high sensitivity temperature programmed desorption

    DEFF Research Database (Denmark)

    Quaade, Ulrich; Jensen, Søren; Hansen, Ole

    2004-01-01

    The integrated capillary leak minimizes dead volumes in the system, resulting in increased sensitivity and reduced response time. These properties make the system ideal for TPD experiments in a carrier gas. With CO desorbing from platinum as a model system, it is shown that CO desorbing in 10^5 Pa of argon from as little as 0.5 cm2 of platinum foil gives a clear desorption peak. By using the microfabricated flow system, TPD experiments can be performed in a carrier gas with a sensitivity approaching that of TPD experiments in vacuum. ©2004 American Institute of Physics...

  7. Computer experiments with a coarse-grid hydrodynamic climate model

    International Nuclear Information System (INIS)

    Stenchikov, G.L.

    1990-01-01

    A climate model is developed on the basis of the two-level Mintz-Arakawa general circulation model of the atmosphere and a bulk model of the upper layer of the ocean. A detailed model of the spectral transport of shortwave and longwave radiation is used to investigate the radiative effects of greenhouse gases. The radiative fluxes are calculated at the boundaries of five layers, each with a pressure thickness of about 200 mb. The results of the climate sensitivity calculations for mean-annual and perpetual seasonal regimes are discussed. The CCAS (Computer Center of the Academy of Sciences) climate model is used to investigate the climatic effects of anthropogenic changes of the optical properties of the atmosphere due to increasing CO2 content and aerosol pollution, and to calculate the sensitivity to changes of land surface albedo and humidity.

  8. Acceleration sensitivity of micromachined pressure sensors

    Science.gov (United States)

    August, Richard; Maudie, Theresa; Miller, Todd F.; Thompson, Erik

    1999-08-01

    Pressure sensors serve a variety of automotive applications, some of which, such as tire pressure monitoring, may experience high levels of acceleration. To design pressure sensors for high-acceleration environments it is important to understand their sensitivity to acceleration, especially if thick encapsulation layers are used to isolate the device from the hostile environment in which it resides. This paper describes a modeling approach to determining this sensitivity to acceleration that is very general and is applicable to different device designs and configurations. It also describes the results of testing a capacitive surface-micromachined pressure sensor at constant acceleration levels from 500 to 2000 g.

  9. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. Sloppiness can be mathematically expressed through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system behavior at altered inputs (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to the ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty: Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions, and to select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
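    The link between saturation and sloppiness can be illustrated numerically: for a Hill-type response, the normalised sensitivity of the output to the half-saturation constant is large in the transition region but nearly vanishes once the input saturates the response. A minimal sketch (the Hill exponent and the input values are arbitrary choices for illustration, not taken from the gene circuit models):

    ```python
    def hill(u, K, n=4.0):
        # Saturating Hill response: output approaches 1 and becomes
        # insensitive to K once the input u is well above K.
        return u**n / (K**n + u**n)

    def local_sensitivity(f, K, u, h=1e-6):
        # Normalised (logarithmic) sensitivity d ln f / d ln K,
        # estimated by a central finite difference.
        return K * (f(u, K + h) - f(u, K - h)) / (2.0 * h) / f(u, K)

    s_transition = abs(local_sensitivity(hill, K=1.0, u=1.0))   # u ~ K: sensitive
    s_saturated = abs(local_sensitivity(hill, K=1.0, u=10.0))   # u >> K: "sloppy"
    ```

    The near-zero sensitivity in the saturated regime is exactly why fits constrain such parameters poorly, while robustness of the output is preserved.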

  10. Sensitivity analysis using two-dimensional models of the Whiteshell geosphere

    Energy Technology Data Exchange (ETDEWEB)

    Scheier, N. W.; Chan, T.; Stanchell, F. W.

    1992-12-01

    As part of the assessment of the environmental impact of disposing of immobilized nuclear fuel waste in a vault deep within plutonic rock, detailed modelling of groundwater flow, heat transport and contaminant transport through the geosphere is being performed using the MOTIF finite-element computer code. The first geosphere model is being developed using data from the Whiteshell Research Area, with a hypothetical disposal vault at a depth of 500 m. This report briefly describes the conceptual model and then describes in detail the two-dimensional simulations used to help initially define an adequate three-dimensional representation, select a suitable form for the simplified model to be used in the overall systems assessment with the SYVAC computer code, and perform some sensitivity analysis. The sensitivity analysis considers variations in the rock layer properties, variations in fracture zone configurations, the impact of grouting a vault/fracture zone intersection, and variations in boundary conditions. This study shows that the configuration of major fracture zones can have a major influence on groundwater flow patterns. The flows in the major fracture zones can have high velocities and large volumes. The proximity of the radionuclide source to a major fracture zone may strongly influence the time it takes for a radionuclide to be transported to the surface. (auth)

  11. Complementarity of WIMP Sensitivity with direct SUSY, Monojet and Dark Matter Searches in the MSSM

    CERN Document Server

    Arbey, Alexandre; Mahmoudi, Farvah

    2014-01-01

    This letter presents new results on the combined sensitivity of the LHC and underground dark matter search experiments to the lightest neutralino as a WIMP candidate in the minimal supersymmetric extension of the Standard Model. We show that monojet searches significantly extend the sensitivity to the neutralino mass in scenarios where the scalar quarks are nearly degenerate in mass with it. The inclusion of the latest bound from the LUX experiment on the neutralino-nucleon spin-independent scattering cross section expands this sensitivity further, highlighting the remarkable complementarity of jets/leptons+MET and monojet searches at the LHC and dark matter searches in probing models of new physics with a dark matter candidate. The qualitative results of our study remain valid after accounting for theoretical uncertainties.

  12. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare the results of the calculation with experimental measurements to determine the validity of the CFD model.

  13. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    Science.gov (United States)

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  14. Danish heathland manipulation experiment data in Model-Data-Fusion

    Science.gov (United States)

    Thum, Tea; Peylin, Philippe; Ibrom, Andreas; Van Der Linden, Leon; Beier, Claus; Bacour, Cédric; Santaren, Diego; Ciais, Philippe

    2013-04-01

    In ecosystem manipulation experiments (EMEs) the ecosystem is artificially exposed to different environmental conditions that aim to simulate circumstances in a future climate. At the Danish EME site Brandbjerg, the responses of a heathland to drought, warming and increased atmospheric CO2 concentration are studied. The warming manipulation is realized by passive nighttime warming. The measurements include control plots as well as replicates for each of the three treatments separately and in combination. The Brandbjerg heathland ecosystem is dominated by heather and wavy hair-grass. These experiments provide excellent data for the validation and development of ecosystem models. In this work we used the generic vegetation model ORCHIDEE with a Model-Data-Fusion (MDF) approach. ORCHIDEE is a process-based model that describes the exchanges of carbon, water and energy between the atmosphere and the vegetation. It can be run at different spatial scales, from global to site level. Different vegetation types are described in ORCHIDEE as plant functional types. In MDF we use observations from the site to optimize the model parameters. This enables us to assess the modelling errors and the performance of the model for the different manipulation treatments. This insight will inform us whether the different processes are adequately modelled or whether the model is missing some important processes. We used a genetic algorithm in the MDF. The data available from the site included measurements of aboveground biomass, heterotrophic soil respiration and total ecosystem respiration from the years 2006-2008. The biomass was measured six times during this period. The respiration measurements were made with manual chambers. For the soil respiration we used results from an empirical model that has been developed for the site. This enabled us to have more data for the MDF. Before the MDF we performed a sensitivity analysis of the model parameters to the different data streams. Fifteen most influential

  15. Sensitivity analyses of the peach bottom turbine trip 2 experiment

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2003-01-01

    In the light of the sustained development in computer technology, the possibilities for code calculations in predicting more realistic transient scenarios in nuclear power plants have been enlarged substantially. It has therefore become feasible to perform 'best-estimate' simulations through the incorporation of three-dimensional modeling of the reactor core into system codes. This method is particularly suited for complex transients that involve strong feedback effects between thermal-hydraulics and kinetics, as well as for transients involving local asymmetric effects. The Peach Bottom turbine trip test is characterized by a prompt core power excursion followed by a self-limiting power behavior. To emphasize and understand the feedback mechanisms involved during this transient, a series of sensitivity analyses was carried out. This should allow the characterization of discrepancies between measured and calculated trends and an assessment of the impact of the thermal-hydraulic and kinetic response of the models used. On the whole, the data comparison revealed a close dependency of the power excursion on the core feedback mechanisms. Thus, for a better best-estimate simulation of the transient, both the thermal-hydraulic and the kinetic models should be made more accurate. (author)

  16. Quantification of remodeling parameter sensitivity - assessed by a computer simulation model

    DEFF Research Database (Denmark)

    Thomsen, J.S.; Mosekilde, Li.; Mosekilde, Erik

    1996-01-01

    We have used a computer simulation model to evaluate the effect of several bone remodeling parameters on vertebral cancellus bone. The menopause was chosen as the base case scenario, and the sensitivity of the model to the following parameters was investigated: activation frequency, formation bal....... However, the formation balance was responsible for the greater part of total mass loss....

  17. Sensitivity to plant modelling uncertainties in optimal feedback control of sound radiation from a panel

    DEFF Research Database (Denmark)

    Mørkholt, Jakob

    1997-01-01

    Optimal feedback control of broadband sound radiation from a rectangular baffled panel has been investigated through computer simulations. Special emphasis has been put on the sensitivity of the optimal feedback control to uncertainties in the modelling of the system under control.A model...... in terms of a set of radiation filters modelling the radiation dynamics.Linear quadratic feedback control applied to the panel in order to minimise the radiated sound power has then been simulated. The sensitivity of the model based controller to modelling uncertainties when using feedback from actual...

  18. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja; Navarro, Marí a; Merks, Roeland; Blom, Joke

    2015-01-01

    ) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided

  19. Application of perturbation theory to sensitivity calculations of PWR type reactor cores using the two-channel model

    International Nuclear Information System (INIS)

    Oliveira, A.C.J.G. de.

    1988-12-01

    Sensitivity calculations are very important in the design and safety analysis of nuclear reactor cores. Large codes incorporating a great number of physical considerations have been used to perform sensitivity studies. However, these codes need long computation times, involving high costs. Perturbation theory has constituted an efficient and economical method for performing sensitivity analysis. The present work is an application of perturbation theory (matrix formalism) to a simplified model of DNB (Departure from Nucleate Boiling) analysis, in order to perform sensitivity calculations for PWR cores. Expressions for calculating the sensitivity coefficients of enthalpy and coolant velocity with respect to coolant density and hot channel area were developed from the proposed model. The CASNUR.FOR code to evaluate these sensitivity coefficients was written in Fortran. The comparison between results obtained from the matrix formalism of perturbation theory and those obtained directly from the proposed model demonstrates the efficiency and potential of this perturbation method for sensitivity calculations of nuclear reactor cores (author). 23 refs, 4 figs, 7 tabs

  20. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    International Nuclear Information System (INIS)

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo), in various environments that are expected in the vicinity of the waste package, by separate equations. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainty in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000 yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs
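    To illustrate how uncertainties in individual corrosion contributions can roll up into an uncertainty in total uniform corrosion, a first-order root-sum-square propagation over independent per-period contributions can be sketched. The depths and relative uncertainties below are invented for the example and are not the BWIP values or correlations:

    ```python
    import math

    def total_corrosion_uncertainty(contributions):
        """Relative uncertainty of the total corrosion depth, assuming the
        per-period contributions are independent so their absolute
        uncertainties add in quadrature (a common first-order assumption)."""
        total = sum(depth for depth, _ in contributions)
        variance = sum((depth * rel) ** 2 for depth, rel in contributions)
        return math.sqrt(variance) / total

    # Illustrative (depth in micrometres, relative uncertainty) for three
    # environmental periods, e.g. dry, wet pre-closure, wet post-closure.
    periods = [(120.0, 0.25), (60.0, 0.10), (20.0, 0.30)]
    u_total = total_corrosion_uncertainty(periods)
    ```

    Because the contributions add in quadrature, the period with the largest absolute uncertainty dominates the total, mirroring the report's observation that the post-closure aqueous period controls the overall uncertainty.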

  1. Model-based global sensitivity analysis as applied to identification of anti-cancer drug targets and biomarkers of drug resistance in the ErbB2/3 network

    Science.gov (United States)

    Lebedeva, Galina; Sorokin, Anatoly; Faratian, Dana; Mullen, Peter; Goltsov, Alexey; Langdon, Simon P.; Harrison, David J.; Goryanin, Igor

    2012-01-01

    High levels of variability in cancer-related cellular signalling networks and a lack of parameter identifiability in large-scale network models hamper translation of the results of modelling studies into the process of anti-cancer drug development. Recently global sensitivity analysis (GSA) has been recognised as a useful technique, capable of addressing the uncertainty of the model parameters and generating valid predictions on parametric sensitivities. Here we propose a novel implementation of model-based GSA specially designed to explore how multi-parametric network perturbations affect signal propagation through cancer-related networks. We use area-under-the-curve for time course of changes in phosphorylation of proteins as a characteristic for sensitivity analysis and rank network parameters with regard to their impact on the level of key cancer-related outputs, separating strong inhibitory from stimulatory effects. This allows interpretation of the results in terms which can incorporate the effects of potential anti-cancer drugs on targets and the associated biological markers of cancer. To illustrate the method we applied it to an ErbB signalling network model and explored the sensitivity profile of its key model readout, phosphorylated Akt, in the absence and presence of the ErbB2 inhibitor pertuzumab. The method successfully identified the parameters associated with elevation or suppression of Akt phosphorylation in the ErbB2/3 network. From analysis and comparison of the sensitivity profiles of pAkt in the absence and presence of targeted drugs we derived predictions of drug targets, cancer-related biomarkers and generated hypotheses for combinatorial therapy. Several key predictions have been confirmed in experiments using human ovarian carcinoma cell lines. We also compared GSA-derived predictions with the results of local sensitivity analysis and discuss the applicability of both methods. We propose that the developed GSA procedure can serve as a
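    The core idea of using the area-under-the-curve of a phosphorylation time course as the characteristic for sensitivity analysis can be illustrated with a deliberately tiny one-state stand-in for pAkt dynamics (the rate law, parameter names and ranges below are invented; the actual ErbB2/3 network model is far larger). Sampling the parameters and correlating them with the AUC separates stimulatory from inhibitory effects by sign:

    ```python
    import random

    def pakt_auc(kact, kdeg, t_end=10.0, dt=0.05):
        # Toy one-state readout: d(pAkt)/dt = kact*(1 - p) - kdeg*p,
        # integrated by forward Euler; the AUC of the time course is
        # the characteristic used for sensitivity ranking.
        p, auc = 0.0, 0.0
        for _ in range(int(t_end / dt)):
            p += dt * (kact * (1.0 - p) - kdeg * p)
            auc += p * dt
        return auc

    def pearson(xs, ys):
        # Plain Pearson correlation, used here as a simple global
        # sensitivity index over the sampled parameter space.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    rng = random.Random(42)
    samples = [(rng.uniform(0.1, 1.0), rng.uniform(0.1, 1.0)) for _ in range(400)]
    aucs = [pakt_auc(ka, kd) for ka, kd in samples]
    s_act = pearson([ka for ka, _ in samples], aucs)  # stimulatory: positive sign
    s_deg = pearson([kd for _, kd in samples], aucs)  # inhibitory: negative sign
    ```

    In the full method the same sign separation is what maps parameters onto candidate drug targets (strong inhibitory effects on the readout) versus biomarkers of resistance.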

  2. Sensitivity of using blunt and sharp crack models in elastic-plastic fracture mechanics

    International Nuclear Information System (INIS)

    Pan, Y.C.; Kennedy, J.M.; Marchertas, A.H.

    1985-01-01

    J-integral values are calculated for both the blunt (smeared) crack and the sharp (discrete) crack models in elastic-plastic fracture mechanics problems involving metallic materials. A sensitivity study is performed to show the relative strengths and weaknesses of the two cracking models. It is concluded that the blunt crack model is less dependent on the orientation of the mesh. For the mesh which is in line with the crack direction, however, the sharp crack model is less sensitive to the mesh size. Both models yield reasonable results for a properly discretized finite-element mesh. A subcycling technique is used in this study in the explicit integration scheme so that large time steps can be used for the coarse elements away from the crack tip. The savings of computation time by this technique are reported. 6 refs., 9 figs

  3. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

    Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air-sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (the Russell ocean model and the Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models, such as ventilation, mixing, eddy stirring and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55% (10%) in the Southern Ocean and overestimate it by about 17% (underestimate it by 2%) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12-34%, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator, and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing, vertical advection and mesoscale eddy transport. At depth, GISSER, which has a significant warm bias, remineralizes nutrients and carbon faster, thereby producing more nutrients and carbon at depth, which

  4. Sensitivity analysis on a dose-calculation model for the terrestrial food-chain pathway

    International Nuclear Information System (INIS)

    Abdel-Aal, M.M.

    1994-01-01

    Parameter uncertainty and sensitivity analyses were applied to the U.S. Nuclear Regulatory Commission's (NRC) Regulatory Guide 1.109 (1977) models for calculating the ingestion dose via the terrestrial food-chain pathway, in order to assess the transport of chronically released, low-level effluents from light-water reactors. In the analysis, we generated Latin hypercube samples (LHS) employing a constrained sampling scheme. The generation of these samples is based on information supplied to the LHS program for the variables or parameters. The sampled values are used to form vectors of variables that are commonly used as inputs to computer models for the purpose of sensitivity and uncertainty analysis. The regulatory models consider the concentrations of radionuclides that are deposited on plant tissues or that lead to root uptake of nuclides initially deposited on soil. We also consider concentrations in milk and beef as a consequence of grazing on contaminated pasture or ingestion of contaminated feed by dairy and beef cattle. The radionuclides Sr-90 and Cs-137 were selected for evaluation. The most sensitive input parameters of the model were the ground-dispersion parameter, the release rates of radionuclides, and the soil-to-plant transfer coefficients of the radionuclides. (Author)
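    The constrained (Latin hypercube) sampling scheme mentioned above can be sketched in a few lines: each parameter's range is split into equal-probability strata, one value is drawn per stratum, and the strata are shuffled independently per parameter, so every marginal range is covered evenly. The parameter names and ranges below are placeholders, not the Regulatory Guide 1.109 values:

    ```python
    import random

    def latin_hypercube(n_samples, bounds, seed=0):
        """Stratified sampling: each parameter's [lo, hi) range is cut into
        n_samples equal strata, one uniform draw is taken per stratum, and
        the strata are shuffled independently for each parameter."""
        rng = random.Random(seed)
        columns = {}
        for name, (lo, hi) in bounds.items():
            strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(strata)
            columns[name] = [lo + u * (hi - lo) for u in strata]
        # Assemble the shuffled columns into input vectors for the model
        return [{name: columns[name][i] for name in bounds} for i in range(n_samples)]

    # Illustrative parameter ranges (hypothetical dispersion parameter and
    # soil-to-plant transfer coefficient, names invented for the example)
    design = latin_hypercube(10, {"chi_Q": (1e-7, 1e-5), "B_v": (0.01, 0.1)})
    ```

    Each resulting dict is one input vector for a model run; the stratification is what lets a small sample size still probe the full range of every parameter.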

  5. On the sensitivity of mesoscale models to surface-layer parameterization constants

    Science.gov (United States)

    Garratt, J. R.; Pielke, R. A.

    1989-09-01

    The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, the Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure, and of 2D sea-breeze intensity, is generally less than that found in published comparisons of turbulence closure schemes.
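
    The kind of "constant" sensitivity examined above can be reproduced with the standard Businger-Dyer surface-layer relations (textbook forms, not the CSU model code; the wind speed, heights and Obukhov length below are illustrative). Note that the inferred friction velocity scales linearly with the assumed von Karman constant.

```python
import math

def psi_m(zeta):
    """Businger-Dyer integrated stability correction for momentum, zeta = z/L."""
    if zeta >= 0.0:                          # stable
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25          # unstable
    return (2.0 * math.log((1.0 + x) / 2.0) + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def friction_velocity(u, z, z0, L, k=0.40):
    """Invert the stability-corrected log law u = (u*/k)[ln(z/z0) - psi_m(z/L)]."""
    return k * u / (math.log(z / z0) - psi_m(z / L))

# published von Karman values span roughly 0.35-0.42; u* scales linearly with k
u_star_40 = friction_velocity(u=5.0, z=10.0, z0=0.01, L=-50.0, k=0.40)
u_star_35 = friction_velocity(u=5.0, z=10.0, z0=0.01, L=-50.0, k=0.35)
```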

  6. Sensitivity to cocaine in adult mice is due to interplay between genetic makeup, early environment and later experience.

    Science.gov (United States)

    Di Segni, Matteo; Andolina, Diego; Coassin, Alessandra; Accoto, Alessandra; Luchetti, Alessandra; Pascucci, Tiziana; Luzi, Carla; Lizzi, Anna Rita; D'Amato, Francesca R; Ventura, Rossella

    2017-10-01

    Although early aversive postnatal events are known to increase the risk of developing psychiatric disorders later in life, they rarely determine the nature and outcome of the psychopathology on their own, indicating that interaction with genetic factors is crucial for the expression of psychopathologies in adulthood. Moreover, it has been suggested that early life experiences could have negative consequences or confer adaptive value in different individuals. Here we suggest that resilience or vulnerability to adult cocaine sensitivity depends on a "triple interaction" between genetic makeup x early environment x later experience. We have recently shown that Repeated Cross Fostering (RCF; pups were fostered by four adoptive mothers from postnatal day 1 to postnatal day 4 and left with the last adoptive mother until weaning) affected the response to a negative experience in adulthood in opposite directions in two genotypes, leading DBA2/J, but not C57BL/6J mice, toward an "anhedonia-like" phenotype. Here we investigate whether exposure to a rewarding stimulus, instead of a negative one, in adulthood induces an opposite behavioral outcome. To test this hypothesis, we investigated the long-lasting effects of RCF on cocaine sensitivity in C57 and DBA female mice by evaluating conditioned place preference induced by different cocaine doses and the catecholamine prefrontal-accumbal response to cocaine, using a "dual probe" in vivo microdialysis procedure. Moreover, cocaine-induced c-Fos activity was assessed in different brain regions involved in the processing of rewarding stimuli. Finally, cocaine-induced spine changes were evaluated in the prefrontal-accumbal system. RCF experience strongly affected the behavioral, neurochemical and morphological responses to cocaine in adulthood in opposite directions in the two genotypes, increasing sensitivity to cocaine in C57 mice and reducing it in DBA mice.

  7. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    Science.gov (United States)

    Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2015-01-01

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, to use these data to generate rigorously validated QSAR models for skin sensitization, and to employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented in our predictive QSAR workflow, using the random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds, as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the ScoreCard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674
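
    The performance measures used above are straightforward to compute. The sketch below assumes, as the abstract suggests, that the positive/negative predicted rates are the correctly classified fractions of sensitizers and non-sensitizers, with CCR their mean (a balanced accuracy, robust to class imbalance); the toy labels are hypothetical.

```python
def classification_metrics(y_true, y_pred):
    """CCR and per-class predicted rates for a binary
    sensitizer (1) / non-sensitizer (0) classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of sensitizers caught
    specificity = tn / (tn + fp)   # fraction of non-sensitizers caught
    return {
        "CCR": 0.5 * (sensitivity + specificity),
        "positive_predicted_rate": sensitivity,
        "negative_predicted_rate": specificity,
    }

# toy external validation set: 4 sensitizers, 4 non-sensitizers
metrics = classification_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                                 [1, 1, 1, 0, 0, 0, 0, 1])
```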

  8. Sensitivity of Miniaturized Photo-elastic Transducer for Small Force Sensing

    Directory of Open Access Journals (Sweden)

    Naceur-Eddine KHELIFA

    2015-01-01

    Full Text Available The sensitivity of a force sensor based on the photo-elastic effect in a monolithic Nd-YAG laser depends strongly on the geometrical shape and dimensions of the laser medium. The theoretical predictions of sensitivity are in good agreement with the first results obtained with a plano-concave cylindrical crystal of (4×4) mm and some values reported by other groups. However, for small sizes of the laser sensor, the developed model predicts a sensitivity about 30% higher than the values given by available experiments. In this paper, we present experimental results obtained with a force sensor using a miniaturized monolithic cylindrical Nd-YAG laser of dimensions (2×3) mm with suitable optical coatings on its plane end faces. The new measurement of the sensitivity has allowed us to refine the theoretical model to treat photo-elastic force sensors with small dimensions.

  9. Designing Experiments to Discriminate Families of Logic Models.

    Science.gov (United States)

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if the data are measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subject to noise. As a result, the learning process yields a family of feasible logical networks rather than a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.

  10. Large scale FCI experiments in subassembly geometry. Test facility and model experiments

    International Nuclear Information System (INIS)

    Beutel, H.; Gast, K.

    A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of (a) underwater explosion experiments with full-size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture, and (b) large-scale fuel/coolant interaction experiments with up to 5 kg of molten UO2 interacting with liquid sodium at 300 °C to 600 °C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared to theoretical models under development at Karlsruhe. Commencement of the experiments is expected for the beginning of 1975.

  11. Design of experiments for identification of complex biochemical systems with applications to mitochondrial bioenergetics.

    Science.gov (United States)

    Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K

    2009-01-01

    Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria, with experimentally feasible perturbations to this system. The experimental perturbations and variables yielding the largest number of parameters above a 5% sensitivity level are presented and discussed.
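
    A normalized local sensitivity analysis of the sort described can be sketched with central finite differences; a coefficient whose magnitude exceeds 0.05 would clear the 5% threshold mentioned above. The Michaelis-Menten surrogate below is purely illustrative, not the authors' mitochondrial model.

```python
def local_sensitivities(model, params, rel_step=0.01):
    """Normalized local sensitivity S_i = (p_i / y) * dy/dp_i, with the
    derivative estimated by a central finite difference at the nominal point."""
    y0 = model(params)
    sens = {}
    for name, p in params.items():
        h = rel_step * p if p != 0 else rel_step
        up = dict(params); up[name] = p + h
        dn = dict(params); dn[name] = p - h
        dy_dp = (model(up) - model(dn)) / (2.0 * h)
        sens[name] = dy_dp * p / y0
    return sens

# toy surrogate for a measurable output (hypothetical, not the OxPhos model)
def toy_model(p):
    return p["vmax"] * p["s"] / (p["km"] + p["s"])

S = local_sensitivities(toy_model, {"vmax": 1.0, "km": 0.5, "s": 2.0})
```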

  12. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    Science.gov (United States)

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
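
    One concrete global sensitivity analysis is Morris elementary-effects screening, which ranks parameters by their mean absolute effect over many one-at-a-time trajectories (the abstract does not specify which GSA variant was used; the linear toy model below is a hypothetical stand-in for the hydrolysis simulation).

```python
import random

def morris_screening(model, bounds, n_trajectories=20, seed=1):
    """Morris elementary-effects screening: returns mu* (mean absolute
    elementary effect) per parameter, a global importance ranking."""
    rng = random.Random(seed)
    k = len(bounds)
    effects = [[] for _ in range(k)]
    scale = lambda u: [lo + v * (hi - lo) for v, (lo, hi) in zip(u, bounds)]
    for _ in range(n_trajectories):
        x = [rng.random() for _ in range(k)]          # point in the unit cube
        for i in rng.sample(range(k), k):             # perturb one at a time
            delta = 0.25 if x[i] <= 0.75 else -0.25
            x_new = x[:]
            x_new[i] = x[i] + delta
            ee = (model(scale(x_new)) - model(scale(x))) / delta
            effects[i].append(ee)
            x = x_new                                 # walk along a trajectory
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# toy efficiency surrogate: parameter 0 (e.g. cellulase half-life) dominates
mu_star = morris_screening(lambda p: 10.0 * p[0] + 0.1 * p[1],
                           [(0.0, 1.0), (0.0, 1.0)])
```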

  14. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.; Held, Isaac M.; Stenchikov, Georgiy L.; Zeng, Fanrong; Horowitz, Larry W.

    2014-01-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1%yr-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.
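
    The distinction above between a fast response component and the TCR can be reproduced qualitatively with a standard two-layer energy balance model (a common idealization in this literature; the parameter values below are illustrative assumptions, not GFDL CM2.1 output).

```python
def two_layer_ebm(forcing, years, lam=1.3, gamma=0.7, c=8.0, c_deep=100.0, dt=0.1):
    """Two-layer energy balance model (surface + deep ocean), Euler stepping.
    lam: radiative restoring and gamma: deep-ocean heat uptake (W m-2 K-1);
    c, c_deep: layer heat capacities (W yr m-2 K-1). Returns surface warming."""
    t = td = 0.0
    for n in range(int(years / dt)):
        f = forcing(n * dt)
        heat_uptake = gamma * (t - td)
        t += dt * (f - lam * t - heat_uptake) / c
        td += dt * heat_uptake / c_deep
    return t

F2X = 3.7                                        # W m-2 per CO2 doubling
# TCR analogue: warming at year 70 of a 1% yr-1 ramp reaching 2xCO2
tcr = two_layer_ebm(lambda yr: F2X * min(yr / 70.0, 1.0), 70.0)
# fast component: deep ocean acting as an infinite heat sink -> F / (lam + gamma)
fast = F2X / (1.3 + 0.7)
equilibrium = F2X / 1.3                          # full equilibrium warming
```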

  16. Sensitivity analysis for thermo-hydraulics model of a Westinghouse type PWR. Verification of the simulation results

    Energy Technology Data Exchange (ETDEWEB)

    Farahani, Aref Zarnooshe [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Nuclear Engineering, Science and Research Branch; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club

    2017-07-15

    Development of a steady-state model is the first step in nuclear safety analysis. The developed model should first be qualitatively analyzed; then a sensitivity analysis on the number of nodes is required for the models of the different systems, to ensure the reliability of the obtained results. This contribution aims to show, through sensitivity analysis, the independence of the modeling results from the number of nodes in a qualified MELCOR model of a Westinghouse-type pressurized water reactor plant. For this purpose, and to minimize user error, the nuclear analysis software SNAP is employed. Different sensitivity cases were developed by modifying the existing model and refining the nodes for the simulated systems, including the steam generators, the reactor coolant system, and the reactor core and its connecting flow paths. Comparing the obtained results with those of the original model shows no significant difference, which indicates that the model results are independent of further node refinement.

  17. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of an analytical unavailability model integrating T&M, ageing, and test strategy. ► Ageing-data uncertainty propagation on the system level assessed via Monte Carlo simulation. ► Uncertainty impact grows with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity-study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: Interest in the operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, which consider safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most reliability data collections, are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating the effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessing the propagation of component ageing parameter uncertainty at the system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
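
    Monte Carlo propagation of ageing-parameter uncertainty onto a test-interval-dependent unavailability can be sketched as follows. The linear ageing law, the lognormal error factor, and all numbers below are illustrative assumptions, not the paper's plant data; they reproduce the qualitative finding that the uncertainty impact grows as the surveillance test interval is extended.

```python
import math
import random
import statistics

def mean_unavailability(lam0, alpha, ti):
    """Mean standby unavailability over a surveillance test interval ti for a
    linearly ageing failure rate lam(t) = lam0 + alpha*t. Rare-event
    approximation: q(t) ~ lam0*t + alpha*t**2/2, averaged over (0, ti)."""
    return lam0 * ti / 2.0 + alpha * ti ** 2 / 6.0

def mc_uncertainty(lam0, alpha_median, alpha_ef, ti, n=20000, seed=2):
    """Propagate a lognormal uncertainty on the ageing rate alpha (error
    factor ef = 95th percentile / median) to the mean unavailability.
    Returns (mean, ~95th percentile) of the resulting distribution."""
    rng = random.Random(seed)
    sigma = math.log(alpha_ef) / 1.6449           # ln(ef) / z_0.95
    qs = [mean_unavailability(lam0,
                              alpha_median * math.exp(rng.gauss(0.0, sigma)),
                              ti)
          for _ in range(n)]
    return statistics.mean(qs), statistics.quantiles(qs, n=20)[18]

# extending the test interval (hours) inflates both the mean and the spread
for ti in (730.0, 1460.0, 2190.0):
    mean_q, p95_q = mc_uncertainty(1e-5, 1e-8, 3.0, ti)
```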

  18. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model.

    Science.gov (United States)

    Bolster, Carl H; Vadas, Peter A

    2013-07-01

    Models are often used to predict phosphorus (P) loss from agricultural fields. Although it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study we assessed the effect of model input error on predictions of annual P loss by the Annual P Loss Estimator (APLE) model. Our objectives were (i) to conduct a sensitivity analysis for all APLE input variables to determine which variables the model is most sensitive to, (ii) to determine whether the relatively easy-to-implement first-order approximation (FOA) method provides accurate estimates of model prediction uncertainties by comparing results with the more accurate Monte Carlo simulation (MCS) method, and (iii) to evaluate the performance of the APLE model against measured P loss data when uncertainties in model predictions and measured data are included. Our results showed that for low to moderate uncertainties in APLE input variables, the FOA method yields reasonable estimates of model prediction uncertainties, although for cases where manure solid content is between 14 and 17%, the FOA method may not be as accurate as the MCS method due to a discontinuity in the manure P loss component of APLE at a manure solid content of 15%. The estimated uncertainties in APLE predictions based on assumed errors in the input variables ranged from ±2 to 64% of the predicted value. Results from this study highlight the importance of including reasonable estimates of model uncertainty when using models to predict P loss.
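
    The FOA-versus-MCS comparison works as follows: FOA propagates input variances through first derivatives of the model, while MCS samples the inputs directly. The toy P-loss response below is hypothetical, chosen only because it is smooth, so the two methods should agree closely, as the abstract reports for low to moderate input uncertainty.

```python
import random
import statistics

def foa_variance(model, means, sds, rel_step=1e-4):
    """First-order approximation: Var(y) ~ sum_i (dy/dx_i)^2 * Var(x_i),
    with derivatives taken by central finite differences at the means."""
    var = 0.0
    for i, (m, s) in enumerate(zip(means, sds)):
        h = rel_step * (abs(m) if m else 1.0)
        up = list(means); up[i] = m + h
        dn = list(means); dn[i] = m - h
        var += ((model(up) - model(dn)) / (2.0 * h)) ** 2 * s ** 2
    return var

def mcs_variance(model, means, sds, n=50000, seed=3):
    """Monte Carlo simulation: sample independent Gaussian inputs directly."""
    rng = random.Random(seed)
    ys = [model([rng.gauss(m, s) for m, s in zip(means, sds)]) for _ in range(n)]
    return statistics.variance(ys)

# smooth hypothetical P-loss response; FOA and MCS agree for small input error
model = lambda x: 0.8 * x[0] + 0.3 * x[0] * x[1]
v_foa = foa_variance(model, [2.0, 1.0], [0.1, 0.05])
v_mcs = mcs_variance(model, [2.0, 1.0], [0.1, 0.05])
```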

  19. Modeling the radical chemistry in an oxidation flow reactor: radical formation and recycling, sensitivities, and the OH exposure estimation equation.

    Science.gov (United States)

    Li, Rui; Palm, Brett B; Ortega, Amber M; Hlywiak, James; Hu, Weiwei; Peng, Zhe; Day, Douglas A; Knote, Christoph; Brune, William H; de Gouw, Joost A; Jimenez, Jose L

    2015-05-14

    Oxidation flow reactors (OFRs) containing low-pressure mercury (Hg) lamps that emit UV light at both 185 and 254 nm ("OFR185") to generate OH radicals and O3 are used in many areas of atmospheric science and in pollution control devices. The widely used potential aerosol mass (PAM) OFR was designed for studies on the formation and oxidation of secondary organic aerosols (SOA), allowing for a wide range of oxidant exposures and short experiment duration with reduced wall loss effects. Although fundamental photochemical and kinetic data applicable to these reactors are available, the radical chemistry and its sensitivities have not been modeled in detail before; thus, experimental verification of our understanding of this chemistry has been very limited. To better understand the chemistry in the OFR185, a model has been developed to simulate the formation, recycling, and destruction of radicals and to allow the quantification of OH exposure (OHexp) in the reactor and its sensitivities. The model outputs of OHexp were evaluated against laboratory calibration experiments by estimating OHexp from trace gas removal and were shown to agree within a factor of 2. A sensitivity study was performed to characterize the dependence of the OHexp, HO2/OH ratio, and O3 and H2O2 output concentrations on reactor parameters. OHexp is strongly affected by the UV photon flux, absolute humidity, reactor residence time, and the OH reactivity (OHR) of the sampled air, and more weakly by pressure and temperature. OHexp can be strongly suppressed by high OHR, especially under low UV light conditions. An OHexp estimation equation as a function of easily measurable quantities was shown to reproduce model results within 10% (average absolute value of the relative errors) over the whole operating range of the reactor. OHexp from the estimation equation was compared with measurements in several field campaigns and shows agreement within a factor of 3. The improved understanding of the OFR185 and

  20. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; Wang, Jing; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney–Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney–Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney–Rivlin material model along the left–right, anterior–posterior, and superior–inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. (paper)
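
    The two hyperelastic energy functions compared above have standard textbook forms, sketched here for an isochoric uniaxial stretch (the material constants are illustrative, not the patient-specific values estimated in the paper, and a full FEA would differentiate these energies to obtain stresses).

```python
import math

def invariants(stretches):
    """Principal invariants I1, I2 and volume ratio J from principal stretches."""
    l1, l2, l3 = stretches
    j = l1 * l2 * l3
    i1 = l1 ** 2 + l2 ** 2 + l3 ** 2
    i2 = (l1 * l2) ** 2 + (l2 * l3) ** 2 + (l1 * l3) ** 2
    return i1, i2, j

def neo_hookean(stretches, mu=1.0, lam=10.0):
    """Compressible neo-Hookean energy: W = mu/2 (I1-3) - mu ln J + lam/2 (ln J)^2."""
    i1, _, j = invariants(stretches)
    return 0.5 * mu * (i1 - 3.0) - mu * math.log(j) + 0.5 * lam * math.log(j) ** 2

def mooney_rivlin(stretches, c1=0.5, c2=0.25):
    """Uncoupled Mooney-Rivlin deviatoric energy on the isochoric invariants."""
    i1, i2, j = invariants(stretches)
    i1_bar = i1 * j ** (-2.0 / 3.0)
    i2_bar = i2 * j ** (-4.0 / 3.0)
    return c1 * (i1_bar - 3.0) + c2 * (i2_bar - 3.0)

# isochoric uniaxial stretch of 20%: (1.2, 1/sqrt(1.2), 1/sqrt(1.2)), J = 1
s = (1.2, 1.2 ** -0.5, 1.2 ** -0.5)
w_nh, w_mr = neo_hookean(s), mooney_rivlin(s)
```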

  1. Dark matter at the SHiP experiment

    International Nuclear Information System (INIS)

    Timiryasov, Inar

    2016-01-01

    We study the prospects of dark matter searches in the SHiP experiment. SHiP (Search for Hidden Particles) is the recently proposed fixed-target experiment which will exploit the high-intensity beam of 400 GeV protons from the CERN SPS. In addition to the hidden sector detector, SHiP will be equipped with the ν_τ detector, which presumably would be sensitive to dark matter particles. We describe the appropriate production and detection channels and estimate SHiP's sensitivity to a scalar dark matter candidate coupled to the Standard Model through a vector mediator.

  2. Adverse social experiences in adolescent rats result in enduring effects on social competence, pain sensitivity and endocannabinoid signaling

    Directory of Open Access Journals (Sweden)

    Peggy Schneider

    2016-10-01

    Full Text Available Social affiliation is essential for many species and gains significant importance during adolescence. Disturbances in social affiliation, in particular social rejection experiences during adolescence, affect an individual’s well-being and are involved in the emergence of psychiatric disorders. The underlying mechanisms are still unknown, partly because of a lack of valid animal models. By using a novel animal model for social peer-rejection, which compromises adolescent rats in their ability to appropriately engage in playful activities, here we report on persistent impairments in social behavior and dysregulations in the endocannabinoid system. From postnatal day (pd) 21 to pd 50, adolescent female Wistar rats were either reared with same-strain partners (control) or within a group of Fischer 344 rats (inadequate social rearing, ISR), previously shown to serve as inadequate play partners for the Wistar strain. Adult ISR animals showed pronounced deficits in social interaction, social memory, processing of socially transmitted information, and decreased pain sensitivity. Molecular analysis revealed increased CB1 receptor protein levels and CP55,940-stimulated [35S]GTPγS binding activity specifically in the amygdala and thalamus in previously peer-rejected rats. Along with these changes, increased levels of the endocannabinoid anandamide and a corresponding decrease of its degrading enzyme, fatty acid amide hydrolase, were seen in the amygdala. Our data indicate lasting consequences in social behavior and pain sensitivity following peer-rejection in adolescent female rats. These behavioral impairments are accompanied by persistent alterations in CB1 receptor signaling. Finally, we provide a novel translational approach to characterize neurobiological processes underlying social peer-rejection in adolescence.

  3. 4D-Fingerprint Categorical QSAR Models for Skin Sensitization Based on Classification Local Lymph Node Assay Measures

    Science.gov (United States)

    Li, Yi; Tseng, Yufeng J.; Pan, Dahua; Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Hopfinger, Anton J.

    2008-01-01

    Currently, the only validated methods to identify skin sensitization effects are in vivo models, such as the Local Lymph Node Assay (LLNA) and guinea pig studies. There is a tremendous need, in particular due to novel legislation, to develop animal alternatives, e.g., Quantitative Structure-Activity Relationship (QSAR) models. Here, QSAR models for skin sensitization using LLNA data have been constructed. The descriptors used to generate these models are derived from the 4D-molecular similarity paradigm and are referred to as universal 4D-fingerprints. A training set of 132 structurally diverse compounds and a test set of 15 structurally diverse compounds were used in this study. The statistical methodologies used to build the models are logistic regression (LR) and partial least squares coupled logistic regression (PLS-LR), which prove to be effective tools for studying skin sensitization measures expressed in the two categorical terms of sensitizer and non-sensitizer. QSAR models with low values of the Hosmer-Lemeshow goodness-of-fit statistic, χ²HL, are significant and predictive. For the training set, the cross-validated prediction accuracy of the logistic regression models ranges from 77.3% to 78.0%, while that of PLS-logistic regression models ranges from 87.1% to 89.4%. For the test set, the prediction accuracy of logistic regression models ranges from 80.0% to 86.7%, while that of PLS-logistic regression models ranges from 73.3% to 80.0%. The QSAR models are made up of 4D-fingerprints related to aromatic atoms, hydrogen bond acceptors and negatively partially charged atoms. PMID:17226934
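
    Plain logistic regression, one of the two statistical methods used above, can be sketched with batch gradient descent (this is not the PLS-coupled variant, and the two-descriptor toy data below are hypothetical stand-ins for the 4D-fingerprints).

```python
import math

def train_logistic(x, y, lr=0.5, epochs=2000):
    """Logistic regression by batch gradient descent:
    P(sensitizer | x) = sigmoid(w.x + b)."""
    n_feat = len(x[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(x, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log-loss
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(x) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(x)
    return w, b

def predict(w, b, xi):
    """Classify as sensitizer (1) when the decision function is positive."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# hypothetical two-descriptor data: two non-sensitizers, two sensitizers
x = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train_logistic(x, y)
```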

  4. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  5. Predicted infiltration for sodic/saline soils from reclaimed coastal areas: sensitivity to model parameters.

    Science.gov (United States)

    Liu, Dongdong; She, Dongli; Yu, Shuang'en; Shao, Guangcheng; Chen, Dan

    2014-01-01

    This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data was collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm³. A 1D-infiltration model was created using a finite difference method and its sensitivity to hydraulic-related parameters was tested. The model simulated the measured data well. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ₀ was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity, since the latter was also affected by the physical-chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.
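The abstract does not reproduce the finite-difference scheme itself, so the sketch below solves only a stripped-down diffusion form of the 1D infiltration equation (constant diffusivity, gravity ignored, hypothetical parameter values) to illustrate one behaviour reported above: cumulative infiltration falls as the initial water content θ₀ rises.

```python
def infiltrate(theta0, theta_sat, D, dx, dt, nsteps, nz):
    """Explicit finite-difference sketch of 1D infiltration in
    diffusion form, d(theta)/dt = D * d2(theta)/dz2, with the surface
    node held at saturation (ponded boundary) and the bottom node
    fixed.  Returns the moisture profile and the cumulative
    infiltration (gain in stored water over the column)."""
    assert D * dt / dx ** 2 <= 0.5, "explicit scheme would be unstable"
    theta = [theta_sat] + [theta0] * (nz - 1)
    for _ in range(nsteps):
        new = theta[:]
        for i in range(1, nz - 1):           # update interior nodes only
            new[i] = theta[i] + D * dt / dx ** 2 * (
                theta[i + 1] - 2.0 * theta[i] + theta[i - 1])
        theta = new
    stored0 = (theta_sat + theta0 * (nz - 1)) * dx
    return theta, sum(theta) * dx - stored0
```

Raising theta0 with everything else fixed shrinks the driving moisture gradient and hence the cumulative infiltration, consistent with the simulation result in the abstract.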

  6. Predicted Infiltration for Sodic/Saline Soils from Reclaimed Coastal Areas: Sensitivity to Model Parameters

    Directory of Open Access Journals (Sweden)

    Dongdong Liu

    2014-01-01

    Full Text Available This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data was collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm³. A 1D-infiltration model was created using a finite difference method and its sensitivity to hydraulic-related parameters was tested. The model simulated the measured data well. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ₀ was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity, since the latter was also affected by the physical-chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.

  7. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that most influence the model results, a parametric sensitivity analysis was performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results reveal high uncertainties in these parameters that can dramatically skew the results of the model, and show the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
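The "One-Factor-At-A-Time" screening described above is straightforward to sketch: perturb each input in turn, hold the others fixed, and normalize the response. The model and parameter names in the test are toy placeholders, not the authors' farm model:

```python
def oat_screen(model, base, deltas):
    """One-Factor-At-A-Time screening.  Perturb each input in turn
    while holding the others at their base values, and return the
    normalized sensitivity index  S_i = |dY / Y| / |dX_i / X_i|
    for each input, so factors can be ranked by influence."""
    y0 = model(base)
    indices = {}
    for name, dx in deltas.items():
        pert = dict(base)                     # all other factors fixed
        pert[name] = base[name] + dx
        indices[name] = abs((model(pert) - y0) / y0) / abs(dx / base[name])
    return indices
```

Factors whose index exceeds some threshold (the abstract reports values of 0.22 to 1.28 for the retained six) are flagged as needing careful estimation.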

  8. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
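In the simplest equal-variance Gaussian case, an ideal-observer analysis like the one above reduces to a detectability index d′ and a predicted percent correct. The sketch shows only that reduction (the spiking-population simulation itself is not reproduced here, and the response statistics are assumed Gaussian):

```python
import math

def dprime(mu_signal, mu_noise, sigma):
    """Ideal-observer detectability of a stimulus when stimulus-present
    and stimulus-absent responses are Gaussian with a common standard
    deviation."""
    return (mu_signal - mu_noise) / sigma

def percent_correct_2afc(d):
    """Expected proportion correct in a two-alternative forced-choice
    detection task: Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1.0 + math.erf(d / 2.0))
```

d′ = 0 gives chance performance (0.5); larger response separations drive percent correct toward 1, which is how simulated population responses map onto predicted behavior.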

  9. Technical Note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

    Directory of Open Access Journals (Sweden)

    J. D. Herman

    2013-07-01

    Full Text Available The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
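The saving reported above comes from Morris's one-at-a-time trajectory design: r trajectories of k + 1 model runs each, versus the enormous sample sizes Sobol' variance decomposition needs. A self-contained sketch of the elementary-effects estimator μ* (unit-hypercube sampling with a fixed step Δ; both choices are simplifications of the full method):

```python
import random

def morris_mu_star(model, bounds, r=20, delta=0.5, seed=1):
    """Method of Morris screening.  For each of r random trajectories,
    perturb one factor at a time by `delta` in the unit hypercube and
    accumulate the absolute elementary effect |f(x + delta*e_i) - f(x)| / delta.
    Returns mu* (mean absolute elementary effect) per factor; the
    ranking of mu* identifies the most and least sensitive parameters
    at a cost of r * (k + 1) model evaluations."""
    rng = random.Random(seed)
    k = len(bounds)
    scale = lambda x: [lo + xi * (hi - lo) for xi, (lo, hi) in zip(x, bounds)]
    sums = [0.0] * k
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y = model(scale(x))
        for i in rng.sample(range(k), k):    # random one-at-a-time order
            x[i] += delta                    # stays inside [0, 1]
            y_new = model(scale(x))
            sums[i] += abs(y_new - y) / delta
            y = y_new
    return [s / r for s in sums]
```

For k parameters, r is typically tens rather than thousands, which is exactly the economy the benchmark above exploits.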

  10. TF insert experiment log book. 2nd Experiment of CS model coil

    International Nuclear Information System (INIS)

    Sugimoto, Makoto; Isono, Takaaki; Matsui, Kunihiro

    2001-12-01

    The cool down of the CS model coil and TF insert started on August 20, 2001. It took almost one month, and coil charging began on September 17, 2001. The charge test of the TF insert and CS model coil was completed on October 19, 2001. In this campaign, 88 shots were taken in total, and the size of the data file in the DAS (Data Acquisition System) was about 4 GB. This report is a database that consists of the log list and the log sheets of every shot. It is the experiment logbook for the 2nd charge-test experiment of the CS model coil and TF insert. (author)

  11. Modelling of laboratory high-pressure infiltration experiments

    International Nuclear Information System (INIS)

    Smith, P.A.

    1992-02-01

    This report describes the modelling of break-through curves from a series of two-tracer dynamic infiltration experiments, which are intended to complement larger scale experiments at the Nagra Grimsel Test Site. The tracers are ⁸²Br, which is expected to be non-sorbing, and ²⁴Na, which is weakly sorbing. The ²⁴Na concentration is well below the natural Na concentration in the infiltration fluid, so that sorption on the rock is governed by isotopic exchange, exhibiting a linear isotherm. The rock specimens are sub-samples (cores) of granodiorite from the Grimsel Test Site, each containing a distinct shear zone. Best-fits to the break-through curves using single-porosity and dual-porosity transport models are compared and several physical parameters are extracted. It is shown that the dual-porosity model is required in order to reproduce the tailing part of the break-through curves for the non-sorbing tracer. The single-porosity model is sufficient to reproduce the break-through curves for the sorbing tracer within the estimated experimental errors. Extracted Kd values are shown to agree well with a field rock-water interaction experiment and in situ migration experiments. Static, laboratory batch-sorption experiments give a larger Kd, but this difference could be explained by the larger surface area available for sorption in the artificially crushed samples used in the laboratory and by a slightly different water chemistry. (author) 13 figs., tabs., 19 refs

  12. High temperature shock tube experiments and kinetic modeling study of diisopropyl ketone ignition and pyrolysis

    KAUST Repository

    Barari, Ghazal; Pryor, Owen; Koroglu, Batikan; Sarathy, Mani; Masunov, Artëm E.; Vasu, Subith S.

    2017-01-01

    Diisopropyl ketone (DIPK) is a promising biofuel candidate, which is produced using endophytic fungal conversion. In this work, a high temperature detailed combustion kinetic model for DIPK was developed using the reaction class approach. DIPK ignition and pyrolysis experiments were performed using the UCF shock tube. The shock tube oxidation experiments were conducted between 1093 K and 1630 K for different reactant compositions, equivalence ratios (φ = 0.5–2.0), and pressures (1–6 atm). In addition, methane concentration time-histories were measured during 2% DIPK pyrolysis in argon using cw laser absorption near 3400 nm at temperatures between 1300 and 1400 K near 1 atm. To the best of our knowledge, current ignition delay times (above 1050 K) and methane time histories are the first such experiments performed in DIPK at high temperatures. Present data were used as validation targets for the new kinetic model and simulation results showed fair agreement compared to the experiments. The reaction rates corresponding to the main consumption pathways of DIPK were found to have high sensitivity in controlling the reactivity, so these were adjusted to attain better agreement between the simulation and experimental data. A correlation was developed based on the experimental data to predict the ignition delay times using the temperature, pressure, fuel concentration and oxygen concentration.
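Ignition-delay correlations of the kind mentioned in the last sentence are typically Arrhenius-type power laws. The sketch below shows only the functional form; the coefficient values are hypothetical placeholders, since the fitted DIPK constants are not given in the abstract:

```python
import math

def ignition_delay(T, P, x_fuel, x_o2,
                   A=1.0e-4, Ea=30000.0, a=-0.5, b=0.2, c=-0.6):
    """Empirical ignition-delay correlation of the usual form
        tau = A * P**a * x_fuel**b * x_o2**c * exp(Ea / (R * T)),
    with T in K, P in atm, mole fractions x, and Ea in cal/mol.
    All coefficients here are illustrative, not the paper's fit."""
    R = 1.987  # gas constant, cal/(mol K)
    return A * P ** a * x_fuel ** b * x_o2 ** c * math.exp(Ea / (R * T))
```

With a positive activation energy the form reproduces the expected trends: delay times shorten as temperature rises, and (for a < 0) as pressure rises.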

  13. High temperature shock tube experiments and kinetic modeling study of diisopropyl ketone ignition and pyrolysis

    KAUST Repository

    Barari, Ghazal

    2017-03-10

    Diisopropyl ketone (DIPK) is a promising biofuel candidate, which is produced using endophytic fungal conversion. In this work, a high temperature detailed combustion kinetic model for DIPK was developed using the reaction class approach. DIPK ignition and pyrolysis experiments were performed using the UCF shock tube. The shock tube oxidation experiments were conducted between 1093 K and 1630 K for different reactant compositions, equivalence ratios (φ = 0.5–2.0), and pressures (1–6 atm). In addition, methane concentration time-histories were measured during 2% DIPK pyrolysis in argon using cw laser absorption near 3400 nm at temperatures between 1300 and 1400 K near 1 atm. To the best of our knowledge, current ignition delay times (above 1050 K) and methane time histories are the first such experiments performed in DIPK at high temperatures. Present data were used as validation targets for the new kinetic model and simulation results showed fair agreement compared to the experiments. The reaction rates corresponding to the main consumption pathways of DIPK were found to have high sensitivity in controlling the reactivity, so these were adjusted to attain better agreement between the simulation and experimental data. A correlation was developed based on the experimental data to predict the ignition delay times using the temperature, pressure, fuel concentration and oxygen concentration.

  14. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  15. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  16. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    Science.gov (United States)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  17. In Situ Experiment and Numerical Model Validation of a Borehole Heat Exchanger in Shallow Hard Crystalline Rock

    Directory of Open Access Journals (Sweden)

    Mateusz Janiszewski

    2018-04-01

    Full Text Available Accurate and fast numerical modelling of the borehole heat exchanger (BHE is required for simulation of long-term thermal energy storage in rocks using boreholes. The goal of this study was to conduct an in situ experiment to validate the proposed numerical modelling approach. In the experiment, hot water was circulated for 21 days through a single U-tube BHE installed in an underground research tunnel located at a shallow depth in crystalline rock. The results of the simulations using the proposed model were validated against the measurements. The numerical model simulated the BHE’s behaviour accurately and compared well with two other modelling approaches from the literature. The model is capable of replicating the complex geometrical arrangement of the BHE and is considered to be more appropriate for simulations of BHE systems with complex geometries. The results of the sensitivity analysis of the proposed model have shown that low thermal conductivity, high density, and high heat capacity of rock are essential for maximising the storage efficiency of a borehole thermal energy storage system. Other characteristics of BHEs, such as a high thermal conductivity of the grout, a large radius of the pipe, and a large distance between the pipes, are also preferred for maximising efficiency.

  18. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

    This study proposes a mathematical model of Anthroponotic visceral leishmaniasis epidemic with saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R₀ is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R₀, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. Disease-free state is obtained with the help of control strategies. The threshold condition for globally asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
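Sensitivity analysis of a reproduction number is commonly carried out with the normalized forward sensitivity index, which can be estimated numerically for any R₀ expression. The sketch below is generic; the leishmaniasis-specific R₀ from the paper is not reproduced, and the two-parameter R₀ in the usage is a textbook stand-in:

```python
def sensitivity_index(R0, params, name, h=1e-6):
    """Normalized forward sensitivity index of the reproduction number
    with respect to one parameter,
        Upsilon_p = (dR0/dp) * (p / R0),
    estimated with a central finite difference.  The sign says whether
    R0 grows or shrinks with the parameter, and |Upsilon_p| ranks the
    parameters most worth targeting with control strategies."""
    base = R0(params)
    p = dict(params)
    v = p[name]
    p[name] = v * (1.0 + h)
    up = R0(p)
    p[name] = v * (1.0 - h)
    down = R0(p)
    return (up - down) / (2.0 * v * h) * (v / base)
```

For the textbook R₀ = β/γ the indices are +1 for β and −1 for γ: a 1% change in either parameter moves R₀ by 1%, in opposite directions.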

  19. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring complex and emergent behaviour of biological systems, and they give accurate results, but at the cost of substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  20. Predicting community sensitivity to ozone, using Ellenberg Indicator values

    Energy Technology Data Exchange (ETDEWEB)

    Jones, M. Laurence M. [Centre for Ecology and Hydrology Bangor, Orton Building, Deiniol Road, Bangor, Gwynedd LL57 2UP (United Kingdom)]. E-mail: lj@ceh.ac.uk; Hayes, Felicity [Centre for Ecology and Hydrology Bangor, Orton Building, Deiniol Road, Bangor, Gwynedd LL57 2UP (United Kingdom)]. E-mail: fhay@ceh.ac.uk; Mills, Gina [Centre for Ecology and Hydrology Bangor, Orton Building, Deiniol Road, Bangor, Gwynedd LL57 2UP (United Kingdom)]. E-mail: gmi@ceh.ac.uk; Sparks, Tim H. [Centre for Ecology and Hydrology Monks Wood, Abbots Ripton, Huntingdon, Cambridgeshire PE28 2LS (United Kingdom)]. E-mail: ths@ceh.ac.uk; Fuhrer, Juerg [Swiss Federal Research Station for Agroecology and Agriculture (FAL), Air Pollution/Climate Group, Reckenholzstrasse 191, CH-8046 Zurich (Switzerland)]. E-mail: juerg.fuhrer@fal.admin.ch

    2007-04-15

    This paper develops a regression-based model for predicting changes in biomass of individual species exposed to ozone (RSp), based on their Ellenberg Indicator values. The equation (RSp = 1.805 - 0.118*Light - 0.135*Salinity) underpredicts observed sensitivity but has the advantage of widespread applicability to almost 3000 European species. The model was applied to grassland communities to develop two further predictive tools. The first tool, percentage change in biomass (ORI%), was tested on data from a field-based ozone exposure experiment and predicted a 27% decrease in biomass over 5 years compared with an observed decrease of 23%. The second tool, an index of community sensitivity to ozone (CORI), was applied to 48 grassland communities and suggests that community sensitivity to ozone is primarily species-driven. A repeat-sampling routine showed that nine species were the minimum requirement to estimate CORI within 5%.
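The regression given in the abstract can be applied directly to any species with known Ellenberg values. In the sketch below, the species-level function follows the reported equation exactly, while the community index is a plain average and so only approximates CORI (whose exact weighting is not stated in the abstract); the example indicator values are hypothetical:

```python
def relative_sensitivity(light, salinity):
    """Predicted relative change in biomass under ozone exposure (RSp)
    from a species' Ellenberg Indicator values, using the regression
    reported in the abstract:
        RSp = 1.805 - 0.118 * Light - 0.135 * Salinity."""
    return 1.805 - 0.118 * light - 0.135 * salinity

def community_index(species):
    """Community-level ozone sensitivity as the mean RSp over the
    component species -- a simplification of the paper's CORI."""
    return sum(relative_sensitivity(l, s) for l, s in species) / len(species)
```

High-light, saline-habitat species get lower RSp, so a community dominated by them scores as less ozone-sensitive, which is the species-driven behaviour the abstract describes.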

  1. Managing a sensitive project

    International Nuclear Information System (INIS)

    Etcheber, Pascal

    1998-01-01

    A 'sensitive' project needs to be managed differently from a 'normal' project. This statement might seem simple enough. However, it is not a simple task to prove it in twenty minutes. This paper is an attempt to share with the audience some of the experiences the company has had dealing with sensitive projects. It describes what a sensitive project is, though of all people, the 'nuclear' should know. Then the common mistakes that are made are described, in the hope that some personal experiences will be recognised. Finally the company's strategy is shown: how we foster third-party support, and the main tools to be used. Ultimately, success is ensured by having a sufficient quantity of allies. A sensitive project does not die because it has too many opponents, but because it has too few allies. Finding and helping allies to act is the thrust of our activity. It enables sensitive projects which deserve to succeed to do so, where traditional management fails miserably.

  2. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in tempe...

  3. A new approach and computational algorithm for sensitivity/uncertainty analysis for SED and SAD with applications to beryllium integral experiments

    International Nuclear Information System (INIS)

    Song, P.M.; Youssef, M.Z.; Abdou, M.A.

    1993-01-01

    A new approach for treating the sensitivity and uncertainty in the secondary energy distribution (SED) and the secondary angular distribution (SAD) has been developed, and the existing two-dimensional sensitivity/uncertainty analysis code, FORSS, was expanded to incorporate the new approach. The calculational algorithm was applied to the ⁹Be(n,2n) cross section to study the effect of the current uncertainties in the SED and SAD of neutrons emitted from this reaction on the prediction accuracy of the tritium production rate from ⁶Li (T₆) and ⁷Li (T₇) in an engineering-oriented fusion integral experiment of the US Department of Energy/Japan Atomic Energy Research Institute Collaborative Program on Fusion Neutronics in which beryllium was used as a neutron multiplier. In addition, the analysis was extended to include the uncertainties in the integrated smooth cross sections of beryllium and other materials that constituted the test assembly used in the experiment. This comprehensive two-dimensional cross-section sensitivity/uncertainty analysis aimed at identifying the sources of discrepancies between calculated and measured values for T₆ and T₇

  4. An investigation of the sensitivity of a land surface model to climate change using a reduced form model

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, A.H.; McIlwaine, S. [PAOS/CIRES, Univ. of Colorado, Boulder, CO (United States); Beringer, J. [Inst. of Arctic Biology, Univ. of Alaska, Fairbanks (United States); Bonan, G.B. [National Center for Atmospheric Research, Boulder, CO (United States)

    2001-05-01

    In an illustration of a model evaluation methodology, a multivariate reduced form model is developed to evaluate the sensitivity of a land surface model to changes in atmospheric forcing. The reduced form model is constructed in terms of a set of ten integrative response metrics, including the timing of spring snow melt, sensible and latent heat fluxes in summer, and soil temperature. The responses are evaluated as a function of a selected set of six atmospheric forcing perturbations which are varied simultaneously, and hence each may be thought of as a six-dimensional response surface. The sensitivities of the land surface model are interdependent and in some cases illustrate a physically plausible feedback process. The important predictors of land surface response in a changing climate are the atmospheric temperature and downwelling longwave radiation. Scenarios characterized by warming and drying produce a large relative response compared to warm, moist scenarios. The insensitivity of the model to increases in precipitation and atmospheric humidity is expected to change in applications to coupled models, since these parameters are also strongly implicated, through the representation of clouds, in the simulation of both longwave and shortwave radiation. (orig.)

  5. Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet.

    Science.gov (United States)

    Camps-Bossacoma, Mariona; Pérez-Cano, Francisco J; Franch, Àngels; Castell, Margarida

    2017-01-01

    Increasing evidence is emerging suggesting a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or 10% cocoa diet. Faecal microbiota was analysed through a metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. A decrease in bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.

  6. Forecasting hypoxia in the Chesapeake Bay and Gulf of Mexico: model accuracy, precision, and sensitivity to ecosystem change

    International Nuclear Information System (INIS)

    Evans, Mary Anne; Scavia, Donald

    2011-01-01

    Increasing use of ecological models for management and policy requires robust evaluation of model precision, accuracy, and sensitivity to ecosystem change. We conducted such an evaluation of hypoxia models for the northern Gulf of Mexico and Chesapeake Bay using hindcasts of historical data, comparing several approaches to model calibration. For both systems we find that model sensitivity and precision can be optimized and model accuracy maintained within reasonable bounds by calibrating the model to relatively short, recent 3 year datasets. Model accuracy was higher for Chesapeake Bay than for the Gulf of Mexico, potentially indicating the greater importance of unmodeled processes in the latter system. Retrospective analyses demonstrate both directional and variable changes in sensitivity of hypoxia to nutrient loads.

  7. A survey of cross-section sensitivity analysis as applied to radiation shielding

    International Nuclear Information System (INIS)

    Goldstein, H.

    1977-01-01

    Cross section sensitivity studies revolve around finding the change in the value of an integral quantity, e.g. transmitted dose, for a given change in one of the cross sections. A review is given of the principal methodologies for obtaining the sensitivity profiles: principally direct calculations with altered cross sections, and linear perturbation theory. Some of the varied applications of cross section sensitivity analysis are described, including the practice, of questionable value, of adjusting input cross section data sets so as to provide agreement with integral experiments. Finally, a plea is made for using cross section sensitivity analysis as a powerful tool for analysing the transport mechanisms of particles in radiation shields and for constructing models of how cross section phenomena affect the transport. Cross section sensitivities in the shielding area have proved to be highly problem-dependent. Without the understanding afforded by such models, it is impossible to extrapolate the conclusions of cross section sensitivity analysis beyond the narrow limits of the specific situations examined in detail. Some of the elements that might be of use in developing the qualitative models are presented. (orig.)

  8. High degree gravitational sensitivity from Mars orbiters for the GMM-1 gravity model

    Science.gov (United States)

    Lerch, F. J.; Smith, D. E.; Chan, J. C.; Patel, G. B.; Chinn, D. S.

    1994-01-01

    Orbital sensitivity of the gravity field for high degree terms (greater than 30) is analyzed on satellites employed in a Goddard Mars Model GMM-1, complete in spherical harmonics through degree and order 50. The model is obtained from S-band Doppler data on Mariner 9 (M9), Viking Orbiter 1 (VO1), and Viking Orbiter 2 (VO2) spacecraft, which were tracked by the NASA Deep Space Network on seven different highly eccentric orbits. The main sensitivity of the high degree terms is obtained from the VO1 and VO2 low orbits (300 km periapsis altitude), where significant spectral sensitivity is seen for all degrees out through degree 50. The velocity perturbations show a dominant effect at periapsis and significant effects out beyond the semi-latus rectum covering over 180 degrees of the orbital groundtrack for the low altitude orbits. Because of the wide band of periapsis motion covering nearly 180 degrees in argument of periapsis (ω) and ±39 degrees in latitude, the VO1 300 km periapsis altitude orbit with inclination of 39 degrees gave the dominant sensitivity in the GMM-1 solution for the high degree terms. Although the VO2 low periapsis orbit has a smaller band of periapsis mapping coverage, it strongly complements the VO1 orbit sensitivity for the GMM-1 solution with Doppler tracking coverage over a different inclination of 80 degrees.

  9. Gamma ray sensitivity of superheated liquid

    International Nuclear Information System (INIS)

    Sawamura, Teruko; Sugiyama, Noriyuki; Narita, Masakuni

    2000-01-01

    The superheated drop detector (SDD) is composed of droplets of a sensitive liquid with a low boiling point dispersed throughout a supporting medium. The SDD has been used mainly for neutron dosimetry and recently also for gamma rays. While for neutrons the conditions for bubble formation have been discussed, there has been little work for gamma rays. We investigated the conditions for low-LET radiation, such as protons and gamma rays, and identified octafluoropropane (C₃F₈, boiling point -36.7 °C) as an advantageous liquid. The bubble formation condition is given by the energy density imparted from the charged particle to the sensitive liquid. The energy density requirement means that the energy must be deposited over a definite region length in order to produce the vapor nucleus that grows into a visible bubble. Recently, for γ-rays, Evans and Wang proposed a model in which vaporization is triggered by the energy deposited in a 'cluster' of many events in close proximity in the superheated liquid. Measurements of the γ-ray sensitivity have not been carried out extensively, and therefore neither the effective length nor the cluster model is well established. In this study the detection sensitivity was evaluated by measuring the lifetime of a liquid drop exposed to γ-rays. We developed a drop-trapping device, in which a single drop of test liquid was trapped and decompressed by an acoustic standing-wave field. When a liquid drop of volume V [cm³] is exposed to a γ-ray flux φ_γ [cm⁻² s⁻¹], the average evaporation rate λ(T, P) [s⁻¹] (T: temperature, P: decompressed pressure) is expressed as λ(T, P) = K_γ V φ_γ (1), where K_γ [cm⁻¹] is the γ-ray detection sensitivity per unit volume of the sensitive liquid and unit fluence. If the average rate of spontaneous evaporation is λ₀(T, P), then the probability distribution of the lifetime t, the probability that t > τ, is expressed by X(τ) = exp{-(λ + λ₀)τ}.
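Equation (1) can be inverted to extract K_γ from mean-lifetime measurements taken with and without the γ-ray flux. The sketch below is illustrative only; all numbers (lifetimes, drop volume, flux) are made up.

```python
def gamma_sensitivity(mean_life_irradiated, mean_life_spontaneous, volume, flux):
    """
    Invert Eq. (1), lambda = K_gamma * V * phi_gamma, using mean drop lifetimes:
        lambda + lambda_0 = 1 / <t_irradiated>,   lambda_0 = 1 / <t_spontaneous>
    Returns K_gamma in cm^-1 (per unit volume and unit fluence).
    """
    total_rate = 1.0 / mean_life_irradiated         # lambda + lambda_0  [1/s]
    spontaneous_rate = 1.0 / mean_life_spontaneous  # lambda_0           [1/s]
    return (total_rate - spontaneous_rate) / (volume * flux)

# Made-up numbers: 50 s mean life under flux, 200 s spontaneous,
# 1e-6 cm^3 drop, 1e4 gammas/cm^2/s
K_gamma = gamma_sensitivity(50.0, 200.0, 1e-6, 1e4)   # -> 1.5 cm^-1
```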

  10. Geostatistical and adjoint sensitivity techniques applied to a conceptual model of ground-water flow in the Paradox Basin, Utah

    International Nuclear Information System (INIS)

    Metcalfe, D.E.; Campbell, J.E.; RamaRao, B.S.; Harper, W.V.; Battelle Project Management Div., Columbus, OH)

    1985-01-01

    Sensitivity and uncertainty analysis are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of geostatistical and adjoint sensitivity techniques to aid in the calibration of an existing conceptual model of ground-water flow is demonstrated for the Leadville Limestone in Paradox Basin, Utah. The geostatistical method called kriging is used to statistically analyze the measured potentiometric data for the Leadville. This analysis consists of identifying anomalous data and data trends and characterizing the correlation structure between data points. Adjoint sensitivity analysis is then performed to aid in the calibration of a conceptual model of ground-water flow to the Leadville measured potentiometric data. Sensitivity derivatives of the fit between the modeled Leadville potentiometric surface and the measured potentiometric data, with respect to model parameters and boundary conditions, are calculated by the adjoint method. These sensitivity derivatives are used to determine which model parameter and boundary condition values should be modified to most efficiently improve the fit of modeled to measured potentiometric conditions.

  11. Uncovering the influence of social skills and psychosociological factors on pain sensitivity using structural equation modeling.

    Science.gov (United States)

    Tanaka, Yoichi; Nishi, Yuki; Osumi, Michihiro; Morioka, Shu

    2017-01-01

    Pain is a subjective emotional experience that is influenced by psychosociological factors such as social skills, which are defined as problem-solving abilities in social interactions. This study aimed to reveal the relationships among pain, social skills, and other psychosociological factors by using structural equation modeling. A total of 101 healthy volunteers (41 men and 60 women; mean age: 36.6±12.7 years) participated in this study. To evoke participants' sense of inner pain, we showed them images of painful scenes on a PC screen and asked them to evaluate the pain intensity by using the visual analog scale (VAS). We examined the correlation between social skills and VAS, constructed a hypothetical model based on results from previous studies and the current correlational analysis results, and verified the model's fit using structural equation modeling. We found significant positive correlations between VAS and total social skills values, as well as between VAS and the "start of relationships" subscale. Structural equation modeling revealed that the values for "start of relationships" had a direct effect on VAS values (path coefficient = 0.32) and were also associated with social support. The results indicated that extroverted people are more sensitive to inner pain and tend to receive more social support and maintain a better psychological condition.
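A full structural equation model requires dedicated software, but for a single predictor the standardized path coefficient reduces to Pearson's r (the standardized simple-regression slope). The sketch below uses hypothetical scores; the data and variable names are illustrative, not the study's.

```python
import statistics

def path_coefficient(x, y):
    """Standardized simple-regression slope; with one predictor it equals Pearson's r."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (sx * sy)

# Hypothetical scores: "start of relationships" subscale vs. VAS pain ratings
skills = [12, 15, 9, 20, 17, 11]
vas = [35, 44, 30, 58, 50, 33]
beta = path_coefficient(skills, vas)   # positive, matching the reported direction of effect
```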

  12. Hypersonic Separated Flows About "Tick" Configurations With Sensitivity to Model Design

    Science.gov (United States)

    Moss, J. N.; O'Byrne, S.; Gai, S. L.

    2014-01-01

    This paper presents computational results obtained by applying the direct simulation Monte Carlo (DSMC) method for hypersonic nonequilibrium flow about "tick-shaped" model configurations. These test models produce a complex flow where the nonequilibrium and rarefied aspects of the flow are initially enhanced as the flow passes over an expansion surface, and then the flow encounters a compression surface that can induce flow separation. The resulting flow is such that meaningful numerical simulations must have the capability to account for a significant range of rarefaction effects; hence the application of the DSMC method in the current study, as the flow spans several flow regimes, including transitional, slip, and continuum. The current focus is to examine the sensitivity of both the model surface response (heating, friction, and pressure) and flowfield structure to assumptions regarding surface boundary conditions and, more extensively, the impact of model design as influenced by leading edge configuration as well as the geometrical features of the expansion and compression surfaces. Numerical results indicate a strong sensitivity to both the extent of the leading edge sharpness and the magnitude of the leading edge bevel angle. Also, the length of the expansion surface for a fixed compression surface has a significant impact on the extent of separated flow.

  13. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    Science.gov (United States)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach in real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.

  14. Modeling Drift Compression in an Integrated Beam Experiment for Heavy-Ion-Fusion

    Science.gov (United States)

    Sharp, W. M.; Barnard, J. J.; Friedman, A.; Grote, D. P.; Celata, C. M.; Yu, S. S.

    2003-10-01

    The Integrated Beam Experiment (IBX) is an induction accelerator being designed to further develop the science base for heavy-ion fusion. The experiment is being developed jointly by Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Princeton Plasma Physics Laboratory. One conceptual approach would first accelerate a 0.5-1 A beam of singly charged potassium ions to 5 MeV, impose a head-to-tail velocity tilt to compress the beam longitudinally, and finally focus the beam radially using a series of quadrupole lenses. The lengthwise compression is a critical step because the radial size must be controlled as the current increases, and the beam emittance must be kept minimal. The work reported here first uses the moment-based model HERMES to design the drift-compression beam line and to assess the sensitivity of the final beam profile to beam and lattice errors. The particle-in-cell code WARP is then used to validate the physics design, study the phase-space evolution, and quantify the emittance growth.
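As a rough illustration of why drift compression requires careful control, a ballistic sketch (ignoring space charge and emittance, which HERMES and WARP do account for; all numbers here are made up) shows a head-to-tail velocity tilt shrinking the bunch length until a focus time:

```python
import numpy as np

n = 1000
z0 = np.linspace(-0.5, 0.5, n)    # initial positions [m]; 1 m long bunch, tail at z < 0
v_mean = 4.4e6                    # illustrative mean speed [m/s] (~5 MeV K+ ions)
tilt = 0.01                       # 1% head-to-tail velocity tilt
v = v_mean * (1.0 - tilt * z0)    # tail moves faster than head -> bunch compresses

def bunch_length(t):
    """Ballistic drift: each slice moves at its own constant velocity."""
    z = z0 + v * t
    return z.max() - z.min()

# Time at which the tilt exactly cancels the initial length (no space charge)
t_focus = 1.0 / (v_mean * tilt)
```

In the real beam line, space charge resists this collapse and blows the bunch back apart past the focus, which is why the moment and particle-in-cell models are needed to choose the tilt and lattice.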

  15. Position sensitive detection coupled to high-resolution time-of-flight mass spectrometry: Imaging for molecular beam deflection experiments

    International Nuclear Information System (INIS)

    Abd El Rahim, M.; Antoine, R.; Arnaud, L.; Barbaire, M.; Broyer, M.; Clavier, Ch.; Compagnon, I.; Dugourd, Ph.; Maurelli, J.; Rayane, D.

    2004-01-01

    We have developed and tested a high-resolution time-of-flight mass spectrometer coupled to a position sensitive detector for molecular beam deflection experiments. The major achievement of this new spectrometer is to provide three-dimensional imaging (X and Y positions and time-of-flight) of the ion packet on the detector, with a high acquisition rate and a high resolution on both the mass and the position. The calibration of the experimental setup and its application to molecular beam deflection experiments are discussed.
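The time-of-flight axis of such a spectrometer maps to mass through t ∝ √(m/q). A minimal two-point calibration sketch (made-up flight times, singly charged ions assumed) illustrates the idea:

```python
import math

def calibrate(t1, m1, t2, m2):
    """Two-point TOF calibration, t = a*sqrt(m) + b, for singly charged ions."""
    a = (t2 - t1) / (math.sqrt(m2) - math.sqrt(m1))
    b = t1 - a * math.sqrt(m1)
    return a, b

def mass_from_tof(t, a, b):
    """Invert the calibration to recover the mass of an unknown peak."""
    return ((t - b) / a) ** 2

# Made-up flight times (arbitrary units) for reference masses 100 u and 196 u
a, b = calibrate(10.0, 100.0, 14.0, 196.0)
unknown = mass_from_tof(12.0, a, b)   # unknown peak between the two references
```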

  16. SENSITIVITY ANALYSIS OF BIOME-BGC MODEL FOR DRY TROPICAL FORESTS OF VINDHYAN HIGHLANDS, INDIA

    OpenAIRE

    M. Kumar; A. S. Raghubanshi

    2012-01-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to...
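The one-at-a-time normalized sensitivity index behind such a test can be sketched as follows. The npp() function here is a toy stand-in for BIOME-BGC (not the real model), and the parameter values and exponents are purely illustrative:

```python
import math

def npp(p):
    """Toy stand-in for the BIOME-BGC NPP response (illustrative only)."""
    return (100.0 * (1.0 - math.exp(-p["k"] * 3.0))
            * (p["SLA"] / 30.0) ** 0.5
            * (p["gs_max"] / 0.005) ** 0.3)

def oat_sensitivity(p, name, rel=0.10):
    """Normalized one-at-a-time index: (dNPP/NPP) / (dp/p) for a +10% perturbation."""
    base = npp(p)
    perturbed = dict(p, **{name: p[name] * (1.0 + rel)})
    return (npp(perturbed) - base) / base / rel

params = {"k": 0.5, "SLA": 30.0, "gs_max": 0.005}
ranked = sorted(params, key=lambda n: abs(oat_sensitivity(params, n)), reverse=True)
```

Ranking the parameters by |index| is what lets a study like this single out k, SLA, and the other ecophysiological constants that dominate the NPP response.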

  17. Modeling the Nab Experiment Electronics in SPICE

    Science.gov (United States)

    Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    The goal of the Nab experiment is to measure the neutron decay coefficients a (the electron-neutrino correlation) and b (the Fierz interference term) to precisely test the Standard Model and to probe for physics beyond the Standard Model. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions to each of the components have been determined where available. We will present a comparison of the output from the SPICE model, analytic solution, and empirically determined data.
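The SPICE-vs-analytic comparison can be mimicked on a single hypothetical stage. The sketch below (an illustration, not the Nab circuitry) checks a backward-Euler transient solution of an RC low-pass, the kind of implicit integration SPICE performs, against its closed-form step response:

```python
import math

# Hypothetical single stage: RC low-pass driven by a 1 V input step
R, C, V0 = 1e3, 1e-9, 1.0     # 1 kOhm, 1 nF -> tau = 1 us
tau = R * C

def analytic(t):
    """Closed-form step response: v(t) = V0 * (1 - exp(-t/tau))."""
    return V0 * (1.0 - math.exp(-t / tau))

def numeric(t, steps=100000):
    """Backward-Euler integration of C*dv/dt = (V0 - v)/R, as a SPICE .TRAN would."""
    dt = t / steps
    v = 0.0
    for _ in range(steps):
        v = (v + (dt / tau) * V0) / (1.0 + dt / tau)
    return v

t = 2e-6                      # two time constants after the step
err = abs(numeric(t) - analytic(t))
```

Having both forms lets one attribute any residual difference in the full chain to stages that lack an analytic solution.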

  18. Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2

    International Nuclear Information System (INIS)

    Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.

    1985-01-01

    This paper summarizes a demonstration uncertainty/sensitivity analysis performed on the reactor accident consequence model CRAC2. The study was performed with uncertainty/sensitivity analysis techniques compiled as part of the MELCOR program. The principal objectives of the study were: 1) to demonstrate the use of the uncertainty/sensitivity analysis techniques on a health and economic consequence model, 2) to test the computer models which implement the techniques, 3) to identify possible difficulties in performing such an analysis, and 4) to explore alternative means of analyzing, displaying, and describing the results. Demonstration of the applicability of the techniques was the motivation for performing this study; thus, the results should not be taken as a definitive uncertainty analysis of health and economic consequences. Nevertheless, significant insights on health and economic consequence analysis can be drawn from the results of this type of study. Latin hypercube sampling (LHS), a modified Monte Carlo technique, was used in this study. LHS generates a multivariate input structure in which all the variables of interest are varied simultaneously and desired correlations between variables are preserved. LHS has been shown to produce estimates of output distribution functions that are comparable with results of larger random samples.
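A minimal Latin hypercube sampler on the unit hypercube can be sketched in a few lines (stdlib only; a real study would also map the columns to the desired marginal distributions and impose the target correlations, e.g. via the Iman-Conover rank method):

```python
import random

def latin_hypercube(n_samples, n_vars, rng=random):
    """
    Latin hypercube sample on [0,1)^n_vars: each variable's range is split into
    n_samples equal strata, one point is drawn uniformly within each stratum,
    and the strata are shuffled independently per variable.
    """
    columns = []
    for _ in range(n_vars):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

pts = latin_hypercube(10, 3)
```

Because every stratum of every variable is hit exactly once, the marginal coverage matches that of a much larger simple random sample, which is the property the abstract's last sentence refers to.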

  19. The MØLLER experiment at Jefferson Lab: search for physics beyond the Standard Model

    Science.gov (United States)

    van Oers, Willem T. H.

    2010-07-01

    The MØLLER experiment at Jefferson Lab will measure the parity-violating analyzing power A_z in the scattering of 11 GeV longitudinally polarized electrons from the atomic electrons in a liquid hydrogen target (Møller scattering). In the Standard Model a non-zero A_z is due to the interference of the electromagnetic amplitude and the weak neutral current amplitude, the latter mediated by the Z⁰ boson. A_z is predicted to be 35.6 parts per billion (ppb) at the kinematics of the experiment. It is the objective of the experiment to measure A_z to a precision of 0.73 ppb. This result would yield a measurement of the weak charge of the electron Q_W^e to a fractional error of 2.3% at an average value Q² of 0.0056 (GeV/c)². This in turn will yield a determination of the weak mixing angle sin²θ_W with an uncertainty of ±0.00026 (stat) ±0.00013 (syst), comparable to the accuracy of the two best determinations at high energy colliders (at the Z⁰ pole). Consequently, the result could potentially influence the central value of this fundamental electroweak parameter, which is of critical importance in deciphering any signal of new physics that might be observed at the Large Hadron Collider (LHC). The measurement is sensitive to the interference of the electromagnetic amplitude with new neutral current amplitudes as weak as 10⁻³ G_F from as yet unknown high energy dynamics, a level of sensitivity unlikely to be matched in any experiment measuring a flavor and CP conserving process in the next decade. This provides indirect access to new physics at multi-TeV scales in a manner complementary to direct searches at the LHC.

  20. Assessing flood risk at the global scale: model setup, results, and sensitivity

    International Nuclear Information System (INIS)

    Ward, Philip J; Jongman, Brenden; Weiland, Frederiek Sperna; Winsemius, Hessel C; Bouwman, Arno; Ligtvoet, Willem; Van Beek, Rens; Bierkens, Marc F P

    2013-01-01

    Globally, economic losses from flooding exceeded $19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million); and annual expected exposed GDP ($1383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined, impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures. (letter)
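The "annual expected impacts" step of such a cascade is commonly a trapezoidal integration of losses over annual exceedance probability p = 1/T. A sketch with made-up loss figures (not the paper's numbers):

```python
def expected_annual_impact(return_periods, impacts):
    """
    Trapezoidal integration of impact over annual exceedance probability p = 1/T.
    return_periods: ascending return periods [years]; impacts: losses at those periods.
    """
    probs = [1.0 / t for t in return_periods]   # descending exceedance probabilities
    ead = 0.0
    for i in range(len(probs) - 1):
        ead += 0.5 * (impacts[i] + impacts[i + 1]) * (probs[i] - probs[i + 1])
    return ead

# Illustrative loss curve: 2-, 10-, 100-, 1000-year floods (losses in $billion)
ead = expected_annual_impact([2, 10, 100, 1000], [0.0, 5.0, 20.0, 60.0])
```

The reported sensitivity to the assumed protection standard is visible in this formulation: zeroing losses below the protected return period removes the highest-probability segments, which dominate the integral.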