WorldWideScience

Sample records for model sensitivity experiments

  1. Numerical modeling of shock-sensitivity experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, A.L.; Forest, C.A.; Kershner, J.D.; Mader, C.L.; Pimbley, G.H.

    1981-01-01

    The Forest Fire rate model of shock initiation of heterogeneous explosives has been used to study several experiments commonly performed to measure the sensitivity of explosives to shock and to study initiation by explosive-formed jets. The minimum priming charge test, the gap test, the shotgun test, sympathetic detonation, and jet initiation have been modeled numerically using the Forest Fire rate in the reactive hydrodynamic codes SIN and 2DE.

  2. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    Full Text Available This paper describes a set of sensitivity experiments with several formulations of orography. Three sets are considered: a "Standard" orography, an envelope orography produced originally for the ECMWF model; a "Navy" orography derived directly from the US Navy data; and a "Scripps" orography based on the data set compiled several years ago at Scripps. The last two are mean orographies that do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling model performance, even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact is seen in the stationary waves (the asymmetric part of the geopotential at 500 mb), where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs-removal procedure succeeds in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained with the "Navy" data set, which achieves a good compromise between the amplitude of the stationary waves and the smoothness of the surface fields.

  3. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)]

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10{sup {minus}2} to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but also a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), non-dispersive infrared (NDIR), and FTIR methods for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and for studying the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well-defined and validated mechanisms for the CO/H{sub 2}/oxidant systems.

  4. Factor Structure of Early Smoking Experiences and Associations with Smoking Behavior: Valence or Sensitivity Model?

    Directory of Open Access Journals (Sweden)

    Stéphanie Baggio

    2013-11-01

    Full Text Available The Early Smoking Experience (ESE) questionnaire is the most widely used questionnaire for assessing initial subjective experiences of cigarette smoking. However, its factor structure is not clearly defined and can be viewed from two main standpoints: valence, i.e. positive and negative experiences, and sensitivity to nicotine. This article explores the ESE's factor structure and determines which standpoint is more relevant. It compares two groups of young Swiss men (German- and French-speaking). We examined baseline data on 3,368 tobacco users from a representative sample in the ongoing Cohort Study on Substance Use Risk Factors (C-SURF). ESE, continued tobacco use, weekly smoking and nicotine dependence were assessed. Exploratory structural equation modeling (ESEM) and structural equation modeling (SEM) were performed. ESEM clearly distinguished positive from negative experiences, but negative experiences were divided into experiences related to dizziness and experiences related to irritations. SEM underlined the reinforcing effects of positive experiences, but also of experiences related to dizziness, on nicotine dependence and weekly smoking. The ESE structure with the best predictive accuracy for smoking behavior was a compromise between the valence and sensitivity standpoints, and it showed clinical relevance.

  5. 12th Rencontres du Vietnam : High Sensitivity Experiments Beyond the Standard Model

    CERN Document Server

    2016-01-01

    The goal of this workshop is to gather researchers, theoreticians, experimentalists and young scientists searching for physics beyond the Standard Model of particle physics using high sensitivity experiments. The Standard Model has been very successful in describing the particle physics world; the Brout-Englert-Higgs boson is its most recent major discovery. Complementary to the high energy frontier explored at colliders, real opportunities for discovery exist at the precision frontier, testing fundamental symmetries and tracking small deviations from the Standard Model.

  6. Snow and ice on Bear Lake (Alaska) – sensitivity experiments with two lake ice models

    Directory of Open Access Journals (Sweden)

    Tido Semmler

    2012-03-01

    Full Text Available Snow and ice thermodynamics of Bear Lake (Alaska) are investigated with a simple freshwater lake model (FLake) and a more complex snow and ice thermodynamic model (HIGHTSI). A number of sensitivity experiments have been carried out to investigate the influence of snow and ice parameters, and of different model complexity, on the results. Simulation results are compared with observations from the Alaska Lake Ice and Snow Observatory Network. Adapting the snow thermal and optical properties in FLake largely improves the accuracy of the results. Snow-to-ice transformation is important for HIGHTSI in calculating the total ice mass balance. The seasonal maximum ice depth is simulated with a bias of −0.04 m in FLake and with no bias in HIGHTSI. Correlation coefficients between ice depth measurements and simulations are high (0.74 for FLake and 0.9 for HIGHTSI). The snow depth simulation can be improved by taking into account a variable snow density. Correlation coefficients for surface temperature are 0.72 for FLake and 0.81 for HIGHTSI. Overall, HIGHTSI gives slightly more accurate surface temperatures than FLake, probably owing to its consideration of multiple snow and ice layers and its more expensive iterative calculation procedure.

  7. On the Juno radio science experiment: models, algorithms and sensitivity analysis

    Science.gov (United States)

    Tommei, G.; Dimare, L.; Serra, D.; Milani, A.

    2015-01-01

    Juno is a NASA mission launched in 2011 with the goal of studying Jupiter. The probe will arrive at the planet in 2016 and will be placed for one year in a highly eccentric polar orbit to study the planet's composition, gravity and magnetic field. The Italian Space Agency (ASI) provided the radio science instrument KaT (Ka-Band Translator) used for the gravity experiment, which has the goal of studying Jupiter's deep structure by mapping the planet's gravity; this instrument takes advantage of synergies with a similar tool in development for BepiColombo, the ESA cornerstone mission to Mercury. The Celestial Mechanics Group of the University of Pisa, as part of the Juno Italian team, is developing orbit determination and parameter estimation software for processing the real data independently of NASA's ODP software. This paper has a twofold goal: first, to describe the development of this software, highlighting the models used; and second, to perform a sensitivity analysis on the parameters of interest to the mission.

  8. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
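The DOE-plus-regression approach described in this record can be sketched in a few lines: run the simulation at the points of a two-level factorial design, then fit a first-order polynomial metamodel by least squares. The toy response function and factor names below are illustrative, not taken from the tutorial.

```python
import numpy as np

# 2^3 full factorial design in coded units (-1/+1) for three factors
levels = np.array([-1.0, 1.0])
design = np.array([[a, b, c] for a in levels for b in levels for c in levels])

def simulate(x):
    # Hypothetical stand-in for a System Dynamics run: y = 3 + 2*A - B
    a, b, c = x
    return 3.0 + 2.0 * a - 1.0 * b

y = np.array([simulate(row) for row in design])

# First-order polynomial metamodel: y ~ beta0 + beta1*A + beta2*B + beta3*C
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # intercept and main effects estimated from the designed runs
```

Because the factorial design is orthogonal, the least-squares estimates recover the main effects exactly for this noise-free toy response.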

  9. Sensitivity experiments of a regional climate model to the different convective schemes over Central Africa

    Science.gov (United States)

    Armand J, K. M.

    2017-12-01

    In this study, version 4 of the regional climate model (RegCM4) is used to perform a six-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC) and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe; and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that, regardless of period or simulated parameter, the MIT scheme generally tends to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of the regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that using the BATS scheme instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.

  10. Capacitive pressure-sensitive composites using nickel-silicone rubber: experiments and modeling

    Science.gov (United States)

    Fan, Yuqin; Liao, Changrong; Liao, Ganliang; Tan, Renbing; Xie, Lei

    2017-07-01

    Capacitive pressure (i.e., piezo-capacitive) sensors have shown promise as potential electronic skins. Traditional piezo-capacitive sensors work mainly by changing the relative permittivity of flexible composites through compression of specially fabricated microstructures in the polymer matrix under pressure. Instead, we study, experimentally and theoretically, the piezo-capacitive effect in a newly reported isotropic flexible composite consisting of silicone rubber (SR) and uniformly dispersed micron-sized conductive nickel particles. The Young's modulus of the nickel-SR composites (NSRCs) is designed to match that of human skin. Experimental results show that the NSRCs exhibit a remarkable particle-concentration-dependent capacitance response under uniaxial pressure, with good repeatability. We propose a mathematical model at the particle level to provide deeper insight into the piezo-capacitive mechanism, treating adjacent particles in the axial direction as micro-capacitors connected in series, with the resulting chains connected in parallel in the horizontal plane. The piezo-capacitive effect is determined by the relative permittivity induced by particle rearrangement, the longitudinal interparticle gap, and the deflection angle of the micro particle capacitors under pressure. Specifically, the relative capacitance of an NSRC capacitor is deduced to be the product of two factors: the degree of particle rearrangement, and the relative capacitance of a micro-capacitor with the average longitudinal gap. The proposed model matches and interprets the experimental results well.
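The series/parallel micro-capacitor picture in this abstract can be illustrated with a minimal numerical sketch. All geometric values below are hypothetical, and the paper's rearrangement and deflection-angle factors are omitted; only the average-longitudinal-gap factor is shown.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def composite_capacitance(eps_r, area, gap, n_series, n_parallel):
    """Idealized NSRC: each axial chain is n_series identical particle-particle
    gaps in series; n_parallel chains sit side by side in the horizontal plane."""
    c_micro = EPS0 * eps_r * area / gap   # one parallel-plate micro-capacitor
    c_chain = c_micro / n_series          # series combination along the axis
    return c_chain * n_parallel           # parallel combination of chains

# Compressing the composite shrinks the average longitudinal gap, so in this
# idealized picture the relative capacitance C/C0 rises as g0/g.
c0 = composite_capacitance(3.0, 1e-12, 1.0e-6, 50, 1000)
c = composite_capacitance(3.0, 1e-12, 0.8e-6, 50, 1000)
print(c / c0)  # ≈ 1.25 for a 20% gap reduction
```

The sketch shows why the relative capacitance factorizes: geometry enters only through the average gap, so material and particle-count constants cancel in the ratio.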

  11. Sensitivity of the Humboldt current system to global warming: a downscaling experiment of the IPSL-CM4 model

    Energy Technology Data Exchange (ETDEWEB)

    Echevin, Vincent [LOCEAN, Paris (France); Goubanova, Katerina; Dewitte, Boris [LEGOS, Toulouse (France); IMARPE, IGP, LEGOS, Lima (Peru); Belmadani, Ali [LOCEAN, Paris (France); LEGOS, Toulouse (France); University of Hawaii at Manoa, IPRC, International Pacific Research Center, SOEST, Honolulu, Hawaii (United States)

    2012-02-15

    The impact of climate warming on the seasonal variability of the Humboldt Current system ocean dynamics is investigated. The IPSL-CM4 large-scale ocean circulation resulting from two contrasting climate scenarios, the so-called Preindustrial and quadrupling CO{sub 2}, is downscaled using an eddy-resolving regional ocean circulation model. The intense surface heating by the atmosphere in the quadrupling CO{sub 2} scenario leads to a strong increase of the surface density stratification, a thinner coastal jet, an enhanced Peru-Chile undercurrent, and an intensification of nearshore turbulence. Upwelling rates respond quasi-linearly to the change in wind stress associated with anthropogenic forcing, and show a moderate decrease in summer off Peru and a strong increase off Chile. Results from sensitivity experiments show that a 50% wind stress increase does not compensate for the surface warming resulting from heat flux forcing, and that the associated increase in mesoscale turbulence is a robust feature. (orig.)

  12. Cloud/climate sensitivity experiments

    Science.gov (United States)

    Roads, J. O.; Vallis, G. K.; Remer, L.

    1982-01-01

    A study of the relationships between large-scale cloud fields and large-scale circulation patterns is presented. The basic tool is a multi-level numerical model comprising conservation equations for temperature, water vapor and cloud water, and appropriate parameterizations for evaporation, condensation, precipitation and radiative feedbacks. Incorporating an equation for cloud water in a large-scale model is somewhat novel and allows the formation and advection of clouds to be treated explicitly. The model is run on a two-dimensional, vertical-horizontal grid with constant winds. It is shown that cloud cover increases with decreased eddy vertical velocity, decreased horizontal advection, decreased atmospheric temperature, increased surface temperature, and decreased precipitation efficiency. The cloud field is found to be well correlated with the relative humidity field except at the highest levels. When radiative feedbacks are incorporated and the temperature is increased by raising the CO2 content, cloud amounts decrease at upper levels, or equivalently, cloud-top height falls. This reduces the temperature response, especially at upper levels, compared with an experiment in which cloud cover is fixed.

  13. Context Sensitive Modeling of Cancer Drug Sensitivity.

    Directory of Open Access Journals (Sweden)

    Bo-Juen Chen

    Full Text Available Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource for developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to predicting drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer type is hampered by small sample sizes. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models of drug sensitivity by selecting predictive genomic features and deciding which of them should (and should not) be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.

  14. Thermal Sensitive Foils in Physics Experiments

    Science.gov (United States)

    Bochníček, Zdeněk; Konečný, Pavel

    2014-01-01

    The paper describes a set of physics demonstration experiments in which thermal sensitive foils are used to detect the two-dimensional distribution of temperature. The method is used to demonstrate thermal conductivity, temperature change in adiabatic processes, the distribution of electromagnetic radiation in a microwave oven and…

  15. Interaction of Sea Breeze and Deep Convection over the Northeastern Adriatic Coast: An Analysis of Sensitivity Experiments Using a High-Resolution Mesoscale Model

    Science.gov (United States)

    Kehler-Poljak, Gabrijela; Telišman Prtenjak, Maja; Kvakić, Marko; Šariri, Kristina; Večenaj, Željko

    2017-11-01

    This study investigates the sensitivity of a high-resolution mesoscale atmospheric model in reproducing the effect of thermally induced local winds (sea breezes, SB) on the development of deep convection (Cb). The three chosen cases are simulated by the Weather Research and Forecasting (WRF-ARW) model on three (nested) model domains; the area of interest is Istria (a peninsula in the northeastern Adriatic). The sensitivity tests are accomplished by modifying (1) the model setup, (2) the model topography and (3) the sea surface temperature (SST) distribution. The first set of simulations (over three 1.5-day periods during summer) is conducted by modifying the model setup, i.e., the microphysics and boundary layer parameterizations. The same events are simulated with modified topography in which the mountain heights in Istria are reduced to 30% of their initial height. The SST distribution has two representations in the model: a constant SST field from the ECMWF skin temperature analysis, and a varying SST field provided by hourly geostationary satellite data. A comprehensive set of numerical experiments is statistically analyzed through several different approaches (the standard statistical measures, the spectral method and the image moment analysis). The overall evaluation of each model setup revealed certain advantages of one setup over the others. The numerical tests with modified topography showed that reducing the mountain heights influences the pre-thunderstorm characteristics through (1) a decrease of sensible heat flux and mid-tropospheric moisture and (2) a change of the slope-SB wind system. These consequently affect the evolution and dimensions of the SBs and the features of the thunderstorm itself: timing, location and intensity (a weaker storm). The implementation of the varying SST field in the model has an impact on the characteristics and dynamics of the SB and, finally, on the accuracy of the Cb evolution

  16. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  17. The Sensitivity of Atmospheric Dispersion Calculations in Near-field Applications: Modeling of the Full Scale RDD Experiments with Operational Models in Canada, Part I.

    Science.gov (United States)

    Lebel, Luke; Bourgouin, Pierre; Chouhan, Sohan; Ek, Nils; Korolevych, Volodymyr; Malo, Alain; Bensimon, Dov; Erhardt, Lorne

    2016-05-01

    Three radiological dispersal devices were detonated in 2012 under controlled conditions at Defence Research and Development Canada's Experimental Proving Grounds in Suffield, Alberta. Each device comprised a 35-GBq source of {sup 140}La. The dataset obtained is used in this study to assess the MLCD, ADDAM, and RIMPUFF atmospheric dispersion models. As part one of a two-part study, this paper focuses on examining the capabilities of the above three models and evaluating how well their predictions of air concentration and ground deposition match observations from the full-scale RDD experiments.

  18. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

    The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After the expansion functions are calculated, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
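A first-order truncation of such a hierarchical correlated function expansion (commonly known as HDMR) can be sketched as follows. The grid, the toy "original model", and the truncation at first order are illustrative assumptions; for an additive model the first-order expansion is already exact, which makes the idea easy to verify.

```python
import numpy as np

# Toy "original model": additive in its two inputs, so the expansion
# f(x1, x2) ~ f0 + f1(x1) + f2(x2) reproduces it exactly.
x1 = np.linspace(0.0, 1.0, 101)
x2 = np.linspace(0.0, 1.0, 101)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
Y = np.sin(X1) + X2 ** 2

f0 = Y.mean()              # zeroth-order term: overall mean of the model
f1 = Y.mean(axis=1) - f0   # first-order component function in x1
f2 = Y.mean(axis=0) - f0   # first-order component function in x2

# Assemble the "fully equivalent operational model" (first-order truncation)
Y_feom = f0 + f1[:, None] + f2[None, :]
print(np.abs(Y - Y_feom).max())  # ~0 for an additive model
```

For models with genuine input interactions, higher-order component functions would have to be retained; the hierarchy referred to in the abstract is exactly this ordered sequence of terms.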

  19. Prediction of Individual Muscle Forces Using Lagrange Multipliers Method - A Model of the Upper Human Limb in the Sagittal Plane: II. Numerical Experiments and Sensitivity Analysis.

    Science.gov (United States)

    Raikova, Rositsa

    2000-01-01

    Using the method of Lagrange multipliers, an analytical solution of the optimization problem formulated for a two-dimensional, 3-DOF model of the human upper limb was described in Part I of this investigation. The objective criterion used is the following: [formula: see text], where the F(i) are the modelled muscle forces and the c(i) are unknown weight factors. This study is devoted to the numerical experiments performed to investigate which sets of weight factors predict physiologically reasonable muscle forces and joint reactions. A sensitivity analysis is also presented, studying the influence of gravity forces, of different external loads applied to the hand, and of changes in the weight factors and joint angle on the optimal solution. A general conclusion may be drawn: using the above objective criterion, practically all motor tasks performed by the human upper limb may be described if the c(i) are properly chosen. These weight factors generally depend on the joint moments and must differ (in magnitude as well as in sign) between agonistic muscles and their antagonists.

  20. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); progress status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Lecolley F.

  1. Neutrino Oscillation Parameter Sensitivity in Future Long-Baseline Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bass, Matthew [Colorado State Univ., Fort Collins, CO (United States)

    2014-01-01

    The study of neutrino interactions and propagation has produced evidence for physics beyond the standard model and promises to continue to shed light on rare phenomena. Since the discovery of neutrino oscillations in the late 1990s there have been rapid advances in establishing the three-flavor paradigm of neutrino oscillations. The 2012 discovery of a large value for the last unmeasured mixing angle has opened the way for future experiments to search for charge-parity (CP) symmetry violation in the lepton sector. This thesis presents an analysis of the future sensitivity to neutrino oscillations in the three-flavor paradigm for the T2K, NOvA, LBNE, and T2HK experiments. The theory of the three-flavor paradigm is explained, and the methods for using these theoretical predictions to design long-baseline neutrino experiments are described. The sensitivity to the oscillation parameters for each experiment is presented, with a particular focus on the search for CP violation and the measurement of the neutrino mass hierarchy. The variations of these sensitivities with statistical considerations and experimental design optimizations are explored. The effects of systematic uncertainties in the neutrino flux, interaction, and detection predictions are also considered by incorporating more advanced simulation inputs from the LBNE experiment.

  2. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs, as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphological relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, owing to the lack of an existing global river geomorphology database and of accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify to which parameters the modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while
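The variance decomposition referred to in this abstract can be sketched with a Monte Carlo first-order Sobol' estimator in its pick-freeze form. The two-parameter toy function standing in for the routing model is purely illustrative; it is chosen so the analytic indices (0.8 and 0.2) are known.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 200_000, 2

def model(x):
    # Hypothetical stand-in for the routing model: additive in two parameters
    return 2.0 * x[:, 0] + x[:, 1]

# Two independent sample matrices (Saltelli / pick-freeze scheme)
A = rng.uniform(size=(N, d))
B = rng.uniform(size=(N, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order index S_i = Var[E(Y|X_i)] / Var(Y)
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # replace ("freeze") column i with B's samples
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)

print(S)  # analytic values for this toy model: 0.8 and 0.2
```

The same machinery, applied per time step of a hydrological year, yields the time-dependent sensitivity curves the study describes.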

  3. Structural sensitivity of biological models revisited.

    Science.gov (United States)

    Cordoleani, Flora; Nerini, David; Gauduchon, Mathias; Morozov, Andrew; Poggiale, Jean-Christophe

    2011-08-21

    Enhancing the predictive power of models in biology is a challenging issue. Among the major difficulties impeding model development and implementation are the sensitivity of outcomes to variations in model parameters, the problem of choosing particular expressions for the parametrization of functional relations, and difficulties in validating models using laboratory data and/or field observations. In this paper, we revisit the phenomenon referred to as structural sensitivity of a model. Structural sensitivity arises from the interplay between sensitivity of model outcomes to variations in parameters and sensitivity to the choice of model functions, and it can be somewhat of a bottleneck in improving a model's predictive power. We provide a rigorous definition of structural sensitivity, and we show how to quantify the degree of sensitivity of a model based on the Hausdorff distance concept. We propose a simple semi-analytical test of structural sensitivity in an ODE modeling framework. Furthermore, we emphasize the importance of directly linking the variability of field/experimental data and model predictions, and we demonstrate a way of assessing the robustness of modeling predictions with respect to data sampling variability. As an insightful illustrative example, we test our sensitivity analysis methods on a chemostat predator-prey model, in which we use laboratory data on the feeding of protozoa to parameterize the predator functional response. Copyright © 2011 Elsevier Ltd. All rights reserved.
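The Hausdorff distance used in this paper to quantify the divergence between model outcomes can be computed directly from two finite point sets. The two toy "trajectories" below (e.g. predictions from two different parametrizations of a functional response) are hypothetical.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets
    (rows are points): H(A, B) = max(h(A, B), h(B, A)), where
    h(A, B) = max over a in A of the min over b in B of ||a - b||."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two predicted trajectories sampled as 2-D points (hypothetical values)
traj1 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
traj2 = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hausdorff(traj1, traj2))  # → 1.0
```

A model would be called structurally sensitive when small, data-consistent changes in the functional form produce outcome sets that remain far apart in this metric.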

  4. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possible uses of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advanced status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F.

  5. Sensitivity of the ATLAS Experiment to Extra Dimensions

    CERN Document Server

    Gough Eschrich, I

    2006-01-01

    In the late nineties several authors suggested that the extra dimensions predicted by string theory might lead to observable effects at high energy colliders. The ATLAS experiment, which will start taking data at the LHC in 2007, will be an excellent place to search for such effects. A large set of models within the ADD or Randall-Sundrum geometries has been studied in ATLAS. These models predict a variety of signatures: jets and missing energy from direct graviton production, high mass tails in dilepton and diphoton production due to virtual graviton exchange, production of Kaluza-Klein excitations of standard model particles, etc. The sensitivity of ATLAS to these signatures will be presented.

  6. Biosphere assessment for high-level radioactive waste disposal: modelling experiences and discussion on key parameters by sensitivity analysis in JNC

    International Nuclear Information System (INIS)

    Kato, Tomoko; Makino, Hitoshi; Uchida, Masahiro; Suzuki, Yuji

    2004-01-01

    In the safety assessment of the deep geological disposal system of high-level radioactive waste (HLW), a biosphere assessment is often necessary to estimate future radiological impacts on human beings (e.g. radiation dose). In order to estimate the dose, the surface environment (biosphere) into which future releases of radionuclides might occur and the associated future human behaviour need to be considered. However, for a deep repository, such releases might not occur for many thousands of years after disposal. Over such timescales, it is impossible to predict with any certainty how the biosphere and human behaviour will evolve. To avoid endless speculation aimed at reducing such uncertainty, the 'Reference Biospheres' concept has been developed for use in the safety assessment of HLW disposal. As the aim of JNC's safety assessment of a hypothetical HLW disposal system was to demonstrate the technical feasibility and reliability of the Japanese disposal concept for a range of geological and surface environments, several biosphere models were developed using the 'Reference Biospheres' concept and the BIOMASS Methodology. These models have been used to derive factors that convert the radionuclide flux from the geosphere to the biosphere into a dose (flux-to-dose conversion factors). Moreover, a sensitivity analysis of the parameters in the biosphere models was performed to evaluate and understand their relative importance. It was concluded that transport parameters in the surface environments, annual amount of food consumption, distribution coefficients on soils and sediments, transfer coefficients of radionuclides to animal products, and concentration ratios for marine organisms have a larger influence on the flux-to-dose conversion factors than any other parameters. (author)

  7. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only as well known as its geometric and material properties. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
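A direct-perturbation sensitivity of the kind described reduces to the ratio of relative responses, S = (Δk/k)/(Δp/p). The one-group bare-sphere formula and its constants below are an invented toy, not a TSUNAMI-3D or MCNP calculation.

```python
# Toy one-group model of k_eff for a bare sphere; k_inf, the migration
# area M2, and the nominal radius are invented constants, not a real
# benchmark.  Illustrates the direct-perturbation sensitivity estimate.
def k_eff(radius_cm):
    k_inf, M2 = 1.60, 30.0
    B2 = (3.14159265 / radius_cm) ** 2   # geometric buckling of a sphere
    return k_inf / (1.0 + M2 * B2)

R0 = 30.0                                # nominal radius (cm), hypothetical
dR = 0.01 * R0                           # 1 % direct perturbation
k0 = k_eff(R0)
S_R = ((k_eff(R0 + dR) - k0) / k0) / (dR / R0)
print(f"k_eff = {k0:.4f}; sensitivity (dk/k)/(dR/R) = {S_R:.3f}")
```

Repeating this for every uncertain parameter and combining the terms in quadrature gives the total experimental uncertainty the abstract refers to.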

  8. Battlescale Forecast Model Sensitivity Study

    National Research Council Canada - National Science Library

    Sauter, Barbara

    2003-01-01

    .... Changes to the surface observations used in the Battlescale Forecast Model initialization led to no significant changes in the resulting forecast values of temperature, relative humidity, wind speed, or wind direction...

  9. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...... efficient model-checking and model-based testing. In the second we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of data-sensitive systems and thus the learning of larger systems. In the third we develop an approach for modeling and model-based...... detection and pushing error detection to earlier stages of development. The complexity of modeling and the size of the systems which can be analyzed are severely limited when data variables are introduced: the state space grows exponentially in the number of variables and the domain size of the variables

  10. Tsunami propagation modelling – a sensitivity study

    Directory of Open Access Journals (Sweden)

    P. Tkalich

    2007-12-01

    Full Text Available The Indian Ocean (2004) Tsunami and its tragic consequences demonstrated a lack of relevant experience and preparedness among the coastal nations involved. After the event, the scientific and forecasting communities of the affected countries started building capacity to tackle similar problems in the future. Different approximations have been used for tsunami propagation, such as the Boussinesq and Nonlinear Shallow Water Equations (NSWE). These approximations are obtained by assuming a different relative importance of the nonlinear, dispersion, and spatial gradient variation phenomena and terms. The paper describes a further development of the original TUNAMI-N2 model to take additional phenomena into account: astronomic tide, sea bottom friction, dispersion, Coriolis force, and spherical curvature. The code is modified to be suitable for operational forecasting, and the resulting version (TUNAMI-N2-NUS is verified using test cases, results of other models, and real case scenarios. Using the 2004 Tsunami event as one of the scenarios, the paper examines the sensitivity of the numerical solutions to variations of different phenomena and parameters, and the results are analyzed and ranked accordingly.

  11. Sensitivity-Based Guided Model Calibration

    Science.gov (United States)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in the automatic calibration of hydrologic models is to apply sensitivity analysis prior to the global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on perturbing the most sensitive decision variables. The performance of DDS with sensitivity information is compared to that of the original DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
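One way to read "DDS with sensitivity information" is to bias the variable-selection step toward the most sensitive decision variables. The sketch below grafts assumed a-priori sensitivity weights onto the standard DDS selection schedule; the weighting rule, test function, and all constants are our own illustration, not the authors' exact algorithm.

```python
import math
import random

def dds_weighted(f, lo, hi, weights, n_eval=500, r=0.2, seed=1):
    """Greedy DDS search in which each decision variable is selected for
    perturbation with a probability scaled by `weights` (assumed a-priori
    sensitivities).  A sketch, not the authors' exact scheme."""
    rng = random.Random(seed)
    D = len(lo)
    x = [rng.uniform(lo[i], hi[i]) for i in range(D)]
    fx = f(x)
    wmax = max(weights)
    for k in range(1, n_eval):
        p = 1.0 - math.log(k) / math.log(n_eval)      # standard DDS schedule
        chosen = [i for i in range(D) if rng.random() < p * weights[i] / wmax]
        if not chosen:                                # perturb at least one DV
            chosen = [rng.choices(range(D), weights=weights)[0]]
        cand = list(x)
        for i in chosen:
            cand[i] += rng.gauss(0.0, r * (hi[i] - lo[i]))
            cand[i] = min(max(cand[i], lo[i]), hi[i])
        fc = f(cand)
        if fc <= fx:                                  # greedy acceptance
            x, fx = cand, fc
    return x, fx

# Test function whose first two variables dominate, hence their high weights:
sphere = lambda x: 10 * x[0] ** 2 + 10 * x[1] ** 2 + sum(v ** 2 for v in x[2:])
lo, hi = [-5.0] * 6, [5.0] * 6
best, fbest = dds_weighted(sphere, lo, hi, weights=[10, 10, 1, 1, 1, 1])
print(f"best objective after 500 evaluations: {fbest:.3f}")
```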

  12. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in the model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
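The probabilistic multivariate idea, estimating confidence in the optimal policy under parameter uncertainty, can be miniaturized as follows. The two-state MDP, its rewards, and the triangular parameter distribution are all hypothetical, chosen only so the optimal action genuinely flips within the sampled range.

```python
import random

def optimal_action(p_success, gamma=0.95):
    """Tiny 2-state MDP: in state 0, 'treat' succeeds with probability
    p_success (reward 21, then absorbing state 1 with reward 0) or fails
    (stay in state 0); 'wait' gives reward 1 and stays in state 0.
    All numbers are hypothetical.  Solved by value iteration."""
    V = [0.0, 0.0]
    for _ in range(200):
        q_treat = p_success * (21 + gamma * V[1]) + (1 - p_success) * gamma * V[0]
        q_wait = 1 + gamma * V[0]
        V = [max(q_treat, q_wait), 0.0]
    return "treat" if q_treat >= q_wait else "wait"

# Probabilistic sensitivity analysis: sample the uncertain parameter and
# record how often the base-case policy remains optimal -- one point on a
# policy acceptability curve.
random.seed(0)
samples = [random.triangular(0.2, 0.9, 0.6) for _ in range(1000)]
share_treat = sum(optimal_action(p) == "treat" for p in samples) / len(samples)
print(f"confidence that 'treat' is optimal: {share_treat:.1%}")
```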

  13. Interoceptive Sensitivity and Self-Reports of Emotional Experience

    OpenAIRE

    Barrett, Lisa Feldman; Quigley, Karen S.; Bliss-Moreau, Eliza; Aronson, Keith R.

    2004-01-01

    People differ in the extent to which they emphasize feelings of activation or deactivation in their verbal reports of experienced emotion, termed arousal focus (AF). Two multimethod studies indicate that AF is linked to heightened interoceptive sensitivity (as measured by performance on a heartbeat detection task). People who were more sensitive to their heartbeats emphasized feelings of activation and deactivation when reporting their experiences of emotion over time more than did those who ...

  14. the sensitivity of evapotranspiration models to errors in model ...

    African Journals Online (AJOL)

    Dr Obe

    ABSTRACT. Five evapotranspiration (Et) models - the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models - were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was ...

  15. Quick, sensitive serial NMR experiments with Radon transform

    Science.gov (United States)

    Dass, Rupashree; Kasprzak, Paweł; Kazimierczuk, Krzysztof

    2017-09-01

    The Radon transform is a potentially powerful tool for processing the data from serial spectroscopic experiments. It makes it possible to decode the rate at which frequencies of spectral peaks shift under the effect of changing conditions, such as temperature, pH, or solvent. In this paper we show how it also improves speed and sensitivity, especially in multidimensional experiments. This is particularly important in the case of low-sensitivity techniques, such as NMR spectroscopy. As an example, we demonstrate how Radon transform processing allows serial measurements of 15N-HSQC spectra of unlabelled peptides that would otherwise be infeasible.
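The core trick, decoding a linear peak-shift rate by co-adding serial spectra along candidate slopes, can be sketched as a discrete Radon-like transform. The synthetic Lorentzian series and all numbers below are invented for illustration.

```python
# Synthetic serial experiment: 20 one-dimensional "spectra" containing a
# Lorentzian peak whose position drifts linearly with the series index
# (true slope: 0.8 points per step).  All numbers are invented.
n_series, n_points, true_slope = 20, 200, 0.8

def lorentz(x, x0, w=2.0):
    return w ** 2 / ((x - x0) ** 2 + w ** 2)

spectra = [[lorentz(x, 50 + true_slope * s) for x in range(n_points)]
           for s in range(n_series)]

def best_slope(spectra, slopes):
    """Discrete Radon-like transform: for each candidate slope, co-add the
    spectra along the sheared line and score the tallest co-added peak."""
    scores = {}
    for m in slopes:
        acc = [0.0] * n_points
        for s, spec in enumerate(spectra):
            shift = int(round(m * s))
            for x in range(n_points):
                src = x + shift
                if 0 <= src < n_points:
                    acc[x] += spec[src]
        scores[m] = max(acc)
    return max(scores, key=scores.get)

slopes = [i / 10 for i in range(-15, 16)]  # candidate slopes -1.5 ... 1.5
m_hat = best_slope(spectra, slopes)
print(f"recovered shift rate: {m_hat} points/step (true: {true_slope})")
```

Co-adding along the correct slope concentrates the signal of the whole series into one peak, which is where the speed and sensitivity gains come from.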

  16. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the simulation results of thermal-hydraulic codes within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices using a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE programme and using the thermal-hydraulic code TRACE. (authors)
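Sobol' indices computed on a meta-model can be sketched with a pick-freeze (Saltelli-type) estimator. The quadratic response surface standing in for the thermal-hydraulic code is an assumed toy, which has the advantage that the indices can be checked analytically (first-order S1 ≈ 0.87 for these coefficients).

```python
import random
import statistics

# Assumed quadratic response surface standing in for the expensive
# thermal-hydraulic code (e.g. a peak-cladding-temperature meta-model);
# coefficients are invented.
def metamodel(x1, x2, x3):
    return 900 + 120 * x1 + 40 * x2 + 15 * x3 + 30 * x1 * x2

random.seed(42)
N = 20000
A = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]
B = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]
yA = [metamodel(*a) for a in A]
yB = [metamodel(*b) for b in B]
var_y = statistics.pvariance(yA + yB)
mean_yB = statistics.mean(yB)

# Pick-freeze estimator: replace column i of A by column i of B, so the
# resampled output shares only x_i with yB; the centered cross-product
# then estimates Var(E[y|x_i]).
S = []
for i in range(3):
    yABi = [metamodel(*[B[k][j] if j == i else A[k][j] for j in range(3)])
            for k in range(N)]
    s_i = sum((yB[k] - mean_yB) * (yABi[k] - yA[k]) for k in range(N)) / N / var_y
    S.append(s_i)
    print(f"S_{i + 1} = {s_i:.3f}")
```

Because the meta-model is cheap, the 80 000 evaluations used here cost almost nothing, which is exactly why surrogates are attractive for Sobol' analysis of expensive codes.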

  17. Superconducting gravity gradiometer for sensitive gravity measurements. II. Experiment

    International Nuclear Information System (INIS)

    Chan, H.A.; Moody, M.V.; Paik, H.J.

    1987-01-01

    A sensitive superconducting gravity gradiometer has been constructed and tested. Coupling to gravity signals is obtained by having two superconducting proof masses modulate magnetic fields produced by persistent currents. The induced electrical currents are differenced by a passive superconducting circuit coupled to a superconducting quantum interference device. The experimental behavior of this device has been shown to follow the theoretical model closely in both signal transfer and noise characteristics. While its intrinsic noise level is shown to be 0.07 E Hz^(-1/2) (1 E ≡ 10^-9 s^-2), the actual performance of the gravity gradiometer on a passive platform has been limited to 0.3-0.7 E Hz^(-1/2) due to its coupling to the environmental noise. The detailed structure of this excess noise is understood in terms of an analytical error model of the instrument. The calibration of the gradiometer has been obtained by two independent methods: by applying a linear acceleration and a gravity signal in two different operational modes of the instrument. This device has been successfully operated as a detector in a new null experiment for the gravitational inverse-square law. In this paper we report the design, fabrication, and detailed test results of the superconducting gravity gradiometer. We also present additional theoretical analyses which predict the specific dynamic behavior of the gradiometer and of the test

  18. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2018-01-01

    Product developers using life cycle toxicity characterization models to understand the potential impacts of chemical emissions face serious challenges related to large data demands and high input data uncertainty. This motivates greater focus on model sensitivity toward input parameter variability to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according to parameter influence on characterization factors (CFs). Proof of concept is illustrated with the UNEP-SETAC scientific consensus model USEtox.

  19. Sensitivity of weather based irrigation scheduling model

    International Nuclear Information System (INIS)

    Laghari, K.Q.; Lashari, B.K.; Laghari, N.U.Z.

    2009-01-01

    This study describes a sensitivity analysis of the irrigation scheduling model (Mehran), carried out by changing the input weather parameters (temperature, wind velocity, rainfall, and sunshine hours) to see the model's sensitivity in its computed outputs: transpiration (T), evaporation (E), and allocation of irrigation (I) water. Sensitivity depends on the site and environmental conditions, and its analysis is therefore an essential step in model validation and application. The Mehran Model is a weather-based crop growth simulation model, which uses daily input data of maximum and minimum temperatures, dew point temperature (humidity), wind speed, and daily sunshine hours (radiation); it computes Tc and Es, and allocates irrigation accordingly. The input and output base values are taken as an average of three years of actual field data used during the testing and calibration of the Mehran Model on wheat and cotton crops. The model's sensitivity to a specific input parameter was obtained by varying its value while keeping the other input parameters at their base values. The input base values were varied by ±10 and ±25%. The model was run for each modified input parameter, and the output was compared statistically with the base outputs. The ME% (Mean Percent Error) was used to obtain the variations in output values. The results reveal that the model is most sensitive to variations in temperature. The 10 and 25% increases in temperature increased the cotton crop's Tc by 12.18 and 28.54%, the corresponding Es by 22.32 and 37.88%, and the irrigation water allocation by 18.41 and 47.83%, respectively, over the average base values. (author)
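The one-at-a-time ±10%/±25% procedure described above can be sketched generically. The Hargreaves-like ET expression, coefficients, and base values below are hypothetical stand-ins for the Mehran model and its inputs.

```python
# Toy "model": a Hargreaves-like daily ET estimate plus a small wind term.
# The formula, coefficients, and base values are hypothetical stand-ins
# for the Mehran model's inputs and outputs.
def et_model(t_max, t_min, wind, sunshine):
    t_mean = (t_max + t_min) / 2
    return (0.0023 * (t_mean + 17.8) * (t_max - t_min) ** 0.5 * sunshine
            + 0.05 * wind)

base = {"t_max": 34.0, "t_min": 18.0, "wind": 2.5, "sunshine": 9.0}
et_base = et_model(**base)

# One-at-a-time perturbations of +/-10 % and +/-25 %, reporting the mean
# percent error (ME%) of the perturbed output against the base output.
worst = {}
for name in base:
    for frac in (-0.25, -0.10, 0.10, 0.25):
        p = dict(base)
        p[name] = base[name] * (1 + frac)
        me = 100 * (et_model(**p) - et_base) / et_base
        worst[name] = max(worst.get(name, 0.0), abs(me))
        print(f"{name:9s} {frac:+.0%}: ME = {me:+7.2f}%")
print("most influential input:", max(worst, key=worst.get))
```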

  20. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for a twice-a-day forecast at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we examine the forecast sensitivity to different model initial conditions and try to understand the important features that may contribute to the success of the forecast.

  1. Projected sensitivity of the SuperCDMS SNOLAB experiment

    Energy Technology Data Exchange (ETDEWEB)

    Agnese, R.; Anderson, A. J.; Aramaki, T.; Arnquist, I.; Baker, W.; Barker, D.; Basu Thakur, R.; Bauer, D. A.; Borgland, A.; Bowles, M. A.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Calkins, R.; Cartaro, C.; Cerdeño, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Gerbier, G.; Ghaith, M.; Godfrey, G. L.; Golwala, S. R.; Hall, J.; Harris, H. R.; Hofer, T.; Holmgren, D.; Hong, Z.; Hoppe, E.; Hsu, L.; Huber, M. E.; Iyer, V.; Jardin, D.; Jastram, A.; Kelsey, M. H.; Kennedy, A.; Kubik, A.; Kurinsky, N. A.; Leder, A.; Loer, B.; Lopez Asamar, E.; Lukens, P.; Mahapatra, R.; Mandic, V.; Mast, N.; Mirabolfathi, N.; Moffatt, R. A.; Morales Mendoza, J. D.; Orrell, J. L.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Poudel, S.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Roberts, A.; Robinson, A. E.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Serfass, B.; Speller, D.; Stein, M.; Street, J.; Tanaka, H. A.; Toback, D.; Underwood, R.; Villano, A. N.; von Krosigk, B.; Welliver, B.; Wilson, J. S.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, X.; Zhao, X.

    2017-04-07

    SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass (< 10 GeV/c$^2$) particles that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections ~ 1 x 10$^{-43}$ cm$^2$ for a dark matter particle mass of 1 GeV/c$^2$, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low energy recoils will be needed to optimize the running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced $^{3}$H and naturally occurring $^{32}$Si will be present in the detectors at some level. Even if these backgrounds are ten times higher than expected, the science reach of the HV detectors would be over three orders of magnitude beyond current results for a dark matter mass of 1 GeV/c$^2$. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for larger dark matter particle masses (> 5 GeV/c$^2$). The mix of detector types (HV and iZIP) and targets (germanium and silicon) planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach and understand the backgrounds that the experiment will encounter. Upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the "neutrino floor", where coherent scatters of solar neutrinos become a limiting background.

  2. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  3. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem described by a single first-order ordinary differential equation. In this analysis, we study the sensitivity of the solution to a variation of the initial condition ...
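The initial-value sensitivity in question can be illustrated on a single first-order ODE. The decay model, rate constant, and explicit-Euler discretization are assumptions for the sketch; for this linear case a 10% change in the initial condition propagates to the final state unchanged, which a nonlinear model would generally not do.

```python
# Assumed model: linear decay dC/dt = -k*C solved by explicit Euler.
def solve(c0, k=0.3, dt=0.01, n_steps=500):
    c = c0
    for _ in range(n_steps):
        c += dt * (-k * c)   # one explicit Euler step
    return c

c_base = solve(1.0)          # nominal initial condition
c_pert = solve(1.1)          # initial condition perturbed by +10 %
rel_out = (c_pert - c_base) / c_base
print(f"+10% change in C(0) -> {rel_out:.1%} change in the final state")
```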

  4. Model dependence of isospin sensitive observables at high densities

    International Nuclear Information System (INIS)

    Guo, Wen-Mei; Yong, Gao-Chan; Wang, Yongjia; Li, Qingfeng; Zhang, Hongfei; Zuo, Wei

    2013-01-01

    Within two different frameworks of isospin-dependent transport models, i.e., the Boltzmann-Uehling-Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron to proton ratio of free nucleons, the π−/π+ ratio, as well as the isospin-sensitive transverse and elliptic flows given by the two transport models with their "best settings" all show obvious differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, while the discrepancies in the charged π−/π+ ratio and the isospin-sensitive flows mainly originate from the different isospin-dependent nucleon-nucleon cross sections. These demonstrations call for more detailed studies of the inputs of isospin-dependent transport models (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon-nucleon cross section). Studies of the model dependence of isospin-sensitive observables can help nuclear physicists to pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations

  5. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

    Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k_eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k_eff discrepancies between libraries were decomposed to identify their sources in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)

  6. Screening design for model sensitivity studies

    Science.gov (United States)

    Welsh, James P.; Koenig, George G.; Bruce, Dorothy

    1997-07-01

    This paper describes a different approach to sensitivity studies for environmental, including atmospheric, physics models. The sensitivity studies were performed on a multispectral environmental data and scene generation capability. The capability includes environmental physics models that are used to generate data and scenes for simulation of environmental materials, features, and conditions, such as trees, clouds, soils, and snow. These studies were performed because it is difficult to obtain input data for many of the environmental variables. The problem is to determine which of the 100 or so input variables used by the generation capability are the most important. These sensitivity studies focused on the generation capabilities needed to predict and evaluate the performance of sensor systems operating in the infrared portions of the electromagnetic spectrum. The sensitivity study approach described uses a screening design. Screening designs are analytical techniques that use a fraction of all of the combinations of the potential input variables and conditions to determine which are the most important. Specifically, a 20-run Plackett-Burman screening design was used to study the sensitivity of eight data and scene generation capability computed response variables to 11 selected input variables. This is a two-level design, meaning that the range of conditions is represented by two different values for each of the 11 selected variables. The eight response variables used were the maximum, minimum, range, and mode of the model-generated temperature and radiance values. The result is that six of the 11 input variables (soil moisture, solar loading, roughness length, relative humidity, surface albedo, and surface emissivity) had a statistically significant effect on the response variables.
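The effect-estimation logic of a two-level screening design can be shown in miniature on a full 2^3 factorial; a 20-run Plackett-Burman design estimates main effects the same way, just from a fraction of the runs. The response function and factor names below are hypothetical.

```python
from itertools import product

# Miniature two-level design: a full 2^3 factorial with coded -1/+1
# levels.  The response function and its coefficients are invented.
def response(soil_moisture, solar_loading, albedo):
    return (300 + 8 * solar_loading - 5 * soil_moisture
            + 2 * albedo + 0.5 * solar_loading * soil_moisture)

factors = ["soil_moisture", "solar_loading", "albedo"]
runs = list(product([-1, 1], repeat=3))        # all 8 level combinations
y = [response(*r) for r in runs]

# Main effect of factor j = mean(y | factor high) - mean(y | factor low);
# two-factor interactions average out of these contrasts.
effects = {}
for j, name in enumerate(factors):
    hi = [y[i] for i, r in enumerate(runs) if r[j] == 1]
    lo = [y[i] for i, r in enumerate(runs) if r[j] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
print(effects)
```

Ranking the absolute effects (here solar loading dominates) is how a screening study singles out the statistically significant inputs.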

  7. Interest Rate Sensitivity of Savings: The Ghanaian Experience ...

    African Journals Online (AJOL)

    The study sets out to examine the sensitivity of savings to interest rate in Ghana during the period 1970 to 2006. In line with this objective, a theoretical model of savings function was specified with real interest rate as the main independent variable. Other independent variables included real income, inflation rate and real ...

  8. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. The four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, different assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased, compared to the background methane chemistry, by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  9. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...... symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment....

  10. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  11. Projected sensitivity of the SuperCDMS SNOLAB experiment

    Energy Technology Data Exchange (ETDEWEB)

    Agnese, R.; Anderson, A. J.; Aramaki, T.; Arnquist, I.; Baker, W.; Barker, D.; Basu Thakur, R.; Bauer, D. A.; Borgland, A.; Bowles, M. A.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Calkins, R.; Cartaro, C.; Cerdeño, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Gerbier, G.; Ghaith, M.; Godfrey, G. L.; Golwala, S. R.; Hall, J.; Harris, H. R.; Hofer, T.; Holmgren, D.; Hong, Z.; Hoppe, E.; Hsu, L.; Huber, M. E.; Iyer, V.; Jardin, D.; Jastram, A.; Kelsey, M. H.; Kennedy, A.; Kubik, A.; Kurinsky, N. A.; Leder, A.; Loer, B.; Lopez Asamar, E.; Lukens, P.; Mahapatra, R.; Mandic, V.; Mast, N.; Mirabolfathi, N.; Moffatt, R. A.; Morales Mendoza, J. D.; Orrell, J. L.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Poudel, S.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Roberts, A.; Robinson, A. E.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Serfass, B.; Speller, D.; Stein, M.; Street, J.; Tanaka, H. A.; Toback, D.; Underwood, R.; Villano, A. N.; von Krosigk, B.; Welliver, B.; Wilson, J. S.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, X.; Zhao, X.

    2017-04-01

    SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass particles (with masses ≤ 10 GeV/c^2) that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections ~1×10^-43 cm^2 for a dark matter particle mass of 1 GeV/c^2, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low-energy recoils will be needed to optimize running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced H-3 and naturally occurring Si-32 will be present in the detectors at some level. Even if these backgrounds are 10 times higher than expected, the science reach of the HV detectors would be over 3 orders of magnitude beyond current results for a dark matter mass of 1 GeV/c^2. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for dark matter particles with masses ≳5 GeV/c^2. The mix of detector types (HV and iZIP), and targets (germanium and silicon), planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach, and understand the backgrounds that the experiment will encounter. Upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the “neutrino floor,” where coherent scatters of solar neutrinos become a limiting background.

  12. MARKETING MODELS APPLICATION EXPERIENCE

    Directory of Open Access Journals (Sweden)

    A. Yu. Rymanov

    2011-01-01

    Full Text Available Marketing models are used for the assessment of such marketing elements as sales volume, market share, market attractiveness, advertising costs, product pushing and selling, profit, and profitability. A classification of buying-decision models is presented. SWOT- and GAP-based models are best for selling assessments. Lately, there is a tendency to move from assessment on the basis of financial indices to assessment on the basis of non-financial ones. From the marketing viewpoint, the most important are models of long-term company activities and of consumer attraction, as well as operative models of market attractiveness.

  13. Modelling Urban Experiences

    DEFF Research Database (Denmark)

    Jantzen, Christian; Vetner, Mikael

    2008-01-01

    How can urban designers develop an emotionally satisfying environment not only for today's users but also for coming generations? Which devices can they use to elicit interesting and relevant urban experiences? This paper attempts to answer these questions by analyzing the design of Zuidas, a new...

  14. Sensitivity of the ATLAS experiment to discover the decay H → ττ → ll+4ν of the Standard Model Higgs Boson produced in vector boson fusion

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, Martin

    2011-05-17

    A study of the expected sensitivity of the ATLAS experiment to discover the Standard Model Higgs boson produced via vector boson fusion (VBF) and its decay to H → ττ → ll+4ν is presented. The study is based on simulated proton-proton collisions at a centre-of-mass energy of 14 TeV. For the first time the discovery potential is evaluated in the presence of additional proton-proton interactions (pile-up) to the process of interest in a complete and consistent way. Special emphasis is placed on the development of background estimation techniques to extract the main background processes, Z → ττ and t anti t production, using data. The t anti t background is estimated using a control sample selected with the VBF analysis cuts and the inverted b-jet veto. The dominant background process Z → ττ is estimated using Z → μμ events. Replacing the muons of the Z → μμ event with simulated τ-leptons, Z → ττ events are modelled to high precision. For the replacement of the Z boson decay products a dedicated method based on tracks and calorimeter cells is developed. Without pile-up a discovery potential of 3σ to 3.4σ is obtained in the mass range 115 GeV [...]. In the presence of pile-up the signal sensitivity decreases to 1.7σ to 1.9σ, mainly caused by the worse resolution of the reconstructed missing transverse energy.

  15. Sensitivity of the ATLAS experiment to discover the decay H→ ττ →ll+4ν of the Standard Model Higgs Boson produced in vector boson fusion

    International Nuclear Information System (INIS)

    Schmitz, Martin

    2011-01-01

    A study of the expected sensitivity of the ATLAS experiment to discover the Standard Model Higgs boson produced via vector boson fusion (VBF) and its decay to H → ττ → ll+4ν is presented. The study is based on simulated proton-proton collisions at a centre-of-mass energy of 14 TeV. For the first time the discovery potential is evaluated in the presence of additional proton-proton interactions (pile-up) to the process of interest in a complete and consistent way. Special emphasis is placed on the development of background estimation techniques to extract the main background processes, Z→ττ and t anti t production, using data. The t anti t background is estimated using a control sample selected with the VBF analysis cuts and the inverted b-jet veto. The dominant background process Z→ττ is estimated using Z→μμ events. Replacing the muons of the Z→μμ event with simulated τ-leptons, Z→ττ events are modelled to high precision. For the replacement of the Z boson decay products a dedicated method based on tracks and calorimeter cells is developed. Without pile-up a discovery potential of 3σ to 3.4σ is obtained in the mass range 115 GeV [...]. In the presence of pile-up the signal sensitivity decreases to 1.7σ to 1.9σ, mainly caused by the worse resolution of the reconstructed missing transverse energy.

  16. Experiment of solidifying photo sensitive polymer by using UV LED

    Science.gov (United States)

    Kang, Byoung Hun; Shin, Sung Yeol

    2008-11-01

    The development of nano/micro manufacturing technologies is growing rapidly, and investments in these areas are increasing accordingly. Applications of nano/micro technologies extend to semiconductor production, biotechnology, environmental engineering, chemical engineering, and aerospace. Stereolithography (SLA), which manufactures 3D-shaped microstructures using a UV laser and a photosensitive polymer, is one of the most popular of these applications. To fabricate microstructures with the accuracy and precision required by diverse industrial fields, information on the interaction between the photo resin and the light source is needed. This paper reports an experiment on solidifying photosensitive polymer using a UV LED; the purpose of the study is to determine how the curing reaction of the resin depends on the wavelength, the power of the light, and the exposure time.

  17. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainties in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
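    The propagation scheme described above — sample each model's empirical residual distribution and push the samples through the model chain — can be sketched with a toy two-stage chain. The residual distributions, transposition factor, and efficiency below are invented for illustration, not the report's values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical empirical residuals for two stages of the model chain (in the
# report these come from measured data; the normal draws here are stand-ins)
resid_poa = rng.normal(0.0, 20.0, 500)      # plane-of-array residuals, W/m^2
resid_eff = rng.normal(0.0, 0.01, 500)      # fractional power-model residuals

def poa_model(ghi):
    return 1.1 * ghi                        # toy transposition model

def power_model(poa):
    return 0.18 * poa                       # toy irradiance-to-power model

ghi = 800.0                                 # measured global horizontal irradiance
n = 10000

# Propagate uncertainty: bootstrap a residual for each stage and push the
# samples through the chain to get an empirical output distribution
poa = poa_model(ghi) + rng.choice(resid_poa, n)
power = power_model(poa) * (1.0 + rng.choice(resid_eff, n))

print(f"mean {power.mean():.1f}, relative spread {power.std() / power.mean():.1%}")
```

    The relative spread of the final distribution plays the role of the report's ~1% daily-energy uncertainty; comparing runs with one stage's residuals zeroed out identifies the dominant contributor.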

  18. CHARM2000 (Future of High-Sensitivity Charm Experiments)

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    The fourth quark, charm, may not hit the headlines these days as much as its heavier cousins, but it still has a lot of physics to give. From June 7-9, over 100 attendees heard 35 plenary talks at Fermilab on the Future of High-Sensitivity Charm Experiments. Twelve working groups focused on the physics opportunities and technical challenges. Speakers representing the CLEO (Cornell), BES (Beijing), SLAC B-Factory, Fermilab E653, E687/831, E769/791, E781, and CERN WA82/92 and WA89 collaborations reviewed the current status and future prospects.

  19. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  20. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
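    A variance-based global sensitivity analysis of the kind described above can be sketched on a toy model with a deliberate parameter interaction; comparing first-order and total-order Sobol indices is one standard way to expose such interactions. The model, ranges, and estimators below are illustrative, not the CPM study's:

```python
import numpy as np

def f(x):
    # Toy model: main effect of x1 plus an x1*x2 interaction; x3 is inert
    return x[:, 0] + 2.0 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(1)
N, d = 50000, 3
A = rng.uniform(-1, 1, (N, d))
B = rng.uniform(-1, 1, (N, d))
fA, fB = f(A), f(B)
var = np.concatenate([fA, fB]).var()

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                               # "pick-freeze" matrix
    fABi = f(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)        # first-order (Saltelli)
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total-order (Jansen)

print([round(s, 2) for s in S1])
print([round(s, 2) for s in ST])
# A gap between total- and first-order index flags interactions (here x1, x2)
```

    For this model the analytic values are S1 = (3/7, 0, 0) and ST = (1, 4/7, 0): x2 has no first-order effect at all, yet a large total-order index, exactly the kind of interaction that single-parameter experiments would miss.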

  1. Sensitivity Study of Stochastic Walking Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2010-01-01

    is to employ a stochastic load model accounting for mean values and standard deviations of the walking load parameters, and to use this as a basis for estimation of structural response. This, however, requires decisions to be made in terms of statistical distributions and their parameters, and the paper investigates whether statistical distributions of bridge response are sensitive to some of the decisions made by the engineer doing the analyses. For the paper a selected part of the potential influences are examined; footbridge responses are extracted using Monte Carlo simulations and focus is on estimating...

  2. Applying incentive sensitization models to behavioral addiction.

    Science.gov (United States)

    Rømer Thomsen, Kristine; Fjorback, Lone O; Møller, Arne; Lou, Hans C

    2014-09-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Design sensitivity and statistical power in acceptability judgment experiments

    Directory of Open Access Journals (Sweden)

    Jon Sprouse

    2017-02-01

    Full Text Available Previous investigations into the validity of acceptability judgment data have focused almost exclusively on 'type I errors' (or false positives) because of the consequences of such errors for syntactic theories (Sprouse & Almeida 2012; Sprouse et al. 2013). The current study complements these previous studies by systematically investigating the 'type II error rate' (false negatives), or equivalently, the 'statistical power', of a wide cross-section of possible acceptability judgment experiments. Though type II errors have historically been assumed to be less costly than type I errors, the dynamics of scientific publishing mean that high type II error rates (i.e., studies with low statistical power) can lead to increases in type I error rates in a given field of study. We present a set of experiments and resampling simulations to estimate statistical power for four tasks (forced-choice, Likert scale, magnitude estimation, and yes-no), 50 effect sizes instantiated by real phenomena, sample sizes from 5 to 100 participants, and two approaches to statistical analysis (null hypothesis and Bayesian). Our goals are twofold: (i) to provide a fuller picture of the status of acceptability judgment data in syntax, and (ii) to provide detailed information that syntacticians can use to design and evaluate the sensitivity of acceptability judgment experiments in their own research.
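    The simulation logic behind such power estimates can be sketched in a few lines: simulate many experiments at a given effect size and sample size, run the test on each, and report the fraction of detections. The Welch test with a fixed critical value of 2.0 and normal ratings are simplifications; the paper's simulations resample real judgment data across four tasks:

```python
import numpy as np

rng = np.random.default_rng(7)

def welch_t(a, b):
    # Welch's two-sample t statistic
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def power(effect_size, n, sims=2000, crit=2.0):
    # Fraction of simulated experiments in which a two-sided test
    # (normal-approximation critical value) detects the effect
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)          # condition A ratings (z-scored)
        b = rng.normal(effect_size, 1.0, n)  # condition B shifted by Cohen's d
        hits += abs(welch_t(a, b)) > crit
    return hits / sims

for n in (10, 25, 50, 100):
    print(n, round(power(0.5, n), 2))
```

    Power climbs steeply with sample size for a medium effect (d = 0.5), which is the basic shape of the design-sensitivity tables the study reports.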

  4. Tackifier Mobility in Model Pressure Sensitive Adhesives

    Science.gov (United States)

    Paiva, Adriana; Li, Xiaoqing

    1997-03-01

    A systematic study of the molecular mobility of tackifier in a pressure sensitive adhesive (PSA) has been done for the first time. The objective is to relate changes in adhesive performance with tackifier loading to tackifier mobility. The study focused first on a model PSA consisting of anionically polymerized polyisoprene (PI) (Mw = 300,000, Mw/Mn = 1.05) and a single simple tackifier, the n-butyl ester of abietic acid. This model system is fully miscible at room temperature, and its tack performance has been studied. Tackifier mobility was measured using pulsed-gradient spin-echo NMR as a function of tackifier concentration and temperature. The concentration dependence observed for this adhesive with modestly enhanced performance was weak, indicating that the tackifier neither plasticizes nor antiplasticizes appreciably. Diffusion in a two-phase system of hydrogenated PI with the same tackifier is similar, though the tack of that adhesive varies much more markedly with composition. In contrast, tackifier mobility varies strongly with composition in a PSA composed of PI with a commercial tackifier chemically similar to the model tackifier but having a higher molecular weight and glass transition temperature. * Supported in part by US DOD: ARO (DAAH04-93-G-0410)

  5. Sensitivity analyses of the peach bottom turbine trip 2 experiment

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2003-01-01

    In the light of sustained development in computer technology, the possibilities for code calculations in predicting more realistic transient scenarios in nuclear power plants have been enlarged substantially. It has therefore become feasible to perform best-estimate simulations through the incorporation of three-dimensional modeling of the reactor core into system codes. This method is particularly suited to complex transients that involve strong feedback effects between thermal-hydraulics and kinetics, as well as to transients involving local asymmetric effects. The Peach Bottom turbine trip test is characterized by a prompt core power excursion followed by a self-limiting power behavior. To emphasize and understand the feedback mechanisms involved during this transient, a series of sensitivity analyses were carried out. This should allow the characterization of discrepancies between measured and calculated trends and assess the impact of the thermal-hydraulic and kinetic response of the models used. On the whole, the data comparison revealed a close dependency of the power excursion on the core feedback mechanisms. Thus, for a better best-estimate simulation of the transient, both the thermal-hydraulic and the kinetic models should be made more accurate. (author)

  6. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10^-2). The minimum hazard is produced by the 'three-expert prior' (i.e., values of p are equally likely at 10^-3, 10^-2, and 10^-1). The estimate of that hazard is 1.39 × 10^[...], which is only about one order of magnitude smaller than the maximum value. The term 'hazard' is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.
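    The prior-averaging step — taking the expectation of the hazard over π(p) — can be sketched numerically. The hazard function below is a generic at-least-one-event form with an invented annual rate scaling, not Ho's actual AMRV formulation; only the Beta priors are taken from the abstract:

```python
import numpy as np
from math import gamma

def beta_pdf(p, r, s):
    # Density of the Beta(r, s) prior pi(p)
    return gamma(r + s) / (gamma(r) * gamma(s)) * p ** (r - 1) * (1 - p) ** (s - 1)

def expected_hazard(h, r, s, n=100000):
    # E_pi[h(p)] by midpoint quadrature over the Beta(r, s) prior
    p = (np.arange(n) + 0.5) / n
    return np.mean(h(p) * beta_pdf(p, r, s))

# Illustrative hazard (NOT Ho's model): probability of at least one disruptive
# event in T years given a hypothetical small annual disruption rate c*p
T, c = 10_000, 1e-6
h = lambda p: 1.0 - (1.0 - c * p) ** T

for r, s in [(1, 1), (2, 2), (8, 2), (2, 8)]:
    print((r, s), f"{expected_hazard(h, r, s):.2e}")
```

    A prior concentrated at high p, such as Beta(8, 2), yields the largest expected hazard, which mirrors the abstract's finding that the convenience priors drive the estimate.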

  7. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
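    The local/global distinction the review draws can be illustrated on a toy two-parameter model: a normalized finite-difference derivative at a nominal point (local), versus a variance decomposition over the whole input range (global). The model, parameter ranges, and binning estimator are invented for illustration:

```python
import numpy as np

def model(k1, k2):
    # Toy response: fraction of flux through the first of two competing rates
    return k1 / (k1 + k2)

def local_sensitivity(f, p0, i, h=1e-6):
    # Normalized local sensitivity d(ln y)/d(ln p_i) at the nominal point p0
    p = list(p0)
    p[i] += h
    return (f(*p) - f(*p0)) / h * p0[i] / f(*p0)

p0 = [2.0, 1.0]
print([round(local_sensitivity(model, p0, i), 3) for i in range(2)])  # -> [0.333, -0.333]

# Global: first-order variance-based indices, crudely estimated by binning
rng = np.random.default_rng(0)
N = 20000
K1 = rng.uniform(0.5, 4.0, N)
K2 = rng.uniform(0.5, 4.0, N)
Y = model(K1, K2)

def first_order(X, Y, bins=40):
    # Var(E[Y | X]) / Var(Y), with E[Y | X] estimated on quantile bins of X
    edges = np.quantile(X, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.digitize(X, edges)
    means = np.array([Y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return (weights * (means - Y.mean()) ** 2).sum() / Y.var()

print(round(first_order(K1, Y), 2), round(first_order(K2, Y), 2))
```

    The local indices describe behavior only near p0, while the global indices average over the full parameter uncertainty; for strongly non-linear systems-biology models the two rankings can disagree, which is the review's core point.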

  8. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    The problem of derivation and calculation of sensitivity functions for all parameters of the mass balance reduced model of the COST benchmark activated sludge plant is formulated and solved. The sensitivity functions, equations and augmented sensitivity state space models are derived for the cases of ASM1 and UCT ...

  9. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need of adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
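    The Balloon model analyzed here couples a flow-inducing signal, blood inflow, venous volume, and deoxyhemoglobin content through four ODEs. A minimal forward simulation (simple Euler integration, with typical literature parameter values rather than the paper's calibrated estimates) looks like:

```python
import numpy as np

# Canonical Balloon/hemodynamic parameters (typical literature values;
# treated here as assumptions, not the paper's estimates)
eps, tau_s, tau_f = 0.5, 0.8, 0.4       # neuronal efficacy, signal decay/feedback
tau0, alpha, E0, V0 = 1.0, 0.32, 0.4, 0.04

def bold(q, v):
    # BOLD observation equation
    k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2
    return V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))

def simulate(u, dt=0.01):
    s, f, v, q, out = 0.0, 1.0, 1.0, 1.0, []
    for ut in u:
        E = 1 - (1 - E0) ** (1 / f)                      # oxygen extraction
        ds = eps * ut - s / tau_s - (f - 1) / tau_f      # flow-inducing signal
        df = s                                           # blood inflow
        dv = (f - v ** (1 / alpha)) / tau0               # venous balloon volume
        dq = (f * E / E0 - q * v ** (1 / alpha - 1)) / tau0  # deoxyhemoglobin
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        out.append(bold(q, v))
    return np.array(out)

t = np.arange(0, 30, 0.01)
u = ((t > 1) & (t < 11)).astype(float)   # 10 s block stimulus
y = simulate(u)
print(f"peak BOLD change: {y.max():.4f} at t = {t[y.argmax()]:.1f} s")
```

    Sensitivity analysis then asks how strongly y(t) responds to perturbations of each parameter; the near-flat response to some of them is precisely the identifiability problem the paper studies.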

  10. Complete Sensitivity/Uncertainty Analysis of LR-0 Reactor Experiments with MSRE FLiBe Salt and Perform Comparison with Molten Salt Cooled and Molten Salt Fueled Reactor Models

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nicholas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Powers, Jeffrey J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mueller, Don [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    In September 2016, reactor physics measurements were conducted at Research Centre Rez (RC Rez) using the FLiBe (2 7LiF + BeF2) salt from the Molten Salt Reactor Experiment (MSRE) in the LR-0 low power nuclear reactor. These experiments were intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems using FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL), in collaboration with RC Rez, performed sensitivity/uncertainty (S/U) analyses of these experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objectives of these analyses were (1) to identify potential sources of bias in fluoride salt-cooled and salt-fueled reactor simulations resulting from cross section uncertainties, and (2) to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a final report on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. In the future, these S/U analyses could be used to inform the design of additional FLiBe-based experiments using the salt from MSRE. The key finding of this work is that, for both solid and liquid fueled fluoride salt reactors, radiative capture in 7Li is the most significant contributor to potential bias in neutronics calculations within the FLiBe salt.

  11. Global sensitivity analysis of thermomechanical models in modelling of welding

    International Nuclear Information System (INIS)

    Petelet, M.

    2008-01-01

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty in input data on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology requires some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. The sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  12. Is structural sensitivity a problem of oversimplified biological models? Insights from nested Dynamic Energy Budget models.

    Science.gov (United States)

    Aldebert, Clement; Kooi, Bob W; Nerini, David; Poggiale, Jean-Christophe

    2018-03-14

    Many current issues in ecology require predictions made by mathematical models, which are built on somewhat arbitrary choices. Their consequences are assessed by sensitivity analysis, which quantifies how changes in model parameters propagate into an uncertainty in model predictions. An extension called structural sensitivity analysis deals with changes in the mathematical description of complex processes like predation. Such processes are described at the population scale by a specific mathematical function chosen from among similar ones, a choice that can strongly drive model predictions. However, structural sensitivity has so far only been studied in simple theoretical models. Here, we ask whether structural sensitivity is a problem of oversimplified models. We found in predator-prey models describing chemostat experiments that these models are less structurally sensitive to the choice of a specific functional response if they include mass-balance resource dynamics and individual maintenance. Neglecting these processes in an ecological model (for instance by using the well-known logistic growth equation) is not only an inappropriate description of the ecological system, but also a source of more uncertain predictions. Copyright © 2018. Published by Elsevier Ltd.
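
A minimal sketch of the structural-sensitivity issue: two saturating functional responses (Holling type II and Ivlev), calibrated to nearly coincide at a reference prey density, are swapped inside the same toy logistic prey-predator model. All parameter values are hypothetical, not those of the chemostat models in the paper:

```python
import math

def simulate(functional_response, T=300.0, dt=0.01):
    """Euler integration of a prey-predator model with a swappable
    functional response (all parameter values are illustrative)."""
    r, K = 1.0, 2.5        # prey growth rate and carrying capacity
    e, m = 0.5, 0.3        # conversion efficiency, predator mortality
    x, y = 1.0, 0.5        # initial prey and predator densities
    prey = []
    for _ in range(int(T / dt)):
        f = functional_response(x)
        x += dt * (r * x * (1.0 - x / K) - f * y)
        y += dt * (e * f * y - m * y)
        prey.append(x)
    return prey

holling2 = lambda x: x / (1.0 + 0.5 * x)                # saturating, max 2
ivlev = lambda x: 2.0 * (1.0 - math.exp(-0.3466 * x))   # same max, f(2) ~ 1

# Same qualitative shape, near-identical at x = 2, yet different predictions:
avg = lambda ts: sum(ts[len(ts) // 2:]) / (len(ts) // 2)
prey_h, prey_i = simulate(holling2), simulate(ivlev)
```

The long-run prey densities settle near distinct equilibria (about 0.86 vs 1.03 here), illustrating how an apparently innocuous choice of functional response shifts model predictions.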

  13. Metals Are Important Contact Sensitizers: An Experience from Lithuania

    Directory of Open Access Journals (Sweden)

    Kotryna Linauskienė

    2017-01-01

    Background. Metals are very frequent sensitizers causing contact allergy and allergic contact dermatitis worldwide; up-to-date data based on patch test results have proved useful for the identification of the problem. Objectives. In this retrospective study, the prevalence of contact allergy to metals (nickel, chromium, palladium, gold, cobalt, and titanium) in Lithuania is analysed. Patients/Methods. Clinical and patch test data of 546 patients patch tested in 2014–2016 in Vilnius University Hospital Santariskiu Klinikos were analysed and compared with previously published data. Results. Almost a third of the tested patients (29.56%) were sensitized to nickel. Younger women were more often sensitized to nickel than older ones (36% versus 22.8%, p=0.0011). Women were significantly more often sensitized to nickel than men (33% versus 6.1%, p<0.0001). Younger patients were more often sensitized to cobalt (11.6% versus 5.7%, p=0.0183). Sensitization to cobalt was related to sensitization to nickel (p<0.0001). Face dermatitis and oral discomfort were related to gold allergy (28% versus 6.9% for dermatitis of other parts, p<0.0001). Older patients were patch test positive to gold(I) sodium thiosulfate statistically significantly more often than younger ones (44.44% versus 21.21%, p=0.0281). Conclusions. Nickel, gold, cobalt, and chromium are the leading metal sensitizers in Lithuania. Cobalt sensitization is often accompanied by sensitization to nickel. The sensitization rates to palladium and nickel indicate possible cross-reactivity. No sensitization to titanium was found.

  14. Temperature sensitivity of a numerical pollen forecast model

    Science.gov (United States)

    Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone

    2016-04-01

    Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warning before an increase of the atmospheric pollen concentration means substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means to support the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the onset of flowering and during flowering. Phenological models are sensitive to a bias in the temperature. A mean bias of -1°C in the input temperature can shift the modelled entry date of a phenological phase by about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature is also calculated and its effect on the numerical pollen forecast procedure presented.
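
The temperature-bias effect described above can be illustrated with a minimal growing-degree-day phenology sketch; the base temperature, GDD threshold, and synthetic warming trend are assumed values for illustration only:

```python
def flowering_onset(daily_mean_temp, base=5.0, threshold=70.0):
    """Day on which accumulated growing degree days (GDD above a base
    temperature) first reach a flowering threshold -- a minimal stand-in
    for the thermal-time phenological models discussed above."""
    gdd = 0.0
    for day, t in enumerate(daily_mean_temp):
        gdd += max(t - base, 0.0)
        if gdd >= threshold:
            return day
    return None

# Synthetic spring: the daily mean warms by 0.15 degC per day from 2 degC.
temps = [2.0 + 0.15 * d for d in range(120)]
onset = flowering_onset(temps)
onset_cold_bias = flowering_onset([t - 1.0 for t in temps])
# A -1 degC input bias delays the modelled onset by roughly a week.
```

With these assumed settings, the biased input shifts the modelled entry date by several days, consistent with the order of magnitude quoted in the abstract.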

  15. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present...... of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; 4) and understanding physiological validation as an iterative process...... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both....

  16. Debris flows: Experiments and modelling

    Science.gov (United States)

    Turnbull, Barbara; Bowman, Elisabeth T.; McElwaine, Jim N.

    2015-01-01

    Debris flows and debris avalanches are complex, gravity-driven currents of rock, water and sediments that can be highly mobile. This combination of component materials leads to a rich morphology and unusual dynamics, exhibiting features of both granular materials and viscous gravity currents. Although extreme events such as those at Kolka Karmadon in North Ossetia (2002) [1] and Huascarán (1970) [2] strongly motivate us to understand how such high levels of mobility can occur, smaller events are ubiquitous and capable of endangering infrastructure and life, requiring mitigation. Recent progress in modelling debris flows has seen the development of multiphase models that can start to provide clues of the origins of the unique phenomenology of debris flows. However, the spatial and temporal variations that debris flows exhibit make this task challenging and laboratory experiments, where boundary and initial conditions can be controlled and reproduced, are crucial both to validate models and to inspire new modelling approaches. This paper discusses recent laboratory experiments on debris flows and the state of the art in numerical models.

  17. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Three levels of sensitivity, herein termed sensitivity ratings, were established, namely: 'highly sensitive' (rating 1), 'moderately sensitive' (rating 2), and 'not too sensitive' (rating 3). The ratings were based on the amount of error in the measured parameter needed to introduce a ±10% relative error in the predicted Et. The level of ...

  18. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is well established and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
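
For context, here is a sketch of the classical pick-freeze Monte Carlo estimator of first-order Sobol indices for independent inputs, i.e. the baseline that the proposed dependent-input indices generalize; the test model and sample size are illustrative:

```python
import random

def sobol_first_order(model, k, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model with k independent U(0,1) inputs (Saltelli-type
    estimator on two independent sample matrices A and B)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    f0 = sum(yA) / n
    var = sum((y - f0) ** 2 for y in yA) / n
    indices = []
    for i in range(k):
        # A with its i-th column swapped for B's i-th column
        ABi = [A[j][:i] + [B[j][i]] + A[j][i + 1:] for j in range(n)]
        yABi = [model(x) for x in ABi]
        Vi = sum(yB[j] * (yABi[j] - yA[j]) for j in range(n)) / n
        indices.append(Vi / var)
    return indices

# Additive test model Y = X1 + 2*X2: analytic indices are 0.2 and 0.8.
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], k=2)
```

For an additive model with independent inputs the first-order indices sum to one; it is precisely this clean decomposition that breaks down under input dependence, motivating the indices proposed in the paper.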

  19. Oral sensitization to food proteins: A Brown Norway rat model

    NARCIS (Netherlands)

    Knippels, L.M.J.; Penninks, A.H.; Spanhaak, S.; Houben, G.F.

    1998-01-01

    Background: Although several in vivo antigenicity assays using parenteral immunization are operational, no adequate enteral sensitization models are available to study food allergy and allergenicity of food proteins. Objective: This paper describes the development of an enteral model for food

  20. Sensitivity Analysis on Flexible Road Pavement Life Cycle Cost Model

    African Journals Online (AJOL)


    Sensitivity analysis is a tool used in the assessment of a model's performance. This study examined the application of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study area is Effurun, Uvwie Local Government Area of Delta State of Nigeria. In order to ...

  1. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on large-CPU-time computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows us to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' allows us to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  2. A Sensitivity Study of the Navier-Stokes- α Model

    Science.gov (United States)

    Breckling, Sean; Neda, Monika

    2017-11-01

    We present a sensitivity study of the Navier-Stokes-α model (NSα) with respect to perturbations of the differential filter length α. Parameter sensitivity is evaluated using the sensitivity equations method. Once formulated, the sensitivity equations are discretized and computed alongside the NSα model using the same finite elements in space, and Crank-Nicolson in time. We provide a complete stability analysis of the scheme, along with sensitivity results for several benchmark problems in both 2D and 3D. We further demonstrate a practical technique to determine the reliability of the NSα model in problem-specific settings. Lastly, we investigate the sensitivity and reliability of important functionals of the velocity and pressure solutions.
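
The sensitivity equations method can be illustrated on a scalar toy ODE rather than NSα: differentiating du/dt = -αu with respect to α yields an auxiliary equation for s = ∂u/∂α that is integrated alongside the state. Forward Euler is used here for brevity (the paper uses finite elements and Crank-Nicolson):

```python
import math

def decay_with_sensitivity(alpha=1.0, T=2.0, dt=1e-3):
    """Sensitivity equations method on a scalar toy problem:
    solve du/dt = -alpha*u together with s = du/dalpha, which obeys
    ds/dt = -alpha*s - u with s(0) = 0."""
    u, s = 1.0, 0.0
    for _ in range(int(T / dt)):
        # simultaneous update: the s equation uses the old u
        u, s = u + dt * (-alpha * u), s + dt * (-alpha * s - u)
    return u, s

u, s = decay_with_sensitivity()
# Exact solution: u = exp(-alpha*T), s = -T*exp(-alpha*T)
```

The computed s approximates the exact parameter derivative without any re-solves at perturbed α, which is the practical appeal of the method over finite differencing.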

  3. Coping with drought: the experience of water sensitive urban design ...

    African Journals Online (AJOL)

    This study investigated the extent of Water Sensitive Urban Design (WSUD) activities in the George Municipality in the Western Cape Province, South Africa, and its impact on water consumption. The WSUD approach aims to influence design and planning from the moment rainwater is captured in dams, to when it is treated, ...

  4. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analyses of small-break loss-of-coolant accidents with RELAP5, it is found that the resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  5. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  6. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher...... discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging...

  7. Significant Life Experiences Revisited: A Review of Research on Sources of Environmental Sensitivity.

    Science.gov (United States)

    Chawla, Louise

    1998-01-01

    States that environmental sensitivity, an important variable in environmental awareness and in the predisposition to take responsible environmental action, has been the subject of research in which sensitivity is associated with particular kinds of significant life experiences. Reviews studies of the significant life experiences of environmental…

  8. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, P.M.; Madsen, Kristoffer H; Lund, T.E.

    on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...... show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher...

  9. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    Science.gov (United States)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  10. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site
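
An elementary embedment-depth model of the kind referred to above balances the impact kinetic energy against an average sediment resistance force; the mass, speed, and resistance values below are purely illustrative, not data from the trials:

```python
def embedment_depth(mass, impact_speed, resistance):
    """Elementary energy balance: the penetrator's kinetic energy at
    impact is dissipated over depth D by an average penetration
    resistance force F, so D = m*v**2 / (2*F). Penetrator weight and
    buoyancy during penetration are neglected in this sketch."""
    return mass * impact_speed ** 2 / (2.0 * resistance)

# Hypothetical example: a 2 t penetrator striking at 50 m/s into a
# sediment offering an assumed 85 kN average resistance.
depth = embedment_depth(mass=2000.0, impact_speed=50.0, resistance=85e3)
```

Once the site resistance has been back-calculated from one velocity profile, the same formula scales the prediction to penetrators of other sizes and weights, which is the tentative conclusion of the report.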

  11. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  12. Modeling retinal high and low contrast sensitivity filters

    NARCIS (Netherlands)

    Lourens, T; Mira, J; Sandoval, F

    1995-01-01

    In this paper two types of ganglion cells in the visual system of mammals (monkey) are modeled: a high-contrast-sensitive type, the so-called M-cells, which project to the two magnocellular layers of the lateral geniculate nucleus (LGN), and a low-contrast-sensitive type, the P-cells, which project to the

  13. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 x CO{sub 2} sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)

  14. Experimental issues in high-sensitivity charm experiments

    International Nuclear Information System (INIS)

    Appel, J.A.

    1994-07-01

    Progress in the exploration of charm physics at fixed target experiments has been prodigious over the last 15 years. The issue before the CHARM2000 Workshop is whether and how this progress can be continued beyond the next fixed target run. An equivalent of 10^8 fully reconstructed charm decays has been selected as a worthy goal. Underlying all this is the list of physics questions which can be answered by pursuing charm in this way. This paper reviews the experimental issues associated with making this next step. It draws heavily on the experience gathered over the period of rapid progress and, at the end, poses the questions of what is needed and what choices may need to be made

  15. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques in existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
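
The direct differentiation that GRESS automates for FORTRAN codes can be shown in miniature with forward-mode automatic differentiation via dual numbers: each value carries its derivative along through the computation. This is a generic sketch of the technique, not the GRESS implementation:

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def dexp(x):
    # chain rule for exp: (exp f)' = exp(f) * f'
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# d/dx [x * exp(x) + 3x] at x = 1.5, computed alongside the value
x = Dual(1.5, 1.0)
y = x * dexp(x) + 3 * x
```

The derivative `y.der` comes out as an exact (non-finite-difference) first derivative, the analogue of the normalized first derivatives GRESS produces.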

  16. New results from ADMX -- an ultra sensitive axion detection experiment

    Science.gov (United States)

    Asztalos, Steven J.

    2009-11-01

    Axions are hypothetical pseudoscalar particles that exist as a consequence of the Peccei-Quinn solution to the strong-CP problem. Light axions (μeV-meV) are also a natural cold dark matter candidate. One important detection technique is via resonant conversion to microwave photons in a high-Q cavity immersed in a strong magnetic field. In this class of experiment, the signal from the cavity is amplified by an ultralow-noise amplifier and mixed down to the audio frequency range using a double-heterodyne receiver. The power spectrum is obtained by a Fast Fourier Transform, with the putative axion appearing as a narrow line at a frequency corresponding to its rest mass. This detection strategy provides the basis for the Axion Dark Matter eXperiment (ADMX), which has been taking data at Lawrence Livermore National Laboratory (LLNL) since 1996. ADMX has established limits in two distinct data channels: a medium-resolution channel configured to search for "thermalized" axions and a high-resolution channel for detecting axions that have recently fallen into the gravitational well of our galaxy. This talk will present an overview of the newly reconfigured experiment, featuring ultralow-noise first-stage cryogenic SQUID amplifiers, and present the latest results from the two data channels.

  17. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently of ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on different input factors such as topography or soil type. The analysis shows that model evaluation performed at single locations may not be representative of the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity to the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time has an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several

  18. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    ... order to fit the reduced model behaviour to the real data for the process behaviour. Keywords: wastewater treatment, activated sludge process, reduced model, model parameters, sensitivity function, Matlab simulation. Introduction. The problem of effective and optimal control of wastewater treatment plants ...

  19. Experiments beyond the standard model

    International Nuclear Information System (INIS)

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references

  20. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice, the long-term variability of the modelled ice cover, and its sensitivity to some model parameters, as well as to characterize the atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05-0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally varying average ice thickness, with the albedo of bare sea ice and of snow decreased by 0.05 relative to the base variant, shows an ice thickness reduction in the range of 0.2-0.6 m, with the maximum change occurring in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there is a reduction of ice thickness of up to 1 m; however, there is also an area of some increase of the ice layer, mostly of up to 0.2 m (Beaufort Sea). A 0.05 decrease of the sea ice snow albedo alone leads to a reduction of average ice thickness of approximately 0.2 m, a value that depends only slightly on the season. In a further experiment, the influence of the ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/m² in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced in the range of 0.2-0.35 m, with only small seasonal changes of this value. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters

  1. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
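A minimal sketch of the procedure the abstract describes: parameter uncertainty is propagated by Monte Carlo simulation, and the same random draws are then reused in a correlation-based sensitivity analysis. The three lognormal "transfer parameters" and their spreads below are hypothetical stand-ins, not PATHWAY's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical lognormal model parameters (illustrative only, not PATHWAY's).
deposition = rng.lognormal(mean=0.0, sigma=0.5, size=n)
feed_transfer = rng.lognormal(mean=-1.0, sigma=0.3, size=n)
milk_transfer = rng.lognormal(mean=-2.0, sigma=0.4, size=n)

# Uncertainty analysis: propagate the sampled parameters through the model.
output = deposition * feed_transfer * milk_transfer

# Sensitivity analysis: reuse the same random values and correlate each
# parameter with the model output (here on the log scale).
for name, x in [("deposition", deposition),
                ("feed_transfer", feed_transfer),
                ("milk_transfer", milk_transfer)]:
    r = np.corrcoef(np.log(x), np.log(output))[0, 1]
    print(f"{name}: r = {r:.2f}")
```

The parameter with the widest distribution dominates the output correlation, which is exactly the ranking the sensitivity analysis is meant to expose.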

  2. Immunization experiments using the rodent caries model.

    Science.gov (United States)

    Smith, D J; Taubman, M A

    1976-04-01

    Taken together, the immunization experiments which have been performed in the rat caries model system appear to suggest a correlation between the presence of salivary antibody to S mutans and reductions in caries caused by these bacteria. However, the multifactorial nature of this disease does not at present permit the conclusion that the presence of this antibody is both necessary and sufficient to give rise to the demonstrated effects on pathogenesis. To clarify the role of salivary antibody, several refinements may be required in the current model. Immunization procedures that elicit only a local antibody response would both simplify interpretation of effects and be more desirable for use as a vaccine. Such procedures might include intraductal instillation of antigen in the parotid gland, which has been demonstrated to result in this type of response. An additional refinement stems from the knowledge that the kinds of immunization procedures currently used stimulate both cellular immune and soluble antibody systems, potentially giving rise to a rather broad spectrum of immune responses. Therefore, it might be useful to study the effects on S mutans pathogenesis in rats in which certain of these responses have been repressed, for example, by thymectomy, antilymphocyte serum, and so on. Also, each of these approaches would be measurably enhanced by more sensitive techniques to monitor immunological events in the oral cavity. Refinements in the selection and use of relevant antigens of S mutans are also necessary to delineate the in vivo mechanism of immunological interference in the pathogenesis of cariogenic streptococci. Approaches involve the use of purified GTF antigens or cell surface antigens, both in the investigation of these mechanisms in in vitro models using antibody specifically directed to these antigens and in rat immunization experiments using immunogenic preparations of these materials. In addition, alterations in the diet and challenge

  3. Quantifying the sensitivity of oscillation experiments to the neutrino mass ordering

    Energy Technology Data Exchange (ETDEWEB)

    Blennow, Mattias [Department of Theoretical Physics, School of Engineering Sciences,KTH Royal Institute of Technology, AlbaNova University Center,106 91 Stockholm (Sweden); Coloma, Pilar; Huber, Patrick [Center for Neutrino Physics, Virginia Tech,Blacksburg, VA 24061 (United States); Schwetz, Thomas [Max-Planck-Institut für Kernphysik,Saupfercheckweg 1, 69117 Heidelberg (Germany); Oskar Klein Centre for Cosmoparticle Physics,Department of Physics, Stockholm University, SE-10691 Stockholm (Sweden)

    2014-03-05

    Determining the type of the neutrino mass ordering (normal versus inverted) is one of the most important open questions in neutrino physics. In this paper we clarify the statistical interpretation of sensitivity calculations for this measurement. We employ standard frequentist methods of hypothesis testing in order to precisely define terms like the median sensitivity of an experiment. We consider a test statistic T which in a certain limit will be normally distributed. We show that the median sensitivity in this limit is very close to standard sensitivities based on Δχ² values from a data set without statistical fluctuations, as widely used in the literature. Furthermore, we perform an explicit Monte Carlo simulation of the INO, JUNO, LBNE, NOνA, and PINGU experiments in order to verify the validity of the Gaussian limit, and provide a comparison of the expected sensitivities for those experiments.
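The Gaussian-limit statement can be illustrated with a toy Monte Carlo. Assuming (as a sketch of the limit discussed in the paper) that the test statistic T is normally distributed with mean T0 and standard deviation 2·sqrt(T0), the median over many simulated experiments recovers the fluctuation-free Δχ² value; T0 = 25 is an arbitrary illustrative choice, not a number from any of the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
T0 = 25.0        # toy Asimov (fluctuation-free) Delta-chi^2 value
n_exp = 200_000  # number of simulated experiments

# Assumed Gaussian limit: T ~ N(T0, 2*sqrt(T0))
T = rng.normal(loc=T0, scale=2.0 * np.sqrt(T0), size=n_exp)

# The median of T sits at T0, so the median sensitivity agrees with the
# standard sqrt(Delta-chi^2) figure quoted from fluctuation-free data sets.
median_sigma = np.sqrt(np.median(T))
print(f"median sensitivity ~ {median_sigma:.2f} sigma, standard: {np.sqrt(T0):.2f} sigma")
```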

  4. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value; it is called partial because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of losses of crew life (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
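The PRCC computation described above can be sketched as follows: rank-transform the inputs and the output, remove the linear-in-ranks effect of all other inputs from both the input of interest and the output, and correlate the residuals. The three-input toy model is hypothetical, not an IMM condition set.

```python
import numpy as np

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each column of X with y."""
    # Rank-transform (double argsort yields 0..n-1 ranks; inputs have no ties).
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    n, k = R.shape
    out = np.empty(k)
    for j in range(k):
        # Adjust for the linear effects of all the other inputs.
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Hypothetical nonlinear model: strong effect of x0, weak effect of x1,
# and x2 has no effect on the output at all.
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 3))
y = np.exp(2.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=2000)
print(np.round(prcc(X, y), 2))
```

Because ranks are used, the strongly nonlinear (but monotone) exponential term does not break the estimate, which is the reason for preferring PRCC over plain regression coefficients on nonlinear models.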

  5. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2017-01-01

    to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...

  6. Modeling the Experience of Emotion

    OpenAIRE

    Broekens, Joost

    2009-01-01

    Affective computing has proven to be a viable field of research, comprising a large number of multidisciplinary researchers and resulting in widely published work. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion, and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory and models of emergent...

  7. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  8. Modelling flow through unsaturated zones: Sensitivity to unsaturated ...

    Indian Academy of Sciences (India)

    A numerical model to simulate moisture flow through unsaturated zones is developed using the finite element method, and is validated by comparing the model results with those available in the literature. The sensitivities of different processes such as gravity drainage and infiltration to the variations in the unsaturated soil ...

  9. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
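The approach the tutorial surveys can be sketched minimally: run the simulation at the points of a two-level factorial design and fit a regression metamodel to the input/output pairs. The toy "simulation model" and its coefficients below are invented for illustration.

```python
import numpy as np
from itertools import product

# Hypothetical simulation model, treated as a black box.
def simulate(x1, x2):
    return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.5 * x1 * x2

# 2^2 full factorial design in coded units (-1, +1).
design = np.array(list(product([-1.0, 1.0], repeat=2)))
y = np.array([simulate(x1, x2) for x1, x2 in design])

# Regression metamodel: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2
Xmat = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                        design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
print(np.round(coef, 3))  # recovers [3.0, 2.0, -1.5, 0.5]
```

The fitted coefficients are the main effects and interaction of the metamodel; ranking their magnitudes is the "what-if" sensitivity analysis.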

  10. Modelling flow through unsaturated zones: Sensitivity to unsaturated ...

    Indian Academy of Sciences (India)


    MS received 13 October 1997; revised 20 November 2001. A numerical model to simulate moisture flow through unsaturated zones is developed using the finite element method, and is validated by comparing the model results with those available in the literature. The sensitivities of different processes such as ...

  11. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
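The variance-based indices mentioned in the abstract can be estimated with a pick-freeze (Saltelli-style) scheme. The sketch below uses a hypothetical additive three-parameter function in place of the sea ice model, so the true first-order indices are known analytically (16/21, 4/21, 1/21).

```python
import numpy as np

def first_order_sobol(f, n, k, rng):
    """Pick-freeze estimate of first-order Sobol indices of f: [0,1]^k -> R."""
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only the i-th input
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Hypothetical additive stand-in for a sea ice diagnostic.
f = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]
S = first_order_sobol(f, 100_000, 3, np.random.default_rng(7))
print(np.round(S, 2))
```

Unlike one-at-a-time perturbations, the variance-based indices account for non-linear and non-additive effects, which is the property the abstract highlights.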

  12. Sensitive analysis of a finite element model of orthogonal cutting

    Science.gov (United States)

    Brocail, J.; Watremez, M.; Dubar, L.

    2011-01-01

    This paper presents a two-dimensional finite element model of orthogonal cutting. The proposed model has been developed with Abaqus/Explicit software. An Arbitrary Lagrangian-Eulerian (ALE) formulation is used to predict chip formation, temperature, chip-tool contact length, chip thickness, and cutting forces. The numerical model of orthogonal cutting is validated by comparing these process variables to the experimental and numerical results obtained by Filice et al. [1], and is considered reliable enough for qualitative analysis of the input parameters related to the cutting process and friction models. A sensitivity analysis is conducted with the finite element model on the main input parameters (coefficients of the Johnson-Cook law and contact parameters), using two levels for each factor. This analysis identified the significant parameters and the margins of their influence.

  13. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated inputs often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of the analytic method for general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
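The first-order (delta-method) version of such an analysis makes the role of input correlations explicit: for y = f(x), Var(y) ≈ gᵀΣg, with g the gradient at the mean and Σ the input covariance, so correlations enter only through the off-diagonal terms of Σ. A two-input sketch with invented numbers:

```python
import numpy as np

def propagate(grad, cov):
    """First-order output variance from the gradient at the mean and input covariance."""
    grad = np.asarray(grad)
    return float(grad @ cov @ grad)

g = np.array([2.0, -1.0])   # hypothetical gradient of the model at the mean
sd = np.array([0.3, 0.4])   # input standard deviations

cov_indep = np.diag(sd ** 2)
rho = 0.8                   # correlation between the two inputs
cov_corr = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                     [rho * sd[0] * sd[1], sd[1] ** 2]])

print(round(propagate(g, cov_indep), 3))  # 0.52: independent inputs
print(round(propagate(g, cov_corr), 3))   # 0.136: positive correlation partly cancels
```

With opposite-signed gradient components, a positive input correlation sharply reduces the output variance, which is why ignoring correlations can badly misstate both uncertainty and sensitivity.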

  14. Improvement of reflood model in RELAP5 code based on sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail: yanhuay@sjtu.edu.cn

    2016-07-15

    Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of a nuclear reactor during a loss of coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks for current system code development. RELAP5, a widely used system code, can simulate this process but with limited accuracy, especially for low inlet flow rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared to the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are identified as the models most influential on the results of interest. Studies and discussions are then focused on these sensitive models and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data is obtained for both cladding temperature and quench time.

  15. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
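Sensitivity analyses of compartmental models typically report the normalized forward sensitivity index S_p = (p/R0)·∂R0/∂p for each parameter p. The sketch below uses a generic SIR-style R0 expression and invented parameter values (not the paper's five-compartment model), with the derivative taken by central finite differences.

```python
# Normalized forward sensitivity index of a basic reproduction number.
# The R0 expression and parameter values are illustrative placeholders.
def r0(beta, gamma, mu):
    return beta / (gamma + mu)

params = {"beta": 0.35, "gamma": 0.1, "mu": 0.02}

def sensitivity_index(name, h=1e-6):
    up, dn = dict(params), dict(params)
    up[name] += h
    dn[name] -= h
    dr0 = (r0(**up) - r0(**dn)) / (2.0 * h)   # central difference dR0/dp
    return params[name] / r0(**params) * dr0  # normalize: (p/R0) * dR0/dp

for p in params:
    print(p, round(sensitivity_index(p), 3))
```

An index of +1 for beta means a 10% rise in the contact rate raises R0 by 10%; the ranking of the absolute indices is what identifies the control targets.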

  16. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Science.gov (United States)

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  17. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Directory of Open Access Journals (Sweden)

    Sarah A Gerson

    Full Text Available In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  18. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    Science.gov (United States)

    Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  19. The carbohydrate sensitive rat as a model of obesity.

    Directory of Open Access Journals (Sweden)

    Nachiket A Nadkarni

    Full Text Available BACKGROUND: Sensitivity to obesity is highly variable in humans, and rats fed a high fat diet (HFD) are used as a model of this inhomogeneity. Energy expenditure components (basal metabolism, thermic effect of feeding, activity) and variations in substrate partitioning are possible factors underlying the variability. Unfortunately, in rats as in humans, results have often been inconclusive and measurements usually made after obesity onset, obscuring if metabolism was a cause or consequence. Additionally, the role of high carbohydrate diet (HCD) has seldom been studied. METHODOLOGY/FINDINGS: Rats (n=24) were fed for 3 weeks on HCD and then 3 weeks on HFD. Body composition was tracked by MRI and compared to energy expenditure components measured prior to obesity. RESULTS: (1) under HFD, as expected, by adiposity rats were variable enough to be separable into relatively fat resistant (FR) and sensitive (FS) groups; (2) under HCD, and again by adiposity, rats were also variable enough to be separable into carbohydrate resistant (CR) and sensitive (CS) groups, the normal body weight of CS rats hiding viscerally-biased fat accumulation; (3) HCD adiposity sensitivity was not related to that under HFD, and both HCD and HFD adiposity sensitivities were not related to energy expenditure components (BMR, TEF, activity cost); and (4) only carbohydrate to fat partitioning in response to an HCD test meal was related to HCD-induced adiposity. CONCLUSIONS/SIGNIFICANCE: The rat model of human obesity is based on substantial variance in adiposity gains under HFD (FR/FS model). Here, since we also found this phenomenon under HCD, where it was also linked to an identifiable metabolic difference, we should consider the existence of another model: the carbohydrate resistant (CR) or sensitive (CS) rat. This new model is potentially complementary to the FR/FS model due to relatively greater visceral fat accumulation on a low fat high carbohydrate diet.

  20. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were consistently robust enough to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject compared to between-subject variability was low, and there was substantial strength of agreement between repeated induction-sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small area' (1st quartile) and 'large area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.

  1. THE MEDIUM-SENSITIVE EXPERIENCE AND THE PARADIGMATIC EXPERIENCE OF THE GROTESQUE, "UNNATURAL" OR "MONSTROUS"

    NARCIS (Netherlands)

    van den Oever, A. M. A.

    To create the conceptual space to analyze the evident and structural similarities between the art experience, the (new) media experience, and the media art experience, the author approaches the "medium" as "techniques" which "make [the seen] strange." A disruption of the perceptual process, a

  2. The Medium-Sensitive Experience and the Paradigmatic Experience of the Grotesque, 'Unnatural', or 'Monstrous'

    NARCIS (Netherlands)

    van den Oever, A.M.A.

    2013-01-01

    To create the conceptual space to analyze the evident and structural similarities between the art experience, the (new) media experience, and the media art experience, the author approaches the “medium” as “techniques” which “make [the seen] strange.” A disruption of the perceptual process, a

  3. Research on quasi-dynamic calibration model of plastic sensitive element based on neural networks

    Science.gov (United States)

    Wang, Fang; Kong, Deren; Yang, Lixia; Zhang, Zouzou

    2017-08-01

    Quasi-dynamic calibration accuracy of the plastic sensitive element depends on the accuracy of the fitted model between pressure and deformation. Exploiting the excellent nonlinear mapping ability of an RBF (Radial Basis Function) neural network, this paper establishes a calibration model that takes the peak pressure as input and the deformation of the plastic sensitive element as output. Calibration experiments on a batch of copper cylinders were carried out on a quasi-dynamic pressure calibration device over the range 200 MPa to 700 MPa, with data acquired by a standard pressure monitoring system. The network was trained on the quasi-dynamic calibration data using the MATLAB neural network toolbox. On the testing samples, the prediction accuracy of the neural network model is compared with that of an exponential fitting model and a second-order polynomial fitting model. The results show that the neural network predictions are closest to the testing samples, with an accuracy better than 0.5%: one order of magnitude better than the second-order polynomial fitting model and two orders of magnitude better than the exponential fitting model. The quasi-dynamic calibration model between peak pressure and deformation of the plastic sensitive element, based on a neural network, provides an important basis for constructing higher-accuracy quasi-dynamic calibration tables.
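The core of such a calibration model, an RBF expansion mapping peak pressure to deformation with weights fitted by linear least squares, can be sketched as follows. The data, centers, and basis width below are synthetic placeholders, not the copper-cylinder measurements or the MATLAB-trained network from the paper.

```python
import numpy as np

# Gaussian RBF design matrix: one basis function per center.
def rbf_design(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Synthetic calibration data: deformation (mm) vs peak pressure (MPa).
pressure = np.linspace(200.0, 700.0, 26)
true_deformation = 1.2 + 0.004 * pressure + 1e-6 * pressure ** 2
rng = np.random.default_rng(3)
measured = true_deformation + rng.normal(0.0, 0.005, size=pressure.size)

# Fit the linear-in-weights RBF model by least squares.
centers = np.linspace(200.0, 700.0, 8)
Phi = rbf_design(pressure, centers, width=120.0)
weights, *_ = np.linalg.lstsq(Phi, measured, rcond=None)

predict = lambda x: rbf_design(np.atleast_1d(x), centers, 120.0) @ weights
print(float(predict(450.0)))  # interpolated deformation at 450 MPa
```

In practice the centers and widths would be tuned (or the network trained with a toolbox, as in the paper), but the linear-in-weights structure of the calibration curve is the same.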

  4. Active drumming experience increases infants' sensitivity to audiovisual synchrony during observed drumming actions

    NARCIS (Netherlands)

    Gerson, S.A.; Schiavio, A.A.R.; Timmers, R.; Hunnius, S.

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this

  5. A sensitivity driven meta-model optimisation tool for hydrological models

    Science.gov (United States)

    Oppel, Henning; Schumann, Andreas

    2017-04-01

    The calibration of rainfall-runoff models containing a high number of parameters can readily be done with different calibration methods and algorithms: Monte Carlo methods, gradient-based search algorithms and others are well known and established in the hydrological sciences. Thus, calibrating a model for a desired application is not a challenging task, but retaining regional comparability and process integrity is, due to the equifinality problem, a prevailing topic. This set of issues is mainly a result of the overdetermination caused by the high number of parameters in rainfall-runoff models, where different parameters affect the same facet of model performance (i.e. runoff volume, variance and timing). In this study a calibration strategy is presented which considers model sensitivity as well as parameter interaction and different criteria of model performance. First, a range of valid values for each model parameter was defined and the individual effect on model performance within the defined parameter range was evaluated. Using the knowledge gained, a meta-model lumping different parameters that affect the same facet of model performance was established. Hereafter, the parsimonious meta-model, in which each parameter is assigned to a nearly disjoint facet of model performance, is optimized. By retransformation of the lumped parameters to the original model, a parametrisation for the original model is obtained. An application of this routine to a set of watersheds in the eastern part of Germany demonstrates its benefits. Results of the meta-parametrised model are compared to parametrisations obtained from common calibration routines in a validation study and a process-oriented numerical experiment.

  6. The doubled CO2 climate and the sensitivity of the modeled hydrologic cycle

    Science.gov (United States)

    Rind, D.

    1988-01-01

    Four doubled CO2 experiments with the GISS general circulation model are compared to investigate the consistency of changes in water availability over the United States. The experiments compare the influence of model sensitivity, model resolution, and the sea-surface temperature gradient. The results show that the general mid-latitude drying over land is dependent upon the degree of mid-latitude eddy energy decrease, and thus the degree of high-latitude temperature change amplification. There is a general tendency in the experiments for the northern and western United States to become wetter, while the southern and eastern portions dry. However, there is much variability from run to run, with different regions showing different degrees of sensitivity to the parameters tested. The results for the western United States depend most on model resolution; those for the central United States, on the sea-surface temperature gradient and the degree of mid-latitude ocean warming; and those for the eastern United States, on model sensitivity. The changes in particular seasons depend on changes in other seasons, and will therefore be sensitive to the realism of the ground hydrology parameterization.

  7. Numerical experiments modelling turbulent flows

    Science.gov (United States)

    Trefilík, Jiří; Kozel, Karel; Příhoda, Jaromír

    2014-03-01

    The work investigates the possibilities of modelling transonic flows, mainly in external aerodynamics. New results are presented and compared with reference data and previously achieved results. For the turbulent flow simulations two modifications of the basic k - ω model are employed: SST and TNT. The numerical solution was obtained using the MacCormack scheme on structured non-orthogonal grids. Artificial dissipation was added to improve numerical stability.

  8. Numerical experiments modelling turbulent flows

    Directory of Open Access Journals (Sweden)

    Trefilík Jiří

    2014-03-01

    Full Text Available The work investigates the possibilities of modelling transonic flows, mainly in external aerodynamics. New results are presented and compared with reference data and previously achieved results. For the turbulent flow simulations two modifications of the basic k – ω model are employed: SST and TNT. The numerical solution was obtained using the MacCormack scheme on structured non-orthogonal grids. Artificial dissipation was added to improve numerical stability.

  9. Sensitivity Experiments on the Impact of Vb-Cyclones to Ocean Temperature and Soil Moisture Changes

    Science.gov (United States)

    Messmer, Martina; José Gómez-Navarro, Juan; Raible, Christoph C.

    2016-04-01

Cyclones that develop over the western Mediterranean and move northeastward are a major source of extreme weather and are responsible for heavy precipitation over Central Europe. Gaining insight into these processes is crucial for improving projections of changes in the frequency and severity of these so-called Vb-cyclones under future climate change scenarios. This study explores the impact of climate change on Vb-events through a number of idealized sensitivity experiments that assess the role of sea surface temperature (SST) and soil moisture, and their contribution to the moisture content of the atmosphere, in recent Vb-events. To achieve this, we use the Weather Research and Forecasting model (WRF) to dynamically downscale the ERA-Interim reanalysis, simulating five prominent Vb-events that led to extreme precipitation in Central Europe. WRF allows simulating a physically consistent response of Vb-cyclones to different SSTs and soil water volumes. The changes in SSTs are designed to follow the temperature changes expected in a future climate scenario; additionally, the corresponding uncertainty in such projections is considered. Results indicate that although an increase in Mediterranean SSTs leads to increased precipitation over Central Europe, e.g. 136% greater precipitation in the +5 K experiment compared to the control simulation, no change is found in the high-impact region of Vb-events on the northern side of the Alps. This counter-intuitive behavior seems to be related to the increase of atmospheric instability over the artificially heated SSTs: precipitation increases notably over the eastern Adriatic coast in response to warmer SSTs, the first location where the air is lifted, and Vb-events therefore become less destructive in their high-impact region due to the large upstream loss of atmospheric water. Further experiments demonstrate that changing the SSTs of the Atlantic invokes almost no reaction (around 1% change) with respect to

  10. Sensitivity of wildlife habitat models to uncertainties in GIS data

    Science.gov (United States)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. The uncertainties and methods described in the paper have general relevance for many GIS applications.

  11. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight into design and modeling studies and into performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described, and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed.

  12. Is Convection Sensitive to Model Vertical Resolution and Why?

    Science.gov (United States)

    Xie, S.; Lin, W.; Zhang, G. J.

    2017-12-01

Model sensitivity to horizontal resolution has been studied extensively, whereas model sensitivity to vertical resolution is much less explored. In this study, we use the US Department of Energy (DOE)'s Accelerated Climate Modeling for Energy (ACME) atmosphere model to examine the sensitivity of clouds and precipitation to increased vertical resolution. We attempt to understand what causes the change in behavior (if any) of the convective processes represented by the unified shallow and turbulent scheme CLUBB (Cloud Layers Unified by Binormals) and the Zhang-McFarlane deep convection scheme in ACME. A short-term hindcast approach is used to isolate parameterization issues from the large-scale circulation. The analysis emphasizes how the change in vertical resolution affects precipitation partitioning between the convective and grid scales, as well as the vertical profiles of convection-related quantities such as temperature, humidity, clouds, convective heating and drying, and entrainment and detrainment. The goal is to provide physical insight into potential issues with model convective processes associated with increased model vertical resolution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  13. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

As part of an analysis of the loss-of-crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the risk that debris from an explosion of the launch vehicle would strike the crew module. The model consisted of a debris catalog describing the number, size, and imparted velocity of each piece of debris, a method to compute the trajectories of the debris, and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort, and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
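The abstract describes a Monte Carlo point estimate of strike probability followed by a response-surface fit. The sketch below illustrates only the point-estimate step; every distribution, geometric parameter, and threshold in it is a hypothetical stand-in for the paper's debris catalog and trajectory model, chosen solely so the qualitative dependence on abort delay time is visible.

```python
import numpy as np

def strike_probability(n_pieces=100_000, abort_delay=2.0, seed=0):
    """Toy Monte Carlo point estimate of a debris strike probability.

    All numbers below (speed range, separation rate, flight time,
    angular window) are hypothetical stand-ins for the debris catalog
    and trajectory model described in the paper.
    """
    rng = np.random.default_rng(seed)
    speed = rng.uniform(20.0, 200.0, n_pieces)   # imparted speed, m/s
    angle = rng.uniform(0.0, 2 * np.pi, n_pieces)
    # Crew-module separation distance grows with the abort delay time.
    module_dist = 50.0 * abort_delay
    # A piece is counted as a strike threat if it can cover the
    # separation distance and is aimed near the escape direction.
    reaches = speed * 5.0 > module_dist          # ~5 s of flight, no drag
    aimed = np.abs(np.cos(angle)) > 0.99
    return float(np.mean(reaches & aimed))

p_short = strike_probability(abort_delay=1.0)
p_long = strike_probability(abort_delay=5.0)   # more separation, lower risk
```

Sweeping such a function over its inputs is the kind of data a response surface model would then be fitted to.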

  14. Early experience shapes amygdala sensitivity to race: an international adoption design.

    Science.gov (United States)

    Telzer, Eva H; Flannery, Jessica; Shapiro, Mor; Humphreys, Kathryn L; Goff, Bonnie; Gabard-Durman, Laurel; Gee, Dylan D; Tottenham, Nim

    2013-08-14

In the current study, we investigated how complete infant deprivation of exposure to out-group races impacts behavioral and neural sensitivity to race. Although monkey models have successfully achieved complete face deprivation in early life, this is typically impossible in human studies. We overcame this barrier by examining youths with exclusively homogeneous racial experience in early postnatal development: youths raised in orphanage care in either East Asia or Eastern Europe as infants and later adopted by American families. The use of international adoption bolsters confidence in infant exposure to race (e.g., to solely Asian faces or European faces). Participants completed an emotional matching task during functional MRI. Our findings show that deprivation of exposure to other-race faces in infancy disrupts recognition of emotion and results in heightened amygdala response to out-group faces. Greater early deprivation (i.e., later age of adoption) is associated with greater biases to race. These data demonstrate how early social deprivation shapes amygdala function later in life and provide support that early postnatal development may represent a sensitive period for race perception.

  15. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most recent applications of GSA use either regression-based methods, which require close-to-linear relationships between the model outputs and model factors, or screening methods, which yield only qualitative results. However, due to the characteristics of membrane bioreactors (MBRs) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system, including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationships between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors with the highest variance contributions are identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the important role played by a modelling approach for MBRs that accounts simultaneously for biological and physical processes.
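The study applies Extended-FAST; as a minimal illustration of the variance-based idea it builds on, the sketch below estimates first-order Sobol' indices with a pick-freeze Monte Carlo estimator on the Ishigami function, a standard non-linear, non-additive GSA benchmark (not the ASM2d model itself).

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    # Standard GSA benchmark: strongly non-linear and non-additive,
    # qualitatively like the interactions reported for the MBR model.
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

def first_order_indices(model, d, n=200_000, seed=0):
    """First-order Sobol' indices via the pick-freeze estimator,
    assuming independent inputs uniform on [-pi, pi]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    y_a = model(A)
    s = np.empty(d)
    for i in range(d):
        AB_i = B.copy()
        AB_i[:, i] = A[:, i]        # "freeze" factor i, resample the rest
        s[i] = np.cov(y_a, model(AB_i))[0, 1] / y_a.var()
    return s

S = first_order_indices(ishigami, d=3)
# Analytic values are approximately [0.31, 0.44, 0.00]; the shortfall of
# their sum from 1 is the signature of the x1-x3 interaction term.
```

A first-order sum well below the total variance, as here, is exactly the non-additivity that makes variance-based methods preferable to regression-based GSA.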

  16. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, several promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  17. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

The global financial crisis brought a serious, "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people, and the global economy has become more complex than ever before. Mark Buchanan [12] argued that agent-based computer models could help prevent another financial crisis, a view that has been particularly influential. For these reasons, a culture-sensitive agent for modelling financial markets has become important. The aim of this article is therefore to establish a culture-sensitive agent and to forecast the process of change in herding behavior in the financial market. We base our study on Kirman's ant model [4,5] and Hofstede's national culture framework [11] to establish our culture-sensitive agent-based model. Kirman's ant model is well known and describes herding behavior in financial markets arising from investors' expectations about the future. Hofstede's Culture's Consequences study surveyed IBM staff in 72 countries to characterize cultural differences. This paper focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, to create a culture-sensitive agent and predict the process of change in herding behavior in the financial market. This study should help explain herding behavior in terms of cultural factors, and provide researchers with a clearer understanding of how people's culturally shaped herding beliefs relate to their financial market strategies.
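The cultural extension is the paper's contribution; underneath it sits Kirman's recruitment dynamics, which can be sketched in one common discretization as follows. The parameter values (eps, mu, population size) are illustrative, chosen so that eps * (N - 1) / mu < 1, the regime in which the population herds around one opinion for long stretches.

```python
import random

def kirman(n_agents=100, eps=0.0005, mu=0.1, steps=50_000, seed=42):
    """One common discretization of Kirman's two-state herding model.

    At each step one agent is drawn at random; it switches opinion with
    probability eps (spontaneous change) plus mu times the share of
    agents currently holding the other opinion (the recruitment term).
    Returns the trajectory of the number of agents in state 1.
    """
    rng = random.Random(seed)
    k = n_agents // 2                       # agents currently in state 1
    path = []
    for _ in range(steps):
        if rng.random() < k / n_agents:     # drew a state-1 agent
            if rng.random() < eps + mu * (n_agents - k) / (n_agents - 1):
                k -= 1
        else:                               # drew a state-0 agent
            if rng.random() < eps + mu * k / (n_agents - 1):
                k += 1
        path.append(k)
    return path

path = kirman()
```

A culture-sensitive variant in the article's spirit would make eps and mu functions of a Hofstede dimension such as individualism versus collectivism.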

  18. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  19. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a deterministic system of first order ordinary differential equations. In this paper, we considered a sensitivity analysis of studying the qualitative ...

  20. A sensitive venous bleeding model in haemophilia A mice

    DEFF Research Database (Denmark)

    Pastoft, Anne Engedahl; Lykkesfeldt, Jens; Ezban, M.

    2012-01-01

    for evaluation of pro-coagulant compounds for treatment of haemophilia. Interestingly, the vena saphena model proved to be sensitive towards FVIII in plasma levels that approach the levels preventing bleeding in haemophilia patients, and may, thus, in particular be valuable for testing of new long...

  1. An approach to measure parameter sensitivity in watershed hydrological modelling

    Science.gov (United States)

Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the

  2. A model for perception-based identification of sensitive skin

    NARCIS (Netherlands)

    Richters, R.J.H.; Uzunbajakava, N.E.; Hendriks, J.C.; Bikker, J.W.; Erp, P.E.J. van; Kerkhof, P.C.M. van de

    2017-01-01

BACKGROUND: With the high prevalence of sensitive skin (SS) and the lack of strong evidence on pathomechanisms, of consensus on associated symptoms, of proof of the existence of 'general' SS, and of tools to recruit subjects, this topic attracts increasing research attention. OBJECTIVE: To create a model for selecting

  3. Culturally Sensitive Dementia Caregiving Models and Clinical Practice

    Science.gov (United States)

    Daire, Andrew P.; Mitcham-Smith, Michelle

    2006-01-01

    Family caregiving for individuals with dementia is an increasingly complex issue that affects the caregivers' and care recipients' physical, mental, and emotional health. This article presents 3 key culturally sensitive caregiver models along with clinical interventions relevant for mental health counseling professionals.

  4. Global sensitivity analysis of GEOS-Chem modeled ozone and hydrogen oxides during the INTEX campaigns

    Science.gov (United States)

    Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu; Ren, Xinrong

    2018-02-01

    Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.

  5. Global sensitivity analysis of GEOS-Chem modeled ozone and hydrogen oxides during the INTEX campaigns

    Directory of Open Access Journals (Sweden)

    K. E. Christian

    2018-02-01

Full Text Available Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.

  6. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya

    2017-10-03

Reliable forecasting of wind power generation is crucial for cost-optimal control of electricity generation with respect to electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations into numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters, taking into account the time-correlated data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period March 1 to May 31, 2016.
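The paper's specific SDE models and approximate-likelihood method are not reproduced here; as a generic illustration of likelihood-based parameter inference for a time-correlated SDE, the sketch below simulates an Ornstein-Uhlenbeck process (a common building block for forecast-error modelling) exactly and recovers its mean-reversion rate from the implied AR(1) likelihood.

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, seed=1):
    """Exact simulation of the Ornstein-Uhlenbeck SDE
    dX = -theta * X dt + sigma dW, an illustrative choice for
    time-correlated forecast errors (the paper's models differ in detail).
    """
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)                      # exact AR(1) coefficient
    sd = sigma * np.sqrt((1.0 - a ** 2) / (2.0 * theta))
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + sd * rng.standard_normal()
    return x

def fit_ou_theta(x, dt):
    """Maximum-likelihood estimate of theta: conditional on x[t-1],
    x[t] is Gaussian with mean a * x[t-1], so the MLE of a is the
    least-squares slope of x[t] on x[t-1]."""
    a_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    return -np.log(a_hat) / dt

x = simulate_ou(theta=0.5, sigma=0.2, dt=0.1, n=20_000)
theta_hat = fit_ou_theta(x, dt=0.1)   # should land near the true 0.5
```

Refitting on resampled or perturbed trajectories would then give the kind of parameter validity and sensitivity check the abstract describes.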

  7. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    2008-02-01

Full Text Available Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  8. Sensitivity of corneal biomechanical and optical behavior to material parameters using design of experiments method.

    Science.gov (United States)

    Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung

    2018-02-01

    The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.
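The design-of-experiments approach in the abstract can be illustrated with a two-level full factorial design on a toy response. The factor names, ranges, and response function below are hypothetical stand-ins for the finite element model, constructed so that the matrix-stiffness term dominates, mirroring the reported result.

```python
import itertools
import numpy as np

def toy_response(matrix_k, fiber_k, fiber_nl):
    # Hypothetical stand-in for the FE model's apical displacement:
    # displacement falls as the stiffness terms rise; the 0.5/0.3/0.2
    # weights are invented so that matrix stiffness dominates.
    return 1.0 / (0.5 * matrix_k + 0.3 * fiber_k + 0.2 * fiber_nl)

# Two-level full factorial design in coded units (-1 = low, +1 = high),
# mapped onto hypothetical parameter ranges.
levels = {"matrix_k": (0.5, 1.5), "fiber_k": (0.5, 1.5), "fiber_nl": (0.5, 1.5)}
names = list(levels)
coded, runs = [], []
for signs in itertools.product((-1, 1), repeat=len(names)):
    vals = [levels[nm][(s + 1) // 2] for nm, s in zip(names, signs)]
    coded.append(signs)
    runs.append(toy_response(*vals))
coded, runs = np.array(coded), np.array(runs)

# Main effect of each factor: mean response at +1 minus mean at -1.
effects = {nm: runs[coded[:, i] == 1].mean() - runs[coded[:, i] == -1].mean()
           for i, nm in enumerate(names)}
```

Ranking the absolute main effects is the basic DOE screening step; the real study adds interactions and multiple optical outputs on top of this.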

  9. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

companies automatically get a higher price when offering an experience setting to the customer, illustrated by the coffee example. Organizations that offer experiences still have an advantage, but when an increasing number of organizations enter the experience economy the competition naturally gets tougher......Through the last decade the experience economy has found solid ground and manifested itself as a parameter where businesses and organizations can differentiate themselves from competitors. The fundamental premise is the one found in Pine & Gilmore's model from 1999 of 'the progression of economic value', where...... produced, designed or staged experience that gains the most profit or creates return on investment. It becomes more obvious that other parameters can in the future be a vital part of the experience economy, and one of these is business model innovation. Business model innovation is about continuous...

  10. Sensitivity Analysis of a process based erosion model using FAST

    Science.gov (United States)

    Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin

    2015-04-01

Erosion, sediment redistribution and related particulate transport are severe problems in agro-ecosystems with highly erodible loess soils. They are controlled by various factors, for example rainfall intensity, topography, initial wetness conditions, spatial patterns of soil hydraulic parameters, land use and tillage practice. The interplay between those factors is not well understood. A number of models have been developed to capture these complex interactions and to estimate the amount of sediment that will be removed, transported and accumulated. In order to use physically based models to provide insight into the physical system under study, it is necessary to understand the interactions of parameters and processes in the model domain. Sensitivity analyses give insight into the relative importance of model parameters, which in addition is useful for judging where the greatest efforts have to be spent in acquiring or calibrating input parameters. The objective of this study was to determine the sensitivity of the erosion-related parameters in the CATFLOW model. We analysed simulations from the Weiherbach catchment, where good matches of observed hydrological response and erosion dynamics had been obtained in earlier studies. The Weiherbach catchment is located in an intensively cultivated loess region in southwest Germany, and due to the hilly landscape and the highly erodible loess soils, erosion is a severe environmental problem there. CATFLOW is a process-based hydrology and erosion model that can operate on catchment and hillslope scales. Soil water dynamics are described by the Richards equation, including effective approaches for preferential flow. Evapotranspiration is simulated using an approach based on the Penman-Monteith equation. The model simulates overland flow using the diffusion wave equation. Soil detachment is related to the attacking forces of rainfall and overland flow, and the erosion resistance of the soil. Sediment transport capacity and sediment

  11. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  12. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, P.M.; Madsen, Kristoffer H; Lund, T.E.

There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVMs), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus...... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generating global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. We...

  13. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove to be valuable tools for studying the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare the predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor-intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate, and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  14. Organic polyaromatic hydrocarbons as sensitizing model dyes for semiconductor nanoparticles.

    Science.gov (United States)

    Zhang, Yongyi; Galoppini, Elena

    2010-04-26

    The study of interfacial charge-transfer processes (sensitization) of a dye bound to large-bandgap nanostructured metal oxide semiconductors, including TiO(2), ZnO, and SnO(2), is continuing to attract interest in various areas of renewable energy, especially for the development of dye-sensitized solar cells (DSSCs). The scope of this Review is to describe how selected model sensitizers prepared from organic polyaromatic hydrocarbons have been used over the past 15 years to elucidate, through a variety of techniques, fundamental aspects of heterogeneous charge transfer at the surface of a semiconductor. This Review does not focus on the most recent or efficient dyes, but rather on how model dyes prepared from aromatic hydrocarbons have been used, over time, in key fundamental studies of heterogeneous charge transfer. In particular, we describe model chromophores prepared from anthracene, pyrene, perylene, and azulene. As the level of complexity of the model dye-bridge-anchor group compounds has increased, the understanding of some aspects of very complex charge transfer events has improved. The knowledge acquired from the study of the described model dyes is of importance not only for DSSC development but also to other fields of science for which electronic processes at the molecule/semiconductor interface are relevant.

  15. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
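    The influence of a default prior is easiest to see in a model far simpler than a full BSEM; the following is a minimal sketch, assuming a conjugate normal-mean model and hypothetical prior settings (these are not the priors or models from the article):

```python
import numpy as np

# Conjugate normal-mean model: y_i ~ N(mu, sigma^2), prior mu ~ N(m0, s0^2).
# With a small sample, the posterior visibly depends on the prior choice.
def posterior_mean_sd(y, sigma, m0, s0):
    n = len(y)
    prec = 1.0 / s0**2 + n / sigma**2            # posterior precision
    mean = (m0 / s0**2 + y.sum() / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=10)                # small sample: priors matter

# Three default-prior choices (values hypothetical)
priors = {"vague": (0.0, 100.0), "unit-info": (0.0, 1.0), "tight-wrong": (-2.0, 0.1)}
for name, (m0, s0) in priors.items():
    mean, sd = posterior_mean_sd(y, 1.0, m0, s0)
    print(f"{name:12s} posterior mean = {mean:+.3f}, sd = {sd:.3f}")
```

    Re-running such a loop over prior settings, and checking whether substantive conclusions change, is the essence of the sensitivity analysis the authors recommend.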

  16. The PIENU experiment at TRIUMF : a sensitive probe for new physics

    International Nuclear Information System (INIS)

    Malbrunot, Chloe; Bryman, D A; Hurst, C; Aguilar-Arevalo, A A; Aoki, M; Ito, N; Kuno, Y; Blecher, M; Britton, D I; Chen, S; Ding, M; Comfort, J; Doornbos, J; Doria, L; Gumplinger, P; Kurchaninov, L; Hussein, A; Igarashi, Y; Kettell, S; Littenberg, L

    2011-01-01

    Study of rare decays is an important approach for exploring physics beyond the Standard Model (SM). The branching ratio of the helicity-suppressed pion decays, R = Γ(π⁺ → e⁺ν_e + π⁺ → e⁺ν_e γ) / Γ(π⁺ → μ⁺ν_μ + π⁺ → μ⁺ν_μ γ), is one of the most accurately calculated decay processes involving hadrons and has so far provided the most stringent test of the hypothesis of electron-muon universality in weak interactions. The branching ratio has been calculated in the SM to better than 0.01% accuracy to be R_SM = 1.2353(1) × 10⁻⁴. The PIENU experiment at TRIUMF, which started taking physics data in September 2009, aims to reach an accuracy five times better than that of previous experiments, so as to confront the theoretical calculation at the level of ±0.1%. If a deviation from R_SM is found, 'new physics' beyond the SM, at potentially very high mass scales (up to 1000 TeV), could be revealed. Alternatively, sensitive constraints can be obtained on hypotheses involving pseudoscalar or scalar interactions. So far, 4 million π⁺ → e⁺ν_e events have been accumulated by PIENU. This paper will outline the physics motivations, describe the apparatus and techniques designed to achieve high precision, and present the latest results.

  17. Stochastic sensitivity of a bistable energy model for visual perception

    Science.gov (United States)

    Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise, the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for the qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated for different noise intensities and degrees of system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
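    Noise-induced switching in a generic bistable energy model can be sketched numerically with a standard double-well potential (this is an illustrative stand-in, not necessarily the authors' exact formulation); higher noise intensity produces more transitions between the two percepts:

```python
import numpy as np

# Overdamped dynamics in a double-well energy V(x) = x^4/4 - x^2/2,
# integrated with Euler-Maruyama; the wells at x = -1 and x = +1 stand
# for the two coexisting percepts.
def count_transitions(D, T=1000.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    noise = np.sqrt(2.0 * D * dt) * rng.normal(size=n)
    x, well, transitions = 1.0, 1, 0
    for k in range(n):
        x += (x - x**3) * dt + noise[k]          # drift = -dV/dx
        if well == 1 and x < -0.5:               # crossed into the left well
            well, transitions = -1, transitions + 1
        elif well == -1 and x > 0.5:             # crossed back
            well, transitions = 1, transitions + 1
    return transitions

n_low, n_high = count_transitions(0.05), count_transitions(0.2)
print(n_low, n_high)
```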

  18. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pohl, Andrew Phillip [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jordan, Dirk [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
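    The residual-resampling propagation described above can be sketched with a toy two-step model chain and synthetic residual "bags" (all functional forms and numbers below are invented for illustration, not the report's models or data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical residual bags for each modeling step (synthetic stand-ins for
# the empirical residual distributions described in the abstract).
poa_resid = rng.normal(0.0, 20.0, size=500)      # W/m^2
pwr_resid = rng.normal(0.0, 3.0, size=500)       # W

def poa_model(ghi):                              # toy transposition model
    return 1.1 * ghi

def power_model(poa):                            # toy DC power model
    return 0.15 * poa

def propagate(ghi, n=10000):
    # Resample residuals with replacement and push them through the chain.
    poa = poa_model(ghi) + rng.choice(poa_resid, n)
    return power_model(poa) + rng.choice(pwr_resid, n)

dc = propagate(800.0)
print(round(dc.mean(), 1), round(dc.std(), 1))
```

    The spread of the resulting empirical distribution of `dc` plays the role of the output uncertainty quantified in the study.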

  19. Recursive Model Identification for the Evaluation of Baroreflex Sensitivity.

    Science.gov (United States)

    Le Rolle, Virginie; Beuchée, Alain; Praud, Jean-Paul; Samson, Nathalie; Pladys, Patrick; Hernández, Alfredo I

    2016-12-01

    A method for the recursive identification of physiological models of the cardiovascular baroreflex is proposed and applied to the time-varying analysis of vagal and sympathetic activities. The proposed method was evaluated with data from five newborn lambs, acquired during the injection of vasodilators and vasoconstrictors, and the results show a close match between experimental and simulated signals. The model-based estimates of vagal and sympathetic contributions were consistent with physiological knowledge, and the obtained estimators of vagal and sympathetic activities were compared to traditional markers associated with baroreflex sensitivity. High correlations were observed between traditional markers and model-based indices.

  20. Gradient-Enhanced Triple-Resonance Three-Dimensional NMR Experiments with Improved Sensitivity

    Science.gov (United States)

    Muhandiram, D. R.; Kay, L. E.

    1994-03-01

    The sensitivities of a number of gradient and nongradient versions of triple-resonance experiments are compared by quantitating the signal-to-noise ratios in spectra recorded on Cellulomonas fimi cellulose binding domain (110 amino acids), Xenopus laevis calmodulin (148 amino acids), Myxococcus xanthus protein S (173 amino acids), and a 93-amino acid fragment of protein S. It is shown that it is possible to construct sensitivity-enhanced gradient experiments, with 15N selection achieved via pulsed field gradients, that are as sensitive as their sensitivity-enhanced nongradient counterparts and significantly more sensitive than other gradient approaches. These sequences are very closely related to the family of improved-sensitivity sequences proposed by Rance and co-workers (A. G. Palmer, J. Cavanagh, P. E. Wright, and M. Rance, J. Magn. Reson. 93, 151, 1991). The use of gradients greatly improves the quality of water suppression and reduces both the number of artifacts and the phase-cycling requirements at no cost in sensitivity for the proteins considered in this study.

  1. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis with respect to the microwave power level shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have a significant effect on the process. However, the surface mass and heat transfer coefficients, relative and intrinsic permeability of the gas, and capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity within a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  2. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis with respect to the microwave power level shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have a significant effect on the process. However, the surface mass and heat transfer coefficients, relative and intrinsic permeability of the gas, and capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity within a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
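    The ±20% one-at-a-time perturbation scheme used in both versions of this study can be sketched as follows, with a hypothetical scalar drying-time response standing in for the COMSOL model (function form and parameter names invented for illustration):

```python
# One-at-a-time (OAT) +/-20% perturbation around nominal parameter values,
# reporting the relative change of a scalar response for each perturbation.
def drying_time(p):
    # invented response surface in three stand-in parameters
    return 100.0 / (p["h_mass"] * p["D_gas"]) + 5.0 / p["k_evap"]

nominal = {"h_mass": 1.0, "D_gas": 2.0, "k_evap": 0.5}

def oat_sensitivity(model, nominal, delta=0.2):
    base = model(nominal)
    out = {}
    for name in nominal:
        for sign in (+1, -1):
            p = dict(nominal)
            p[name] = nominal[name] * (1.0 + sign * delta)
            out[(name, sign)] = (model(p) - base) / base   # relative change
    return out

s = oat_sensitivity(drying_time, nominal)
for key, rel in s.items():
    print(key, f"{rel:+.4f}")
```

    Parameters whose perturbations barely move the response (here `k_evap`) would, as in the study, only show an effect under much larger changes such as a 10-fold increase.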

  3. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  4. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum level models for the so-called equation of state (constitutive model for the spherical part of the Cauchy tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  5. Modeling Choice and Valuation in Decision Experiments

    Science.gov (United States)

    Loomes, Graham

    2010-01-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for…

  6. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature…

  7. The sensitivity of catchment runoff models to rainfall data at different spatial scales

    Directory of Open Access Journals (Sweden)

    V. A. Bell

    2000-01-01

    The sensitivity of catchment runoff models to rainfall is investigated at a variety of spatial scales using data from a dense raingauge network and weather radar. These data form part of the HYREX (HYdrological Radar EXperiment) dataset. They encompass records from 49 raingauges over the 135 km2 Brue catchment in south-west England together with 2 and 5 km grid-square radar data. Separate rainfall time-series for the radar and raingauge data are constructed on 2, 5 and 10 km grids, and as catchment average values, at a 15 minute time-step. The sensitivity of the catchment runoff models to these grid scales of input data is evaluated on selected convective and stratiform rainfall events. Each rainfall time-series is used to produce an ensemble of modelled hydrographs in order to investigate this sensitivity. The distributed model is shown to be sensitive to the locations of the raingauges within the catchment and hence to the spatial variability of rainfall over the catchment. Runoff sensitivity is strongest during convective rainfall, when a broader spread of modelled hydrographs results, with twice the variability of that arising from stratiform rain. Sensitivity to rainfall data and model resolution is explored and, surprisingly, best performance is obtained using a lower resolution of rainfall data and model. Results from the distributed catchment model, the Simple Grid Model, are compared with those obtained from a lumped model, the PDM. Performance from the distributed model is found to be only marginally better during stratiform rain (R2 of 0.922 compared to 0.911) but significantly better during convective rain (R2 of 0.953 compared to 0.909). The improved performance from the distributed model can, in part, be attributed to the excellence of the dense raingauge network, which would not be the norm for operational flood warning systems.
In the final part of the paper, the effect of rainfall resolution on the performance of the 2 km distributed
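    The R2 scores quoted above compare modelled and observed hydrographs; a minimal sketch of such a comparison, assuming a Nash-Sutcliffe-style efficiency and entirely synthetic hydrographs (illustrative data only, not the HYREX records):

```python
import numpy as np

# Nash-Sutcliffe-style efficiency between observed and simulated hydrographs:
# 1 minus the ratio of error variance to observed variance.
def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

t = np.linspace(0.0, 10.0, 200)
obs = np.exp(-((t - 4.0) ** 2))                   # observed flood peak
sim_distributed = np.exp(-((t - 4.1) ** 2))       # small timing error
sim_lumped = 0.8 * np.exp(-((t - 4.4) ** 2))      # larger timing/volume error

print(round(nse(obs, sim_distributed), 3), round(nse(obs, sim_lumped), 3))
```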

  8. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used to evaluate the level of microbial pollution. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce microbial pollution in ecosystems and water sources and thus help to predict the risk of food- and waterborne diseases. In this study we performed a sensitivity analysis for the KINEROS/STWIR model, developed to predict the transport of FIOs from manured fields to other fields and water bodies, in order to identify the input variables that control the transport uncertainty. The distributions of model input parameters were set to encompass values found in three-year experiments at the USDA-ARS OPE3 experimental site in Beltsville and in publicly available information. Sobol' indices and complementary regression trees were used to perform the global sensitivity analysis of the model and to explore the interactions between model input parameters affecting the proportion of FIOs removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence on model behavior, whereas soil and manure properties ranked lower. The field length had only a moderate effect on the sensitivity of the model output to the model inputs. Among the manure-related properties, the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. That underscored the need to better characterize the FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, results indicate the opportunity of obtaining large-scale estimates of FIO
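    First-order Sobol' indices of the kind used here can be estimated with a Saltelli-style sampling scheme; a minimal sketch with an invented stand-in for the FIO removal model (the functional form, parameters and ranges are hypothetical, chosen only to mimic "environmental controls dominate"):

```python
import numpy as np

rng = np.random.default_rng(3)

def fio_removal(X):
    # Invented stand-in: removal fraction driven by rainfall intensity,
    # rainfall duration and soil saturation, each scaled to [0, 1].
    intensity, duration, saturation = X[:, 0], X[:, 1], X[:, 2]
    return 1.0 - np.exp(-intensity * duration * (0.2 + 0.8 * saturation))

def sobol_first_order(model, n, d):
    # Saltelli-style estimator: two independent sample matrices A and B,
    # plus matrices AB_i with column i of A swapped for column i of B.
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

S = sobol_first_order(fio_removal, 20000, 3)
print(np.round(S, 3))
```

    In practice a library such as SALib automates the sampling and estimation; the loop above only illustrates the structure of the calculation.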

  9. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, where the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to brute force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
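    The core idea of passing adjoint information backward through black-box components can be sketched as chained vector-Jacobian products: one adjoint vector per response of the last component, rather than full sensitivities of every component. A toy two-component example (finite-difference Jacobians stand in for each component's adjoint capability; the functions are invented):

```python
import numpy as np

# Two black-box components: y = f1(x), z = f2(y); the response is z itself.
def f1(x):
    return np.array([x[0] * x[1], np.sin(x[2])])

def f2(y):
    return np.array([y[0] + y[1] ** 2])

def jacobian(f, x, eps=1e-6):
    # Finite-difference Jacobian, standing in for a component adjoint solver.
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    return J

x = np.array([1.0, 2.0, 0.5])
y = f1(x)

# Pull the adjoint seed w backward through f2, then through f1, without
# ever forming the full composite Jacobian.
w = np.array([1.0])
w_y = jacobian(f2, y).T @ w            # adjoint at the component interface
w_x = jacobian(f1, x).T @ w_y          # sensitivity of the response to x
print(w_x)
```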

  10. Developing cultural sensitivity: nursing students' experiences of a study abroad programme.

    Science.gov (United States)

    Ruddock, Heidi C; Turner, de Sales

    2007-08-01

    This paper is a report of a study to explore whether having an international learning experience as part of a nursing education programme promoted cultural sensitivity in nursing students. Background: Many countries are becoming culturally diverse, but healthcare systems and nursing education often remain mono-cultural and focused on the norms and needs of the majority culture. To meet the needs of all members of multicultural societies, nurses need to develop cultural sensitivity and incorporate it into caregiving. A Gadamerian hermeneutic phenomenological approach was adopted. Data were collected in 2004 using in-depth conversational interviews and analysed using the Turner method. Developing cultural sensitivity involves a complex interplay between becoming comfortable with the experience of making a transition from one culture to another, making adjustments to cultural differences, and growing personally. Central to this process was the students' experience of studying in an unfamiliar environment, experiencing stress and varying degrees of culture shock, and making a decision to take on the ways of the host culture. These actions led to an understanding that being sensitive to another culture requires being open to its dynamics, acknowledging social and political structures, and incorporating other people's beliefs about health and illness. The findings suggest that study abroad is a useful strategy for bridging the theory-practice divide. However, further research is needed with larger and more diverse groups of students to test the generalizability of the findings. Longitudinal research is also needed to assess the impact of study abroad programmes on the delivery of culturally sensitive care.

  11. Modeling of laser-driven hydrodynamics experiments

    Science.gov (United States)

    di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul

    2017-10-01

    Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively steady shock conditions, in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition.

  12. Evaluating the Hydrologic Sensitivities of Three Land Surface Models to Bound Uncertainties in Runoff Projections

    Science.gov (United States)

    Chiao, T.; Nijssen, B.; Stickel, L.; Lettenmaier, D. P.

    2013-12-01

    Hydrologic modeling is often used to assess the potential impacts of climate change on water availability and quality. A common approach in these studies is to calibrate the selected model(s) to reproduce historic stream flows prior to the application of future climate projections. This approach relies on the implicit assumptions that the sensitivities of these models to meteorological fluctuations will remain relatively constant under climate change and that these sensitivities are similar among models if all models are calibrated to the same historic record. However, even if the models are able to capture the historic variability in hydrological variables, differences in model structure and parameter estimation contribute to the uncertainties in projected runoff, which confounds the incorporation of these results into water resource management decision-making. A better understanding of the variability in hydrologic sensitivities between different models can aid in bounding this uncertainty. In this research, we characterized the hydrologic sensitivities of three watershed-scale land surface models through a case study of the Bull Run watershed in Northern Oregon. The Distributed Hydrology Soil Vegetation Model (DHSVM), Precipitation-Runoff Modeling System (PRMS), and Variable Infiltration Capacity model (VIC) were implemented and calibrated individually to historic streamflow using a common set of long-term, gridded forcings. In addition to analyzing model performances for a historic period, we quantified the temperature sensitivity (defined as change in runoff in response to change in temperature) and precipitation elasticity (defined as change in runoff in response to change in precipitation) of these three models via perturbation of the historic climate record using synthetic experiments. By comparing how these three models respond to changes in climate forcings, this research aims to test the assumption of constant and similar hydrologic sensitivities. Our
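    The two sensitivity measures defined in this abstract can be sketched with a hypothetical annual water-balance model standing in for DHSVM, PRMS, or VIC (all functional forms and parameter values below are invented for illustration):

```python
import numpy as np

# Toy annual water balance: runoff = precipitation minus a temperature-driven
# evapotranspiration demand, capped by the available precipitation.
def runoff(precip, temp):
    et = np.minimum(precip, 300.0 + 40.0 * temp)
    return precip - et

P0, T0 = 2000.0, 8.0                   # baseline climate (mm/yr, deg C)
Q0 = runoff(P0, T0)

# Precipitation elasticity: fractional runoff change per fractional P change.
dP = 0.10 * P0
elasticity = ((runoff(P0 + dP, T0) - Q0) / Q0) / (dP / P0)

# Temperature sensitivity: runoff change (mm/yr) per degree of warming.
temp_sens = runoff(P0, T0 + 1.0) - Q0

print(round(elasticity, 2), temp_sens)
```

    Repeating this perturbation for each calibrated model, as the study does, reveals whether the models share similar elasticities despite matching the same historic streamflow.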

  13. Local sensitivity analysis of a distributed parameters water quality model

    International Nuclear Information System (INIS)

    Pastres, R.; Franco, D.; Pecenik, G.; Solidoro, C.; Dejak, C.

    1997-01-01

    A local sensitivity analysis is presented of a 1D water-quality reaction-diffusion model. The model describes the seasonal evolution of one of the deepest channels of the lagoon of Venice, that is affected by nutrient loads from the industrial area and heat emission from a power plant. Its state variables are: water temperature, concentrations of reduced and oxidized nitrogen, Reactive Phosphorous (RP), phytoplankton, and zooplankton densities, Dissolved Oxygen (DO) and Biological Oxygen Demand (BOD). Attention has been focused on the identifiability and the ranking of the parameters related to primary production in different mixing conditions
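    Local sensitivity of this kind, i.e. the derivative of a state variable with respect to a parameter around nominal values, can be sketched with a toy logistic growth equation standing in for the lagoon model's primary-production terms (all values hypothetical):

```python
# Local sensitivity of a state variable to a parameter via central finite
# differences around the nominal parameter value.
def simulate(mu, K=10.0, x0=0.1, dt=0.01, steps=1000):
    # toy phytoplankton-like state under logistic growth, forward Euler
    x = x0
    for _ in range(steps):
        x += dt * mu * x * (1.0 - x / K)
    return x

def local_sensitivity(mu, h=1e-4):
    # dX(T)/dmu around the nominal growth rate mu
    return (simulate(mu + h) - simulate(mu - h)) / (2.0 * h)

print(round(simulate(0.5), 2), round(local_sensitivity(0.5), 2))
```

    Ranking parameters by such derivatives (suitably normalized) underlies the identifiability analysis described in the abstract.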

  14. The skin sensitization potential of resorcinol: experience with the local lymph node assay.

    Science.gov (United States)

    Basketter, David A; Sanders, David; Jowsey, Ian R

    2007-04-01

    Resorcinol is a simple aromatic chemical (1,3-benzenediol) that has found widespread use, particularly as a coupler in hair dyes. Clinical experience clearly shows that resorcinol is a skin sensitizer, albeit an uncommon one. By contrast, predictive methods, both animal and human, have previously failed to identify resorcinol as such. Here, we describe the outcome of a recent local lymph node assay performed in accordance with Organisation for Economic Co-operation and Development guideline 429, which correctly identified resorcinol as a skin sensitizer. Clear evidence of a dose response was apparent, and an EC3 value of approximately 6% was calculated. This suggests that the skin-sensitizing potency of resorcinol is approximately 2 orders of magnitude lower than that of p-phenylenediamine but similar to that of hexyl cinnamic aldehyde. These data show the importance of adherence to test guidelines and bring the clinical experience with resorcinol into line with that obtained in predictive animal methods.

  15. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude in energy, spatial and temporal scales. Energies run from tens of K to hundreds of millions of K, while temporal and spatial scales range from fs to years and from nm to m and beyond, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of the IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a “two-way road” as far as the accuracy of the data is concerned, requiring close interactions between the AMO and plasma modeling communities.

  16. Language Sensitivity, the RESPECT Model, and Continuing Education.

    Science.gov (United States)

    Aycock, Dawn M; Sims, Traci T; Florman, Terri; Casseus, Karis T; Gordon, Paula M; Spratling, Regena G

    2017-11-01

    Some words and phrases used by health care providers may be perceived as insensitive by patients, which could negatively affect patient outcomes and satisfaction. However, a distinct concept that can be used to describe and synthesize these words and phrases does not exist. The purpose of this article is to propose the concept of language sensitivity, defined as the use of respectful, supportive, and caring words with consideration for a patient's situation and diagnosis. Examples of how language sensitivity may be lacking in nurse-patient interactions are described, and solutions are provided using the RESPECT (Rapport, Environment/Equipment, Safety, Privacy, Encouragement, Caring/Compassion, and Tact) model. RESPECT can be used as a framework to inform and remind nurses about the importance of sensitivity when communicating with patients. Various approaches can be used by nurse educators to promote language sensitivity in health care. Case studies and a lesson plan are included. J Contin Educ Nurs. 2017;48(11):517-524. Copyright 2017, SLACK Incorporated.

  17. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.

  18. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  19. Gradient-enhanced TOCSY experiments with improved sensitivity and solvent suppression.

    Science.gov (United States)

    Fulton, D B; Hrabal, R; Ni, F

    1996-09-01

    Gradient-enhanced versions of the homonuclear TOCSY experiment are described, with solvent suppression and sensitivity superior to that of a conventional TOCSY experiment. The pulse sequences are constructed by appending a WATERGATE module to a z-filtered TOCSY experiment. Pulsed-field gradients and appropriately phased selective rf pulses are used to maintain precise control of the water magnetization vector. Problems associated with radiation damping and spin-locking of the water magnetization are thus alleviated. The water magnetization is returned to equilibrium prior to each acquisition, which improves water suppression and minimizes signal losses due to saturation transfer.

  20. Building Cultural Sensitivity and Interprofessional Collaboration Through a Study Abroad Experience.

    Science.gov (United States)

    Gilliland, Irene; Attridge, Russell T; Attridge, Rebecca L; Maize, David F; McNeill, Jeanette

    2016-01-01

    Study abroad (SA) experiences for health professions students may be used to heighten cultural sensitivity to future patients and incorporate interprofessional education (IPE). Two groups of nursing and pharmacy students participated in an SA elective over a 2-year period, traveling to China and India. Both groups improved significantly in knowledge, awareness, and skills following the travel experiences. Student reflections indicate that the SA experience was transformative, changing their views of travel, other cultures, personal environment, collaboration with other health professionals, and themselves. Use of SA programs is a novel method to encourage IPE, with a focus on enhancing the acquisition of cultural competency skills. Copyright 2016, SLACK Incorporated.

  1. Sensitivity and uncertainty analysis of a polyurethane foam decomposition model

    Energy Technology Data Exchange (ETDEWEB)

    HOBBS,MICHAEL L.; ROBINSON,DAVID G.

    2000-03-14

    Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
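
    The analytical propagation described above (a standard deviation built from derivatives of the response with respect to each input) can be sketched with a generic first-order (delta-method) routine. This is a minimal illustration, not the finite-element foam model: the stand-in response function and all numbers are hypothetical.

```python
import numpy as np

def burn_velocity(p):
    # Hypothetical stand-in for the finite-element response:
    # a smooth nonlinear function of the input parameters.
    return 1e-3 * p[0] * np.sqrt(p[1]) / (1.0 + p[2])

def propagate_uncertainty(model, p0, sigmas, rel_step=1e-4):
    """First-order (delta-method) standard deviation of model(p0).

    Each parameter is perturbed with a central difference; the
    output variance is the sum of (sensitivity * parameter sigma)^2.
    """
    p0 = np.asarray(p0, dtype=float)
    var = 0.0
    for i, s in enumerate(sigmas):
        h = rel_step * max(abs(p0[i]), 1.0)
        hi, lo = p0.copy(), p0.copy()
        hi[i] += h
        lo[i] -= h
        dydp = (model(hi) - model(lo)) / (2.0 * h)
        var += (dydp * s) ** 2
    return np.sqrt(var)

sigma_v = propagate_uncertainty(burn_velocity, [0.9, 4.0, 0.5],
                                [0.05, 0.2, 0.02])
```

    The numerical-noise issue the abstract raises appears here in miniature: if `rel_step` is made too small, the finite differences amplify round-off; too large, and truncation error dominates.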

  2. Sensitivity in forward modeled hyperspectral reflectance due to phytoplankton groups

    Science.gov (United States)

    Manzo, Ciro; Bassani, Cristiana; Pinardi, Monica; Giardino, Claudia; Bresciani, Mariano

    2016-04-01

    Phytoplankton is an integral part of the ecosystem, affecting trophic dynamics, nutrient cycling, habitat condition, and fisheries resources. The types of phytoplankton and their concentrations are used to describe the status of water bodies and the processes within them. This study investigates bio-optical modeling of phytoplankton functional types (PFT) in terms of pigment composition, demonstrating the capability of remote sensing to recognize freshwater phytoplankton. In particular, a sensitivity analysis of simulated hyperspectral water reflectance (with the band settings of HICO, APEX, EnMAP, PRISMA and Sentinel-3) of the productive eutrophic waters of the Mantua lakes (Italy) environment is presented. The bio-optical model adopted for simulating the hyperspectral water reflectance takes into account the dependency of reflectance on the geometric conditions of the light field, on inherent optical properties (backscattering and absorption coefficients) and on concentrations of water quality parameters (WQPs). The model works in the 400-750 nm wavelength range, while the model parametrization is based on a comprehensive dataset of WQP concentrations and specific inherent optical properties of the study area, collected in field surveys carried out from May to September of 2011 and 2014. The following phytoplankton groups, with their specific absorption coefficients, a*Φi(λ), were used during the simulation: Chlorophyta, Cyanobacteria with phycocyanin, Cyanobacteria and Cryptophytes with phycoerythrin, Diatoms with carotenoids and mixed phytoplankton. The phytoplankton absorption coefficient aΦ(λ) is modelled by multiplying the weighted sum of the PFTs, Σpia*Φi(λ), by the chlorophyll-a concentration (Chl-a). To highlight the variability of water reflectance due to variation of phytoplankton pigments, the sensitivity analysis was performed by keeping the WQPs constant (i.e., Chl-a = 80 mg/l, total suspended matter = 12.58 g/l and yellow substances = 0.27 m-1). The sensitivity analysis was

  3. A Workflow for Global Sensitivity Analysis of PBPK Models

    Directory of Open Access Journals (Sweden)

    Kevin eMcNally

    2011-06-01

    Full Text Available Physiologically based pharmacokinetic models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structures of PBPK models are ideal frameworks into which disparate in vitro and in vivo data can be integrated and utilised to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we have defined a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
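
    The kind of non-linear process the abstract names, enzyme saturation, can be illustrated with a minimal one-compartment model using Michaelis-Menten elimination. This is a generic sketch, not any PBPK model from the report; all parameter values are arbitrary placeholders.

```python
import numpy as np

def concentration(dose, vmax=10.0, km=2.0, vd=40.0, t_end=24.0, dt=0.01):
    """One-compartment model with Michaelis-Menten (saturable) elimination.

    dC/dt = -Vmax * C / (Vd * (Km + C)), with C(0) = dose / Vd,
    integrated with a simple forward-Euler scheme.
    """
    c = dose / vd
    for _ in range(int(t_end / dt)):
        c += dt * (-vmax * c / (vd * (km + c)))
    return c

# Nonlinearity check: doubling the dose more than doubles the
# residual concentration at 24 h, because elimination saturates.
low = concentration(100.0)
high = concentration(200.0)
```

    It is exactly this non-proportional response that makes one-factor-at-a-time screening inadequate and motivates the global, interaction-aware analysis the workflow provides.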

  4. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be expressed mathematically through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system behavior at an altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce Relative Sensitivity Analysis, a method for evaluating predictive power under parameter estimation uncertainty. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows the sources of incorrect predictions to be uncovered and suggests a way to overcome the estimation uncertainties.

  5. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we found in our review only very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  6. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we found in our review only very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
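
    The variance-based measures advocated above can be illustrated with a minimal Monte Carlo estimator of first-order Sobol indices (two base sample matrices plus column substitution, with Jansen's estimator). This is a sketch, not the authors' own recipe; the linear test model and sample size are arbitrary.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices.

    S_i is the fraction of output variance explained by parameter i
    alone; an OAT screening cannot provide this decomposition.
    """
    rng = np.random.default_rng(seed)
    a = rng.random((n_samples, n_params))
    b = rng.random((n_samples, n_params))
    fa, fb = model(a), model(b)
    var = np.var(np.concatenate([fa, fb]))
    indices = []
    for i in range(n_params):
        ab = a.copy()
        ab[:, i] = b[:, i]          # B's column i substituted into A
        fab = model(ab)
        # Jansen-style estimator for the first-order effect of x_i
        s_i = (var - 0.5 * np.mean((fb - fab) ** 2)) / var
        indices.append(s_i)
    return np.array(indices)

# Linear test model y = x1 + 2*x2 on U(0,1)^2:
# the exact indices are 0.2 and 0.8.
s = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], n_params=2)
```

    For a linear model the indices sum to one; interactions in a non-linear model show up as a shortfall between the sum of the S_i and unity.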

  7. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long time analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of containment of nuclear power plant. However, efficiency of the design process should be also taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on sensitivity of finite element model of prestressed concrete containment to changes in geometry, loads and other factors is presented. Importance of steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials as well as density of finite elements mesh is assessed in the main stages of life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases simplified modeling process can lead to significant reduction of work time without degradation of the results.

  8. Azimuthally sensitive Hanbury Brown-Twiss interferometry measured with the ALICE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Gramling, Johanna Lena

    2011-07-01

    Bose-Einstein correlations of identical pions emitted in high-energy particle collisions provide information about the size of the source region in space-time. If analyzed via HBT interferometry in several directions with respect to the reaction plane, the shape of the source can be extracted. Hence, HBT interferometry provides an excellent tool to probe the characteristics of the quark-gluon plasma possibly created in high-energy heavy-ion collisions. This thesis introduces the main theoretical concepts of particle physics, the quark-gluon plasma and the technique of HBT interferometry. The ALICE experiment at the CERN Large Hadron Collider (LHC) is described, and the first azimuthally integrated results measured in Pb-Pb collisions at √(s_NN) = 2.76 TeV with ALICE are presented. A detailed two-track resolution study leading to a global pair cut for HBT analyses has been performed, and a framework for the event plane determination has been developed. The results from azimuthally sensitive HBT interferometry are compared to theoretical models and previous measurements at lower energies. Oscillations of the transverse radii as a function of the pair emission angle are observed, consistent with a source that is extended out-of-plane.

  9. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

    A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling; a linear perturbation method (LPM) is therefore used to prevent the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with the LPM are carried out. (Author)

  10. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  11. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which have little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.
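
    The archive idea above can be sketched with a toy GA. Everything here is hypothetical (a two-parameter test function in which the first parameter dominates the fitness, truncation selection, Gaussian mutation), not the lumped reactor model of the paper; the point is only that archive statistics expose which parameter the search pins down.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(pop):
    # Toy objective to minimize: the first parameter dominates the
    # output; the second barely influences it.
    return 100.0 * (pop[:, 0] - 0.5) ** 2 + 0.01 * (pop[:, 1] - 0.5) ** 2

pop = rng.random((60, 2))
archive = []                      # best solution of each generation
for generation in range(80):
    order = np.argsort(fitness(pop))
    archive.append(pop[order[0]].copy())
    parents = pop[order[:30]]     # truncation selection
    children = parents[rng.integers(0, 30, size=60)]
    pop = np.clip(children + rng.normal(0.0, 0.02, size=children.shape),
                  0.0, 1.0)

arch = np.array(archive[40:])     # statistics over the late archive
spread = arch.std(axis=0)         # per-parameter stabilization measure
```

    The dominant parameter is expected to sit tightly around its optimum in the late archive, while the insensitive one keeps drifting; ranking parameters by `spread` gives the qualitative importance ordering the paper describes.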

  12. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  13. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
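
    The ideal observer step can be sketched in miniature: simulate population responses under stimulation and at baseline, then score how often an observer comparing the two picks the stimulated interval. This is a generic stand-in assuming Poisson spike counts and a two-interval comparison; the firing rates are illustrative, not fitted parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_probability(rate_stim, rate_base=5.0, n_trials=20_000):
    """Ideal-observer detection in a two-interval task.

    The observer compares one Poisson spike count drawn under
    stimulation against one drawn at baseline; ties are broken
    by a coin flip (scored as 0.5).
    """
    stim = rng.poisson(rate_stim, n_trials)
    base = rng.poisson(rate_base, n_trials)
    correct = (stim > base) + 0.5 * (stim == base)
    return correct.mean()

# Detection performance grows with the stimulation-evoked rate.
p_weak = detection_probability(6.0)
p_strong = detection_probability(12.0)
```

    Sweeping the evoked rate (as a proxy for pulse amplitude or frequency) traces out a psychometric curve of the kind the model is fit to.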

  14. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    Full Text Available The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows that the model is reasonably capable of representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
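
    The finding that the growth rate, not the initial density, governs the long-run vegetation state can be illustrated with a minimal logistic growth integration. This is a generic sketch under assumed dynamics (logistic growth toward a fixed carrying capacity), not the paper's calibrated model; all values are illustrative.

```python
def vegetation_density(p0, growth_rate, years=20.0, dt=0.1, capacity=1.0):
    """Logistic growth toward the carrying capacity.

    dP/dt = r * P * (1 - P / K), integrated with forward Euler,
    where the evolution depends on the initial population p0 and
    the growth rate r.
    """
    p = p0
    for _ in range(int(years / dt)):
        p += dt * growth_rate * p * (1.0 - p / capacity)
    return p

# After a couple of decades the outcome is governed by the growth
# rate: halving r shifts the result far more than halving p0.
ref = vegetation_density(0.1, 0.5)
half_p0 = vegetation_density(0.05, 0.5)
half_r = vegetation_density(0.1, 0.25)
```

    Because the logistic solution forgets its initial condition exponentially fast, the sensitivity to `p0` decays over time while the sensitivity to `growth_rate` persists, mirroring the paper's conclusion.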

  15. Performance of high-resolution position-sensitive detectors developed for storage-ring decay experiments

    International Nuclear Information System (INIS)

    Yamaguchi, T.; Suzaki, F.; Izumikawa, T.; Miyazawa, S.; Morimoto, K.; Suzuki, T.; Tokanai, F.; Furuki, H.; Ichihashi, N.; Ichikawa, C.; Kitagawa, A.; Kuboki, T.; Momota, S.; Nagae, D.; Nagashima, M.; Nakamura, Y.; Nishikiori, R.; Niwa, T.; Ohtsubo, T.; Ozawa, A.

    2013-01-01

    Highlights: • Position-sensitive detectors were developed for storage-ring decay spectroscopy. • Fiber scintillation and silicon strip detectors were tested with heavy ion beams. • A new fiber scintillation detector showed an excellent position resolution. • Position and energy detection by silicon strip detectors enable full identification. -- Abstract: As next generation spectroscopic tools, heavy-ion cooler storage rings will be a unique application of highly charged RI beam experiments. Decay spectroscopy of highly charged rare isotopes provides us important information relevant to the stellar conditions, such as for the s- and r-process nucleosynthesis. In-ring decay products of highly charged RI will be momentum-analyzed and reach a position-sensitive detector set-up located outside of the storage orbit. To realize such in-ring decay experiments, we have developed and tested two types of high-resolution position-sensitive detectors: silicon strips and scintillating fibers. The beam test experiments resulted in excellent position resolutions for both detectors, which will be available for future storage-ring experiments

  16. Surface-sensitive molecular interferometry: beyond 3He spin echo experiments

    Science.gov (United States)

    Cantin, Joshua T.; Krems, Roman V.; Godsi, Oded; Maniv, Tsofar; Alexandrowicz, Gil

    2017-04-01

    3He atoms can be used as surface-sensitive atomic interferometers in 3He spin echo experiments to measure surface morphology, molecular and atomic surface diffusion dynamics, and surface vibrations. However, using the hyperfine states of molecules gives experiments the potential to be less expensive and more sensitive, and to include angle-dependent interactions. The manifold of hyperfine states of molecules is large in comparison to the two nuclear spin states used in 3He spin echo experiments and allows for increased precision, while simultaneously complicating experimental interpretation. Here, we present the theoretical formulation required to interpret these experiments. In particular, we show how to determine the effect of magnetic lensing on the molecular hyperfine states and use a modified form of the transfer matrix method to describe molecular propagation quantum mechanically throughout the experiment. We also discuss how to determine the scattering matrix from the experimental observables via machine learning techniques. As an example, we perform numerical calculations using nine hyperfine states of ortho-hydrogen and compare the results to experiment. This work was funded by NSERC of Canada and the European Research Council under the European Union's seventh framework program (FP/2007-2013)/ERC Grant 307267.

  17. Earth system sensitivity inferred from Pliocene modelling and data

    Science.gov (United States)

    Lunt, D.J.; Haywood, A.M.; Schmidt, G.A.; Salzmann, U.; Valdes, P.J.; Dowsett, H.J.

    2010-01-01

    Quantifying the equilibrium response of global temperatures to an increase in atmospheric carbon dioxide concentrations is one of the cornerstones of climate research. Components of the Earth's climate system that vary over long timescales, such as ice sheets and vegetation, could have an important effect on this temperature sensitivity, but have often been neglected. Here we use a coupled atmosphere-ocean general circulation model to simulate the climate of the mid-Pliocene warm period (about three million years ago), and analyse the forcings and feedbacks that contributed to the relatively warm temperatures. Furthermore, we compare our simulation with proxy records of mid-Pliocene sea surface temperature. Taking these lines of evidence together, we estimate that the response of the Earth system to elevated atmospheric carbon dioxide concentrations is 30-50% greater than the response based on those fast-adjusting components of the climate system that are used traditionally to estimate climate sensitivity. We conclude that targets for the long-term stabilization of atmospheric greenhouse-gas concentrations aimed at preventing a dangerous human interference with the climate system should take into account this higher sensitivity of the Earth system. © 2010 Macmillan Publishers Limited. All rights reserved.

  18. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach in which distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates, or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) and on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example the geometry and volumetric distribution of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
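The core FRI calculation described above lends itself to a compact Monte Carlo sketch. The distributions below are illustrative placeholders, not the published BFxRM inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical parameter distributions (illustrative values only, not the
# published BFxRM inputs): applied fall load and bone failure load, in N.
applied_load = rng.normal(loc=2000.0, scale=400.0, size=n)
bone_strength = rng.normal(loc=3500.0, scale=500.0, size=n)

# Fracture risk index: ratio of applied load to bone strength.
fri = applied_load / bone_strength

# Fracture is predicted when the applied load exceeds bone strength.
p_fracture = np.mean(fri > 1.0)
print(f"mean FRI = {fri.mean():.2f}, P(fracture) = {p_fracture:.4f}")
```

With the placeholder distributions replaced by the model's actual parameter distributions, the same sampling loop yields a fracture probability with an associated uncertainty rather than a single deterministic value.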

  19. Understanding earth system models: how Global Sensitivity Analysis can help

    Science.gov (United States)

    Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts on uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.

  20. Uncertainty and Sensitivity Analysis of Filtration Models for Non-Fickian transport and Hyperexponential deposition

    DEFF Research Database (Denmark)

    Yuan, Hao; Sin, Gürkan

    2011-01-01

Uncertainty and sensitivity analyses are carried out to investigate the predictive accuracy of the filtration models for describing non-Fickian transport and hyperexponential deposition. Five different modeling approaches, involving the elliptic equation with different types of distributed filtration coefficients and the CTRW equation expressed in Laplace space, are selected to simulate eight experiments. These experiments involve both porous media and colloid-medium interactions of different heterogeneity degrees. The uncertainty of elliptic equation predictions with distributed filtration coefficients is larger than that with a single filtration coefficient. The uncertainties of model predictions from the elliptic equation and CTRW equation in Laplace space are minimal for solute transport. Higher uncertainties of parameter estimation and model outputs are observed in the cases with the porous...

  1. About the use of rank transformation in sensitivity analysis of model output

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Sobol', Ilya M

    1995-01-01

Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first-order terms (the 'main effects'), at the expense of the 'interactions' and 'higher-order interactions'. As a result, the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis.
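A toy example illustrates why ranks help for monotonic but strongly nonlinear models. The model below is an assumption chosen for illustration, not one of the models analysed in the note:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, 10_000)
x2 = rng.uniform(0, 1, 10_000)

# Monotonic but strongly nonlinear toy model: x1 dominates, x2 adds
# a small additive contribution.
y = np.exp(10 * x1) + x2

# Pearson correlation on the raw values understates x1's influence...
r_raw, _ = stats.pearsonr(x1, y)
# ...while the rank-transformed (Spearman) coefficient recovers the
# near-perfect monotonic dependence.
r_rank, _ = stats.spearmanr(x1, y)
print(f"raw r = {r_raw:.3f}, rank r = {r_rank:.3f}")
```

The raw coefficient is well below one because the exponential response is far from linear, while the rank coefficient is close to one; for a model dominated by interactions, however, both measures would miss the interaction terms, which is the pitfall the note describes.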

  2. Particle transport model sensitivity on wave-induced processes

    Science.gov (United States)

    Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna

    2017-04-01

Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state-dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. The particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. In the real ocean, however, the waves also act as a reservoir for momentum and energy, because different amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity, and the resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The sensitivity analyses performed demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for drift-model performance, especially during extremes. These effects cannot be neglected in search and rescue, oil-spill, transport of biological material, or larval drift modelling.

  3. Bicycle Rider Control: Observations, Modeling & Experiments

    OpenAIRE

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby develop well handling bicycles for specific user groups in a much shorter time span. The recent benchmarking of the Whipple bicycle model for the balance and steer of a bicycle is an opening enabling t...

  4. Modelling of isotope exchange experiments in JET

    International Nuclear Information System (INIS)

    Ehrenberg, J.

    1987-01-01

Isotope exchange experiments from hydrogen to deuterium in JET are theoretically described by employing a simple global isotope exchange model. Experimental results for discharges with limiter temperatures around 250 °C can be approximated by this model if an additional slow diffusion process of hydrogen in the limiter bulk is assumed. In discharges where thermal desorption occurs due to higher limiter temperatures (around 1000 °C or above; post-carbonisation discharges), the changeover process seems to be governed predominantly by thermal processes. (orig.)

  5. Sensitivity Study on Aging Elements Using Degradation Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Man-Woong; Lee, Sang-Kyu; Kim, Hyun-Koon; Ryu, Yong-Ho [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Choi, Yong Won; Park, Chang Hwan; Lee, Un Chul [Seoul National Univ., Seoul (Korea, Republic of)

    2008-05-15

To evaluate the effects on safety margins of the performance degradation of systems and components due to ageing in CANDU reactors, the aging elements of systems and components must be identified, and a degradation model for each element must be developed that adequately predicts the aging value over the operating years. However, a degradation model alone is not sufficient to assess the change in safety margin due to ageing, because the aging elements are not independent parameters. For example, the moderator temperature coefficient (MTC) is an important factor in the power distribution and is affected by the coolant flow rate; hence, all aging elements relevant to the flow rate in different systems or components could influence the MTC. It is therefore necessary to identify the major elements affecting the safety margin. In this regard, this study investigates these coupled effects on the safety margin by conducting a sensitivity analysis.

  6. Uncertainty and sensitivity analysis of environmental transport models

    International Nuclear Information System (INIS)

    Margulies, T.S.; Lancaster, L.E.

    1985-01-01

    An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contribution of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site) step-wise regression analyses were performed on transformations of the spatial concentration patterns simulated. 27 references, 2 figures, 3 tables
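The Latin hypercube scheme used to generate the factor layout can be sketched generically in a few lines; this is an illustrative implementation, not the sampling code used in the CRAC-2 study:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """One stratified sample per equal-probability bin in each dimension."""
    # Split [0, 1) into n_samples strata per variable; jitter within each.
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle the strata independently per variable to decouple dimensions.
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(2)
# e.g. eight CRAC-2-like input variables, 100 model runs
sample = latin_hypercube(100, 8, rng)

# Each column has exactly one point in each of the 100 equal bins:
counts = np.floor(sample * 100).astype(int)
print(all(len(set(counts[:, j])) == 100 for j in range(8)))
```

The unit-interval samples would then be mapped through the inverse CDF of each input's assumed distribution before being fed to the transport model.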

  7. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

This study proposes a mathematical model of anthroponotic visceral leishmaniasis epidemics with a saturated infection rate and recommends different control strategies to manage the spread of the disease in the community. First, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. A disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of the sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
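For readers unfamiliar with the next-generation method, a minimal sketch follows. The two-compartment matrices use made-up illustrative rates, not the paper's parameter values:

```python
import numpy as np

# Toy host-vector next-generation construction (illustrative rates only):
# F holds the new-infection terms, V the transition/removal terms, both
# linearized at the disease-free equilibrium.
F = np.array([[0.0, 0.3],   # humans infected by sandflies
              [0.2, 0.0]])  # sandflies infected by humans
V = np.array([[0.1, 0.0],   # human recovery/removal rate
              [0.0, 0.25]]) # sandfly death rate

# R0 is the spectral radius of the next-generation matrix F V^{-1}.
ngm = F @ np.linalg.inv(V)
r0 = max(abs(np.linalg.eigvals(ngm)))
print(f"R0 = {r0:.3f}")
```

With these placeholder rates the next-generation matrix is [[0, 1.2], [2, 0]], giving R0 = sqrt(2.4); sensitivity indices of R0 to each rate are then obtained by differentiating (or perturbing) this expression parameter by parameter.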

  8. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis used a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways; however, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important; however, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)
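The one-at-a-time percentage variation used in such screening studies can be sketched as below. The dose() function is a hypothetical multiplicative stand-in, not the FOOD III pathway equations, and all parameter values are invented for illustration:

```python
def dose(params):
    """Hypothetical stand-in for a food-chain dose calculation:
    dose = concentration * transfer factor * intake * dose factor."""
    return params["conc"] * params["transfer"] * params["intake"] * params["dose_factor"]

base = {"conc": 1.0, "transfer": 0.02, "intake": 500.0, "dose_factor": 5e-8}

# Vary each parameter by +10% and +50% (two of the percentage levels),
# one at a time, and record the relative change in the predicted dose.
results = {}
for name in base:
    for pct in (0.10, 0.50):
        perturbed = dict(base)
        perturbed[name] = base[name] * (1 + pct)
        results[(name, pct)] = (dose(perturbed) - dose(base)) / dose(base)

# In this purely multiplicative stand-in every parameter is equally
# important: a 10% increase in any input gives a 10% increase in dose.
print(results[("dose_factor", 0.10)])
```

In the real model the pathway equations are not purely multiplicative, so the same loop separates important parameters (large relative dose change) from unimportant ones, which is the ranking the abstract reports.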

  10. Bicycle Rider Control : Observations, Modeling & Experiments

    NARCIS (Netherlands)

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby

  11. The Impact of Incorporating Chemistry to Numerical Weather Prediction Models: An Ensemble-Based Sensitivity Analysis

    Science.gov (United States)

    Barnard, P. A.; Arellano, A. F.

    2011-12-01

    Data assimilation has emerged as an integral part of numerical weather prediction (NWP). More recently, atmospheric chemistry processes have been incorporated into NWP models to provide forecasts and guidance on air quality. There is, however, a unique opportunity within this coupled system to investigate the additional benefit of constraining model dynamics and physics due to chemistry. Several studies have reported the strong interaction between chemistry and meteorology through radiation, transport, emission, and cloud processes. To examine its importance to NWP, we conduct an ensemble-based sensitivity analysis of meteorological fields to the chemical and aerosol fields within the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and the Data Assimilation Research Testbed (DART) framework. In particular, we examine the sensitivity of the forecasts of surface temperature and related dynamical fields to the initial conditions of dust and aerosol concentrations in the model over the continental United States within the summer 2008 time period. We use an ensemble of meteorological and chemical/aerosol predictions within WRF-Chem/DART to calculate the sensitivities. This approach is similar to recent ensemble-based sensitivity studies in NWP. The use of an ensemble prediction is appealing because the analysis does not require the adjoint of the model, which to a certain extent becomes a limitation due to the rapidly evolving models and the increasing number of different observations. Here, we introduce this approach as applied to atmospheric chemistry. We also show our initial results of the calculated sensitivities from joint assimilation experiments using a combination of conventional meteorological observations from the National Centers for Environmental Prediction, retrievals of aerosol optical depth from NASA's Moderate Resolution Imaging Spectroradiometer, and retrievals of carbon monoxide from NASA's Measurements of Pollution in the
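The ensemble-based sensitivity itself reduces to a regression across ensemble members, which is why no adjoint model is required. A minimal sketch with a fabricated ensemble (all numbers are illustrative, not WRF-Chem/DART output):

```python
import numpy as np

rng = np.random.default_rng(3)
n_members = 50

# Hypothetical ensemble: perturbed initial aerosol optical depth and a
# forecast metric (surface temperature) that responds to it plus noise.
aod0 = rng.normal(0.15, 0.05, n_members)                       # initial condition
temp_fc = 290.0 - 8.0 * aod0 + rng.normal(0, 0.1, n_members)   # forecast metric

# Ensemble sensitivity: regression of the forecast metric onto the
# initial condition, dJ/dx ~ cov(J, x) / var(x) -- no adjoint needed.
sens = np.cov(temp_fc, aod0)[0, 1] / np.var(aod0, ddof=1)
print(f"dT/dAOD ~ {sens:.2f} K per unit AOD")
```

The recovered slope approximates the imposed -8 K per unit AOD; with real ensembles the same covariance ratio is computed field point by field point to map where forecasts are sensitive to the chemical initial conditions.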

  12. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  13. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
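The first (screening) step of the two-step approach can be illustrated with a minimal Morris-type implementation. This radial one-at-a-time variant on the unit hypercube is a sketch under simplifying assumptions, not the authors' code:

```python
import numpy as np

def morris_screen(model, n_params, n_traj=40, delta=0.5, rng=None):
    """Mean absolute elementary effect (mu*) per parameter.
    Minimal radial variant: perturb one parameter at a time by delta."""
    rng = rng or np.random.default_rng(0)
    effects = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.random(n_params) * (1 - delta)  # keep x + delta inside [0, 1]
        y0 = model(x)
        for j in range(n_params):
            xp = x.copy()
            xp[j] += delta
            effects[t, j] = (model(xp) - y0) / delta  # elementary effect
    return np.abs(effects).mean(axis=0)

# Toy model: x0 dominant, x1 moderate and nonlinear, x2 inactive.
model = lambda x: 10 * x[0] + 2 * x[1] ** 2 + 0 * x[2]
mu_star = morris_screen(model, 3)
print(mu_star)  # large, moderate, zero -> screen out x2 before the gPCE step
```

Parameters with negligible mu* are fixed at nominal values, and only the surviving subset enters the (far more expensive) variance-based gPCE step, which is how the two-step approach cuts the total number of model runs.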

  14. An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.

    Science.gov (United States)

    Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C

    2013-08-01

To simulate the consequences of management in dairy herds, individual-based herd models are very useful and have become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions of reproductive performance, particularly concerning the calving-to-first-ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phases), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data from the database used to build the model and from the literature to validate the model. Despite comprising a whole series of reproductive steps, the model simulated realistic global reproduction outputs. It simulated well the overall reproductive performance observed in farms, in terms of both success rate (recalving rate) and reproduction delays (calving interval). The model is intended to be integrated into herd simulation models to test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.
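The event-chain structure of such a model can be sketched as below. Every probability here is an illustrative placeholder, not a fitted value from the experiment, and the chain is heavily simplified relative to the published model:

```python
import random

def simulate_reproduction(bcs, rng):
    """One cow's reproductive cycle as a chain of stochastic events.
    All probabilities are invented placeholders for illustration."""
    # Thinner cows (low BCS) resume cyclicity less often and conceive
    # less readily -- the BCS sensitivity the model encodes per step.
    p_cycling = 0.9 if bcs >= 2.75 else 0.7
    p_conception = 0.45 if bcs >= 2.75 else 0.35
    if rng.random() > p_cycling:
        return False                      # never resumed cyclicity
    for _ in range(5):                    # up to five inseminations
        if rng.random() < p_conception:
            return rng.random() > 0.1     # 10% late embryonic loss
    return False                          # culled as non-pregnant

rng = random.Random(4)
rates = {}
for bcs in (2.5, 3.25):
    rates[bcs] = sum(simulate_reproduction(bcs, rng) for _ in range(20_000)) / 20_000
    print(f"BCS {bcs}: recalving rate ~ {rates[bcs]:.2f}")
```

Running many such stochastic cow-level chains and aggregating them is what lets an individual-based herd model turn per-step sensitivities into herd-level outputs such as recalving rate and calving interval.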

  15. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent

    2010-01-01

These affordances are represented probabilistically with grasp densities, which correspond to continuous density functions defined on the space of 6D gripper poses. A grasp density characterizes an object's grasp affordance; densities are linked to visual stimuli through registration with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle...

  16. A position sensitive silicon detector for AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy)

    CERN Document Server

    Gligorova, A

    2014-01-01

The AEgIS experiment (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is located at the Antiproton Decelerator (AD) at CERN and studies antimatter. Its main goal is to carry out the first measurement of the gravitational acceleration of antimatter in Earth's gravitational field to 1% relative precision. Such a measurement would test the Weak Equivalence Principle (WEP) of Einstein's General Relativity. The gravitational acceleration of antihydrogen will be determined using a set of gravity measurement gratings (a Moiré deflectometer) and a position-sensitive detector. The vertical shift due to gravity of the falling antihydrogen atoms will be detected with a silicon strip detector, where the annihilation of the antihydrogen will take place. This poster presents part of the development process of this detector.

  17. Laryngeal sensitivity evaluation and dysphagia: Hospital Sírio-Libanês experience

    Directory of Open Access Journals (Sweden)

    Orlando Parise Junior

CONTEXT: Laryngeal sensitivity is important in the coordination of swallowing and the avoidance of aspiration. OBJECTIVE: To briefly review the physiology of swallowing and report on our experience with laryngeal sensitivity evaluation among patients presenting dysphagia. TYPE OF STUDY: Prospective. SETTING: Endoscopy Department, Hospital Sírio-Libanês. METHODS: Clinical data, endoscopic findings from the larynx and laryngeal sensitivity, as assessed via the Flexible Endoscopic Evaluation of Swallowing with Sensory Testing (FEESST) protocol (using the Pentax AP4000 system), were prospectively studied. The chi-squared and Student t tests were used to compare differences, which were considered significant if p ≤ 0.05. RESULTS: The study included 111 patients. A direct association was observed for hyperplasia and hyperemia of the posterior commissure region in relation to globus (p = 0.01) and regurgitation (p = 0.04). Hyperemia of the posterior commissure region had a direct association with sialorrhea (p = 0.03) and an inverse association with xerostomia (p = 0.03). There was a direct association between severe laryngeal sensitivity deficit and previous radiotherapy of the head and neck (p = 0.001). DISCUSSION: These data emphasize the association between proximal gastroesophageal reflux and chronic posterior laryngitis, and suggest that decreased laryngeal sensitivity could be a side effect of radiotherapy. CONCLUSIONS: Even considering that these results are preliminary, the endoscopic findings from laryngoscopy seem to be important in the diagnosis of proximal gastroesophageal reflux. Study of laryngeal sensitivity may have the potential to improve the knowledge and clinical management of dysphagia.

  18. Sensitivity of MENA Tropical Rainbelt to Dust Shortwave Absorption: A High Resolution AGCM Experiment

    KAUST Repository

    Bangalath, Hamza Kunhu

    2016-06-13

Shortwave absorption is one of the most important, but also the most uncertain, components of the direct radiative effect of mineral dust. Estimates from different observational and modeling studies span a broad range, and there is no consensus on the strength of absorption. To elucidate the sensitivity of the Middle East and North Africa (MENA) tropical summer rainbelt to a plausible range of uncertainty in dust shortwave absorption, AMIP-style global high-resolution (25 km) simulations are conducted with and without dust, using the High-Resolution Atmospheric Model (HiRAM). The simulations with dust comprise three cases, treating dust as a very efficient, a standard, and an inefficient absorber. Inter-comparison of these simulations shows that the response of the MENA tropical rainbelt is extremely sensitive to the strength of shortwave absorption. Further analyses reveal that the sensitivity of the rainbelt stems from the sensitivity of the multi-scale circulations that define it. The maximum response and sensitivity are predicted over the northern edge of the rainbelt, geographically the Sahel. The sensitivity of the responses over the Sahel, especially that of precipitation, is comparable to the mean state: locally, the response in precipitation reaches up to 50% of the mean when dust is assumed to be a very efficient absorber. Given that the Sahel has very high climate variability and is extremely vulnerable to changes in precipitation, the present study suggests the importance of reducing uncertainty in dust shortwave absorption for better simulation and interpretation of the Sahel climate.

  19. Modeling high-efficiency quantum dot sensitized solar cells.

    Science.gov (United States)

    González-Pedro, Victoria; Xu, Xueqing; Mora-Seró, Iván; Bisquert, Juan

    2010-10-26

With energy conversion efficiencies in continuous growth, quantum dot sensitized solar cells (QDSCs) are currently attracting increasing interest, but a complete model of these devices is lacking. Here, we compile the latest developments in this kind of cell in order to attain high-efficiency QDSCs, modeling the performance. CdSe QDs have been grown directly on a TiO2 surface by successive ionic layer adsorption and reaction to ensure high QD loading. ZnS coating and previous growth of CdS were analyzed. A polysulfide electrolyte and Cu2S counterelectrodes were used to provide higher photocurrents and fill factors, FF. Incident photon-to-current efficiency peaks as high as 82% under full 1 sun illumination were obtained, which practically overcomes the photocurrent limitation commonly observed in QDSCs. A high power conversion efficiency of up to 3.84% under full 1 sun illumination (Voc = 0.538 V, jsc = 13.9 mA/cm2, FF = 0.51) and the characterization and modeling carried out indicate that recombination has to be overcome for further improvement of QDSCs.

  20. Interferometrically stable, enclosed, spinning sample cell for spectroscopic experiments on air-sensitive samples.

    Science.gov (United States)

    Baranov, Dmitry; Hill, Robert J; Ryu, Jisu; Park, Samuel D; Huerta-Viga, Adriana; Carollo, Alexa R; Jonas, David M

    2017-01-01

    In experiments with high photon flux, it is necessary to rapidly remove the sample from the beam and to delay re-excitation until the sample has returned to equilibrium. Rapid and complete sample exchange has been a challenge for air-sensitive samples and for vibration-sensitive experiments. Here, a compact spinning sample cell for air and moisture sensitive liquid and thin film samples is described. The principal parts of the cell are a copper gasket sealed enclosure, a 2.5 in. hard disk drive motor, and a reusable, chemically inert glass sandwich cell. The enclosure provides an oxygen and water free environment at the 1 ppm level, as demonstrated by multi-day tests with sodium benzophenone ketyl radical. Inside the enclosure, the glass sandwich cell spins at ≈70 Hz to generate tangential speeds of 7-12 m/s that enable complete sample exchange at 100 kHz repetition rates. The spinning cell is acoustically silent and compatible with a ±1 nm rms displacement stability interferometer. In order to enable the use of the spinning cell, we discuss centrifugation and how to prevent it, introduce the cycle-averaged resampling rate to characterize repetitive excitation, and develop a figure of merit for a long-lived photoproduct buildup.

  1. Modeling variability in porescale multiphase flow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and with randomly varying injection rates. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.

  3. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
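
    The OFAT idea can be sketched in a few lines. A minimal illustration, assuming a hypothetical two-source linear mixing model rather than the paper's 13-version Bayesian setup: vary one input at a time around a baseline and record how the estimated source proportion moves.

```python
# One-factor-at-a-time (OFAT) sensitivity sketch for a hypothetical
# two-source mixing model (an illustration only, not the paper's
# 13-version Bayesian setup): a tracer value in sediment is the mix
# c = p*a + (1-p)*b of two source signatures a and b, and the source
# proportion p is recovered by inversion.

def apportion(c, a, b):
    """Invert the two-source mixing equation c = p*a + (1-p)*b for p."""
    return (c - b) / (a - b)

def ofat(baseline, deltas):
    """Perturb one input at a time; report the change in estimated p."""
    p0 = apportion(**baseline)
    effects = {}
    for name, delta in deltas.items():
        perturbed = dict(baseline)
        perturbed[name] += delta
        effects[name] = apportion(**perturbed) - p0
    return p0, effects

baseline = {"c": 40.0, "a": 100.0, "b": 20.0}  # tracer values, arbitrary units
p0, effects = ofat(baseline, {"c": 2.0, "a": 2.0, "b": 2.0})
print(p0)  # baseline proportion from source A: 0.25
for name in sorted(effects):
    print(name, round(effects[name], 4))
```

    Ranking the absolute effects then shows which input the apportionment is most sensitive to, which is the same question the paper asks of its error assumptions and structural choices.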

  4. Sensitivity of modeled ozone concentrations to uncertainties in biogenic emissions

    International Nuclear Information System (INIS)

    Roselle, S.J.

    1992-06-01

    The study examines the sensitivity of regional ozone (O3) modeling to uncertainties in biogenic emissions estimates. The United States Environmental Protection Agency's (EPA) Regional Oxidant Model (ROM) was used to simulate the photochemistry of the northeastern United States for the period July 2-17, 1988. An operational model evaluation showed that ROM had a tendency to underpredict O3 when observed concentrations were above 70-80 ppb and to overpredict O3 when observed values were below this level. On average, the model underpredicted daily maximum O3 by 14 ppb. Spatial patterns of O3, however, were reproduced favorably by the model. Several simulations were performed to analyze the effects of uncertainties in biogenic emissions on predicted O3 and to study the effectiveness of two strategies of controlling anthropogenic emissions for reducing high O3 concentrations. Biogenic hydrocarbon emissions were adjusted by a factor of 3 to account for the existing range of uncertainty in these emissions. The impact of biogenic emission uncertainties on O3 predictions depended upon the availability of NOx. In some extremely NOx-limited areas, increasing the amount of biogenic emissions decreased O3 concentrations. Two control strategies were compared in the simulations: (1) reduced anthropogenic hydrocarbon emissions, and (2) reduced anthropogenic hydrocarbon and NOx emissions. The simulations showed that hydrocarbon emission controls were more beneficial to the New York City area, but that combined NOx and hydrocarbon controls were more beneficial to other areas of the Northeast. Hydrocarbon controls were more effective as biogenic hydrocarbon emissions were reduced, whereas combined NOx and hydrocarbon controls were more effective as biogenic hydrocarbon emissions were increased.

  5. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs.
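
    Sequential bifurcation, the screening idea underlying the abstract, can be sketched as follows. This is a simplified recursive variant under the method's usual assumptions (nonnegative, roughly additive effects); the toy model and threshold are hypothetical stand-ins for a real computer code.

```python
# Sequential bifurcation sketch: screen for influential inputs of a
# black-box model by toggling groups of factors between low (0) and
# high (1) levels and bisecting any group whose aggregate effect
# exceeds a threshold.

def toy_model(x):
    # hidden coefficients (an assumption): only factors 0 and 5 matter
    coeffs = [4.0, 0.0, 0.1, 0.0, 0.0, 3.0, 0.05, 0.0]
    return sum(c * xi for c, xi in zip(coeffs, x))

def run(model, n, active):
    """Evaluate with the factors in `active` at their high level."""
    return model([1.0 if i in active else 0.0 for i in range(n)])

def bifurcate(model, n, group, threshold, found):
    """Recursively split `group` while its aggregate effect is large."""
    effect = run(model, n, set(group)) - run(model, n, set())
    if effect <= threshold:
        return
    if len(group) == 1:
        found.append(group[0])
        return
    mid = len(group) // 2
    bifurcate(model, n, group[:mid], threshold, found)
    bifurcate(model, n, group[mid:], threshold, found)

found = []
bifurcate(toy_model, 8, list(range(8)), threshold=0.5, found=found)
print(sorted(found))  # the influential factors: [0, 5]
```

    A production implementation evaluates cumulative groups so that runs are reused between splits; this recursive form trades a few extra model calls for brevity.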

  6. The erythrocyte sedimentation rates: some model experiments.

    Science.gov (United States)

    Cerny, L C; Cerny, E L; Granley, C R; Compolo, F; Vogels, M

    1988-01-01

    In order to obtain a better understanding of the erythrocyte sedimentation rate (ESR), several models are presented. The first directs attention to the importance of geometrical models to represent the structure of mixtures. Here it is our intention to understand the effect of the structure on the packing of red blood cells. In this part of the study, "Cheerios" (trademark General Mills) are used as a macroscopic model. It is interesting that a random sampling of "Cheerios" has the same volume distribution curve that is found for erythrocytes with a Coulter Sizing Apparatus. In order to examine the effect of rouleaux formation, the "Cheerios" are stacked one on top of another and then glued. Rouleaux of 2, 3, 4, 5, 7, and 10 discs were used. In order to examine a more realistic biological model, the experiments of Dintenfass were used. These investigations were performed in a split-capillary photo viscometer using whole blood from patients with a variety of diseases. The novel part of this research is the fact that the work was performed at 1 g and at near zero gravity in the space shuttle "Discovery." The size of the aggregates and/or rouleaux clearly showed a dependence upon the gravity of the experiment. The purpose of this model was to examine the condition of self-similarity and fractal behavior. Calculations are reported which clearly indicate general agreement in the magnitude of the fractal dimension from the "Cheerios" model and the "Discovery" experiment with that determined with the automatic sedimentimeter. The final aspect of this work examines the surface texture of the sedimentation tube. A series of tubes were designed with "roughened" interiors. A comparison of the sedimentation rates clearly indicates more rapid settling in "roughened" tubes than in ones with a smooth interior surface.
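
    The fractal-dimension comparison above rests on estimates such as the box-counting dimension. A sketch on a synthetic data set (a chaos-game Sierpinski triangle standing in for the aggregate geometry; none of this is the paper's sedimentation data):

```python
# Box-counting dimension sketch of the kind used to quantify fractal
# aggregate structure: count occupied grid boxes at two scales and
# take the slope of log N(eps) against log (1/eps).
import math
import random

def sierpinski_points(n, seed=0):
    """Generate points on the Sierpinski triangle via the chaos game."""
    rng = random.Random(seed)
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y, pts = 0.1, 0.1, []
    for _ in range(n):
        vx, vy = rng.choice(verts)
        x, y = (x + vx) / 2, (y + vy) / 2
        pts.append((x, y))
    return pts

def box_count(pts, eps):
    """Count eps-sized grid boxes containing at least one point."""
    return len({(int(x / eps), int(y / eps)) for x, y in pts})

pts = sierpinski_points(50000)
n1, n2 = box_count(pts, 1 / 8), box_count(pts, 1 / 64)
dim = math.log(n2 / n1) / math.log(8)  # slope between the two scales
print(round(dim, 2))  # should land near log 3 / log 2, about 1.58
```

    With real data one would fit the slope over several scales rather than two, but the two-scale version already shows the self-similarity the abstract appeals to.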

  7. The Dynamic Anaerobic Reactor & Integrated Energy System (DARIES) model: model development, validation, and sensitivity analysis.

    Science.gov (United States)

    Brouwer, A F; Grimberg, S J; Powers, S E

    2012-12-01

    The Dynamic Anaerobic Reactor & Integrated Energy System (DARIES) model has been developed as a biogas and electricity production model of a dairy farm anaerobic digester system. DARIES, which incorporates the Anaerobic Digester Model No. 1 (ADM1) and simulations of both combined heat and power (CHP) and digester heating systems, may be run in either completely mixed or plug flow reactor configurations. DARIES biogas predictions were shown to be statistically coincident with measured data from eighteen full-scale dairy operations in the northeastern United States. DARIES biogas predictions were more accurate than predictions made by the U.S. AgSTAR model FarmWare 3.4. DARIES electricity production predictions were verified against data collected by the NYSERDA DG/CHP Integrated Data System. Preliminary sensitivity analysis demonstrated that DARIES output was most sensitive to influent flow rate, chemical oxygen demand (COD), and biodegradability, and somewhat sensitive to hydraulic retention time and digester temperature.

  8. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach for sensitivity and uncertainty assessment for waste management studies has been implemented. First general contribution analysis is done through a regular interpretation of inventory and impact...

  9. Performance Modeling of Mimosa pudica Extract as a Sensitizer for Solar Energy Conversion

    Directory of Open Access Journals (Sweden)

    M. B. Shitta

    2016-01-01

    Full Text Available An organic material is proposed as a sustainable sensitizer and a replacement for the synthetic sensitizer in dye-sensitized solar cell technology. Using the liquid extract from the leaf of a plant called Mimosa pudica (M. pudica) as a sensitizer, the performance characteristics of the extract of M. pudica are investigated. The photo-anode of each solar cell sample is passivated with a self-assembled monolayer (SAM) from a set of four materials: alumina, formic acid, gelatine, and oxidized starch. Three sets of five samples of an M. pudica–based solar cell are produced, with the fifth sample used as the control experiment. Each of the solar cell samples has an active area of 0.3848 cm2. A two-dimensional finite volume method (FVM) is used to model the transport of ions within the monolayer of the solar cell. The performance of the experimentally fabricated solar cells compares qualitatively with those obtained from the literature and with the simulated solar cells. The highest efficiency, 3%, is obtained from the use of the extract as a sensitizer. It is anticipated that comparison of the performance characteristics, together with further research on the concentration of M. pudica extract, will enhance the development of a reliable and competitive organic solar cell. Further research on the concentration of the extract and the electrolyte used in this study is also recommended, for a possible improved performance of the cell.
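
    The finite volume method named above can be illustrated in one dimension. A minimal explicit FVM diffusion update with zero-flux boundaries; the grid spacing, time step, and diffusivity below are illustrative assumptions, not values from the study.

```python
# A minimal 1D finite-volume diffusion step, the kind of discretization
# a 2D FVM ion-transport model builds on: face fluxes from Fick's law,
# cell updates from the flux divergence, zero-flux walls.

def fvm_step(c, D, dx, dt):
    """One explicit FVM update of cell concentrations c."""
    flux = [0.0]  # no flux through the left wall
    for i in range(len(c) - 1):
        flux.append(-D * (c[i + 1] - c[i]) / dx)  # face flux (Fick's law)
    flux.append(0.0)  # no flux through the right wall
    # each cell loses its net outflow, scaled by dt/dx
    return [ci - dt / dx * (flux[i + 1] - flux[i]) for i, ci in enumerate(c)]

c = [1.0, 0.0, 0.0, 0.0, 0.0]    # ions initially in the first cell
for _ in range(2000):
    c = fvm_step(c, D=1e-2, dx=0.1, dt=0.1)
print([round(x, 3) for x in c])  # spreads toward the uniform value 0.2
print(round(sum(c), 6))          # total mass is conserved: 1.0
```

    Because the update is written in terms of face fluxes, conservation holds exactly by construction, which is the main reason FVM is favored for transport problems.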

  10. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, together with their non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both concrete composition and clay mineralogical assemblies, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of concrete phases, (2) the type of clayey materials and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites).

  11. GCR Environmental Models I: Sensitivity Analysis for GCR Environments

    Science.gov (United States)

    Slaba, Tony C.; Blattnig, Steve R.

    2014-01-01

    Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate the models' free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR ions with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to reproduce the ACE/CRIS data very accurately while inducing very different effective dose values behind shielding.

  12. Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2009-04-20

    Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, an inadequate sample size can badly degrade the accuracy of the results. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features of the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We then extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.
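
    The adapt-until-accurate idea can be sketched in simplified form: a plain pick-freeze first-order estimator whose sample size doubles until the indices stop moving. The paper refines stratified and replicated orthogonal-array designs instead; this toy version, on an assumed linear test function, only conveys the iterative loop.

```python
# Self-refining first-order variance-based sensitivity sketch:
# Sobol'-style pick-freeze estimates with the sample size doubled
# until consecutive estimates agree to a tolerance.
import random

def model(x):
    return x[0] + 2.0 * x[1]  # toy function: true S1 = 0.2, S2 = 0.8

def first_order_indices(model, dim, n, rng):
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [model(a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(dim):
        # freeze coordinate i from A, resample the rest from B
        fABi = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yb for ya, yb in zip(fA, fABi)) / n - mean ** 2
        S.append(cov / var)
    return S

def adaptive(model, dim, tol=0.02, seed=1):
    rng, n = random.Random(seed), 256
    prev = first_order_indices(model, dim, n, rng)
    while True:
        n *= 2
        cur = first_order_indices(model, dim, n, rng)
        if max(abs(a - b) for a, b in zip(cur, prev)) < tol:
            return n, cur
        prev = cur

n, S = adaptive(model, 2)
print(n, [round(s, 2) for s in S])
```

    The stopping rule here is a crude self-assessment; the paper's contribution is precisely a sounder version of that check, built on stratified designs.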

  13. Integrated Experiment and Modeling of Insensitive High Explosives

    Science.gov (United States)

    Stewart, D. Scott; Lambert, David E.; Yoo, Sunhee; Lieber, Mark; Holman, Steven

    2009-12-01

    New design paradigms for insensitive high explosives are being sought for use in munitions applications that require enhanced safety, reliability and performance. We describe recent work of our group that uses an integrated approach to develop predictive models, guided by experiments. Insensitive explosives can have relatively longer detonation reaction zones and slower reaction rates than their sensitive counterparts. We employ reactive flow models that are constrained by detonation shock dynamics (DSD) to pose candidate predictive models. We discuss the effect of varying the pressure-dependent reaction rate exponent and reaction order on the length of the supporting reaction zone, the detonation velocity-curvature relation, the computed critical energy required for initiation, and the relation between the diameter effect curve and the corresponding normal detonation velocity-curvature relation.

  14. Debris Thermal Hydraulics Modeling of QUENCH Experiments

    International Nuclear Information System (INIS)

    Kisselev, Arcadi E.; Kobelev, Gennadii V.; Strizhov, Valerii F.; Vasiliev, Alexander D.

    2006-01-01

    Porous debris formation and behavior in QUENCH experiments (QUENCH-02, QUENCH-03) plays a considerable role, and its adequate modeling is important for thermal analysis. This work is aimed at the development of a numerical module able to model the thermal hydraulics and heat transfer phenomena occurring during the high-temperature stage of a severe accident with the formation of a debris region and molten pool. The original approach for debris evolution is developed from classical principles using a set of parameters including debris porosity; average particle diameter; temperatures and mass fractions of solid, liquid and gas phases; specific interface areas between different phases; effective thermal conductivity of each phase, including radiative heat conductivity; and mass and energy fluxes through the interfaces. The debris model is based on the system of continuity, momentum and energy conservation equations, which considers the dynamics of volume-averaged velocities and temperatures of the fluid, solid and gaseous phases of porous debris. Different mechanisms of debris formation are considered, including degradation of fuel rods according to temperature criteria, taking into consideration correlations between rod layer thicknesses; degradation of rod layer structure due to thermal expansion of melted materials inside intact rod cladding; debris formation due to a sharp temperature drop of previously melted material during reflood; and transition to debris of material from elements lying above. The porous debris model was implemented in the best-estimate numerical code RATEG/SVECHA/HEFEST, developed for modeling thermal hydraulics and severe accident phenomena in a reactor. The model is used for calculation of QUENCH experiments. The results obtained by the model are compared to experimental data concerning different aspects of thermal behavior: thermal hydraulics of porous debris, radiative heat transfer in a porous medium, the generalized melting and refreezing

  15. Model of an Evaporating Drop Experiment

    Science.gov (United States)

    Rodriguez, Nicolas

    2017-11-01

    A computational model of an experimental procedure to measure vapor distributions surrounding sessile drops is developed to evaluate the uncertainty in the experimental results. Methanol, which is expected to have predominantly diffusive vapor transport, is chosen as a validation test for our model. The experimental process first uses a Fourier transform infrared spectrometer to measure the absorbance along lines passing through the vapor cloud. Since the measurement contains some errors, our model allows adding random noise to the computational integrated absorbance to mimic this. The resulting data are then interpolated before passing through a computed tomography routine to generate the vapor distribution. Next, the gradients of the vapor distribution are computed along a given control volume surrounding the drop so that the diffusive flux can be evaluated as the net rate of diffusion out of the control volume. Our model of methanol evaporation shows that the accumulated errors of the whole experimental procedure affect the diffusive fluxes at different control volumes and are sensitive to how the noisy integrated absorbance data are interpolated. This indicates the importance of investigating a variety of data-fitting methods to choose which best presents the data. Trinity University Mach Fellowship.

  16. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    Science.gov (United States)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

    Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by the unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long-term planning in the Great Lakes region.
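
    The contrast between debiasing procedures can be made concrete with a toy calculation: a single additive shift derived from the annual mean versus one shift per month that respects the seasonal cycle. All numbers below are invented for illustration, not Great Lakes data.

```python
# Two debiasing flavors sketched on made-up monthly climatologies:
# (1) annual-mean debiasing applies one shift to every month;
# (2) seasonal debiasing applies a separate shift per month.

obs = [30, 32, 45, 60, 80, 90, 85, 75, 60, 50, 40, 33]  # observed monthly values
mod = [50, 50, 50, 55, 60, 70, 70, 65, 60, 55, 52, 50]  # raw model climatology
fut = [52, 53, 55, 58, 64, 75, 74, 68, 62, 57, 54, 51]  # model projection

# 1) annual-mean debiasing: one additive shift for all months
shift = sum(obs) / 12 - sum(mod) / 12
annual = [f + shift for f in fut]

# 2) seasonal debiasing: a month-by-month additive shift
seasonal = [f + (o - m) for f, o, m in zip(fut, obs, mod)]

print(round(shift, 2))
print([round(v, 1) for v in annual[:3]])
print([round(v, 1) for v in seasonal[:3]])
```

    Because the raw model is too wet in winter and too dry in summer, the single annual shift nearly cancels to zero and leaves the seasonal errors in place, while the monthly shifts remove them; aggregated through a routing model, such differences are exactly what drives the divergent water level forecasts described above.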

  17. Argonne Bubble Experiment Thermal Model Development III

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-11

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development” and “Argonne Bubble Experiment Thermal Model Development II”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at beam power levels between 6 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was recorded. The previous report2 described the Monte-Carlo N-Particle (MCNP) calculations and Computational Fluid Dynamics (CFD) analysis performed on the as-built solution vessel geometry. The CFD simulations in the current analysis were performed using Ansys Fluent, Ver. 17.2. The same power profiles determined from MCNP calculations in earlier work were used for the 12 and 15 kW simulations. The primary goal of the current work is to calculate the temperature profiles for the 12 and 15 kW cases using reasonable estimates for the gas generation rate, based on images of the bubbles recorded during the irradiations. Temperature profiles resulting from the CFD calculations are compared to experimental measurements.

  18. Sensitivity analysis of an individual-based model for simulation of influenza epidemics.

    Directory of Open Access Journals (Sweden)

    Elaine O Nsoesie

    Full Text Available Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic. In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions) with demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across social networks investigated in this study and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty
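
    The three outcome measures used above (time to peak, peak infected proportion, total attack rate) can be reproduced on a minimal compartmental stand-in for the individual-based model. A deterministic SIR sketch with illustrative parameters, not the study's calibrated values, showing how a small change in transmissibility shifts all three measures:

```python
# Deterministic SIR proxy: forward-Euler integration, tracking the
# three sensitivity measures from the abstract. beta is the
# transmission rate, gamma the recovery rate (1/mean infectious period).

def simulate(beta, gamma, days=300, i0=1e-4, dt=1.0):
    s, i, r = 1.0 - i0, i0, 0.0
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        if i > peak:
            peak, peak_day = i, day
    attack_rate = 1.0 - s
    return peak_day, peak, attack_rate

base = simulate(beta=0.30, gamma=0.20)       # R0 = 1.5
perturbed = simulate(beta=0.33, gamma=0.20)  # +10% transmissibility
print("base     :", base[0], round(base[1], 4), round(base[2], 3))
print("perturbed:", perturbed[0], round(perturbed[1], 4), round(perturbed[2], 3))
```

    Even this crude proxy shows the pattern the paper quantifies rigorously: a 10% change in transmissibility brings the peak forward, raises it, and increases the attack rate.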

  19. Evaluating the influence of selected parameters on sensitivity of a numerical model of solidification

    OpenAIRE

    N. Sczygiol; R. Dyja

    2007-01-01

    This paper evaluates the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built using an enthalpy formulation for solidification and an intermediate solid-fraction growth model. The model sensitivity is studied with the Morris method, one of the global sensitivity analysis methods.
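
    The Morris method referenced above screens inputs by averaging elementary effects over random trajectories. A compact sketch on an assumed toy function (not the solidification model):

```python
# Morris screening sketch: for each trajectory, perturb one factor at a
# time by delta and record the elementary effect; mu* (the mean absolute
# effect) ranks the factors.
import random

def toy_model(x):
    # assumed test function: factor 0 dominates, factor 1 is negligible
    return 3.0 * x[0] + 0.1 * x[1] + 2.0 * x[0] * x[2]

def morris(model, dim, trajectories=20, delta=0.25, seed=0):
    rng = random.Random(seed)
    mu_star = [0.0] * dim
    for _ in range(trajectories):
        # start inside [0, 1 - delta] so the +delta step stays in [0, 1]
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y = model(x)
        for i in rng.sample(range(dim), dim):  # random factor order
            x[i] += delta
            y_new = model(x)
            mu_star[i] += abs(y_new - y) / delta
            y = y_new
    return [m / trajectories for m in mu_star]

mu_star = morris(toy_model, 3)
print([round(m, 2) for m in mu_star])  # ranking: factor 0 > factor 2 > factor 1
```

    Each trajectory costs only dim + 1 model runs, which is why Morris is the usual choice for screening expensive FEM models like the one in the paper.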

  20. A common control group - optimising the experiment design to maximise sensitivity.

    Directory of Open Access Journals (Sweden)

    Simon Bate

    Full Text Available Methods for choosing an appropriate sample size in animal experiments have received much attention in the statistical and biological literature. Due to ethical constraints the number of animals used is always reduced where possible. However, as the number of animals decreases so the risk of obtaining inconclusive results increases. By using a more efficient experimental design we can, for a given number of animals, reduce this risk. In this paper two popular cases are considered, where planned comparisons are made to compare treatments back to control and when researchers plan to make all pairwise comparisons. By using theoretical and empirical techniques we show that for studies where all pairwise comparisons are made the traditional balanced design, as suggested in the literature, maximises sensitivity. For studies that involve planned comparisons of the treatment groups back to the control group, which are inherently more sensitive due to the reduced multiple testing burden, the sensitivity is maximised by increasing the number of animals in the control group while decreasing the number in the treated groups.
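
    The allocation result in the final sentence follows from the classic square-root rule for comparisons back to a shared control: the variance of each treated-minus-control contrast is minimised when the control group is about sqrt(k) times the size of each treated group. A sketch with hypothetical animal numbers:

```python
# Allocation sketch for comparisons back to a shared control group.
# The total of 60 animals and k = 4 treatments are hypothetical.
import math

def allocate(total, k):
    """Split `total` animals among k treated groups plus one control,
    using the square-root rule n_control = sqrt(k) * n_treated."""
    n_t = total / (k + math.sqrt(k))
    n_c = total - k * n_t
    return n_c, n_t

def contrast_variance(n_c, n_t, sigma2=1.0):
    """Variance of (treated mean - control mean)."""
    return sigma2 / n_t + sigma2 / n_c

total, k = 60, 4
n_c, n_t = allocate(total, k)
balanced = contrast_variance(total / (k + 1), total / (k + 1))
optimal = contrast_variance(n_c, n_t)
print(round(n_c, 1), round(n_t, 1))           # the control gets the larger share
print(round(balanced, 4), round(optimal, 4))  # the unbalanced design wins
```

    For all-pairwise comparisons no contrast is privileged, so the balanced design is best, exactly as the abstract states.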

  1. A common control group - optimising the experiment design to maximise sensitivity.

    Science.gov (United States)

    Bate, Simon; Karp, Natasha A

    2014-01-01

    Methods for choosing an appropriate sample size in animal experiments have received much attention in the statistical and biological literature. Due to ethical constraints the number of animals used is always reduced where possible. However, as the number of animals decreases so the risk of obtaining inconclusive results increases. By using a more efficient experimental design we can, for a given number of animals, reduce this risk. In this paper two popular cases are considered, where planned comparisons are made to compare treatments back to control and when researchers plan to make all pairwise comparisons. By using theoretical and empirical techniques we show that for studies where all pairwise comparisons are made the traditional balanced design, as suggested in the literature, maximises sensitivity. For studies that involve planned comparisons of the treatment groups back to the control group, which are inherently more sensitive due to the reduced multiple testing burden, the sensitivity is maximised by increasing the number of animals in the control group while decreasing the number in the treated groups.
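    The allocation rule described in this abstract follows from minimising the variance of each treatment-vs-control contrast, proportional to 1/n_c + 1/n_t, for a fixed total number of animals: the shared control gets sqrt(k) times as many animals as each of the k treated groups. A minimal sketch (illustrative, not the authors' code):

```python
import math

def allocate(total_n, k):
    """Split total_n subjects between one control and k treated groups.

    The square-root rule n_control = sqrt(k) * n_treated minimises the
    variance (1/n_c + 1/n_t) of each treatment-vs-control contrast."""
    n_t = total_n / (k + math.sqrt(k))
    return round(math.sqrt(k) * n_t), round(n_t)

# With 60 animals and 4 treated groups: control 20, each treated group 10,
# versus 12 per group for the balanced design.
```

    For 60 animals and k = 4, the contrast variance drops from 1/12 + 1/12 ≈ 0.167 (balanced) to 1/20 + 1/10 = 0.15, i.e. the unbalanced design is more sensitive for the same total.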

  2. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    Science.gov (United States)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air Pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at the high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and to improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from it for seven observation scenarios and three instrument systems.

  3. The 'OMITRON' and 'MODEL OMITRON' proposed experiments

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-12-01

    In the present paper the main features of the OMITRON and MODEL OMITRON proposed high-field tokamaks are illustrated. Of the two, OMITRON is an ambitious experiment, aimed at attaining plasma burning conditions. Its key physics issues are discussed, and a comparison is carried out with the corresponding physics features of ignition experiments such as IGNITOR and ITER. Chief asset and chief challenge, in both OMITRON and MODEL OMITRON, is the conspicuous 20 tesla toroidal field value on the plasma axis. The advanced engineering features that permit such a reward in terms of toroidal magnet performance are discussed in convenient depth and detail. As for the small, propaedeutic device MODEL OMITRON, among its goals one must rank the in vivo testing of key engineering issues that are vital for the larger and more expensive parent device. Besides that, however, as indicated by ad hoc scoping studies, the smaller machine is found capable of a number of quite interesting physics investigations in its own right.

  4. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that, in addition to vapour pressure, the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes of degradation and uptake in the plant become more important. Copyright © 2017 Elsevier B.V. All rights reserved.
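    The two added processes, ventilation and in-air transformation, act as first-order losses on a well-mixed indoor air concentration. A toy box-model sketch of that balance (all parameter values are hypothetical; this is not the modified PEARL code):

```python
# Well-mixed greenhouse air box: dC/dt = S/V - (vent + k_air) * C
# S: volatilisation flux, V: air volume, vent: air-exchange rate,
# k_air: transformation rate in air. All values are made up.
S = 2.0e-6       # kg h^-1 volatilised from the crop surface
V = 3000.0       # m^3 greenhouse air volume
vent = 0.5       # h^-1 ventilation (air-exchange) rate
k_air = 0.05     # h^-1 transformation rate in air

loss = vent + k_air
C_ss = (S / V) / loss                  # steady-state concentration, kg m^-3

# Forward-Euler integration from C(0) = 0 over 24 h
C, dt = 0.0, 0.001
for _ in range(int(24.0 / dt)):
    C += dt * (S / V - loss * C)
# After 24 h (many multiples of 1/loss ~ 1.8 h) C has converged to C_ss.
```

    The steady-state level scales inversely with the ventilation rate, which is one way to see why that parameter dominates the sensitivity on the day of application.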

  5. Studying the physics potential of long-baseline experiments in terms of new sensitivity parameters

    International Nuclear Information System (INIS)

    Singh, Mandip

    2016-01-01

    We investigate physics opportunities to constrain the leptonic CP-violation phase δ_CP through numerical analysis of the working neutrino oscillation probability parameters, in the context of long-baseline experiments. Numerical analysis of two parameters, the "transition probability δ_CP phase sensitivity parameter (A_M)" and the "CP-violation probability δ_CP phase sensitivity parameter (A_CP)", as functions of beam energy and/or baseline has been carried out. It is an elegant technique to broadly analyze different experiments to constrain the δ_CP phase and also to investigate the mass hierarchy in the leptonic sector. Positive and negative values of the parameter A_CP, corresponding to either hierarchy in specific beam energy ranges, could be a very promising way to explore the mass hierarchy and the δ_CP phase. The keys to more robust bounds on the δ_CP phase are improvements of the detection techniques involved, to explore lower energies and relatively long baseline regions with better experimental accuracy.

  6. A pure shift experiment with increased sensitivity and superior performance for strongly coupled systems

    Science.gov (United States)

    Ilgen, Julian; Kaltschnee, Lukas; Thiele, Christina M.

    2018-01-01

    Motivated by the persisting need for enhanced resolution in solution-state NMR spectra, pure shift techniques such as Zangger-Sterk decoupling have recently attracted widespread interest. These techniques for homonuclear decoupling offer enhanced resolution in one- and multidimensional proton-detected experiments by simplifying multiplet structures. In this work, a modification to the popular Zangger-Sterk technique PEPSIE (Perfect Echo Pure Shift Improved Experiment) is presented, which decouples pairs of spins even if they share the same volume element. This in turn can drastically improve the sensitivity compared to classical Zangger-Sterk decoupling, as larger volume elements can be used to collect the detected signal. Most interestingly, even in the presence of moderate strong coupling, the PEPSIE experiment produces clean and largely artifact-free spectra. In order to better understand this, to us initially surprising, behaviour we performed analyses using numerical simulations and derived an (approximate) analytical solution from the density matrix formalism. We show that this experiment is particularly suitable for studying samples with strong signal clustering, a situation which can render classic Zangger-Sterk decoupling inefficient.

  7. Methane emissions from rice paddies. Experiments and modelling

    International Nuclear Information System (INIS)

    Van Bodegom, P.M.

    2000-01-01

    This thesis describes model development and experimentation on the comprehension and prediction of methane (CH4) emissions from rice paddies. The large spatial and temporal variability in CH4 emissions and the dynamic non-linear relationships between the processes underlying CH4 emissions impair the applicability of empirical relations. Mechanistic concepts are therefore the starting point of analysis throughout the thesis. The process of CH4 production was investigated by soil slurry incubation experiments at different temperatures and with additions of different electron donors and acceptors. Temperature influenced conversion rates and the competitiveness of microorganisms. The experiments were used to calibrate and validate a mechanistic model of CH4 production that describes competition for acetate and H2/CO2, inhibition effects and chemolithotrophic reactions. The redox sequence leading eventually to CH4 production was well predicted by the model, calibrating only the maximum conversion rates. Gas transport through paddy soil and rice plants was quantified by experiments in which the transport of SF6 was monitored continuously by photoacoustics. A mechanistic model of gas transport in a flooded rice system based on diffusion equations was validated by these experiments and could explain why most gases are released via plant-mediated transport. Variability in root distribution led to highly variable gas transport. Experiments showed that CH4 oxidation in the rice rhizosphere was oxygen (O2) limited. Rice rhizospheric O2 consumption was dominated by chemical iron oxidation and by heterotrophic and methanotrophic respiration. The most abundant methanotrophs and heterotrophs were isolated and kinetically characterised. Based upon these experiments it was hypothesised that CH4 oxidation mainly occurred at microaerophilic, low-acetate conditions not very close to the root surface. A mechanistic rhizosphere model that combined production and consumption of O2, carbon and iron

  8. A model for perception-based identification of sensitive skin.

    Science.gov (United States)

    Richters, R J H; Uzunbajakava, N E; Hendriks, J C M; Bikker, J-W; van Erp, P E J; van de Kerkhof, P C M

    2017-02-01

    Given the high prevalence of sensitive skin (SS), the lack of strong evidence on pathomechanisms, of consensus on associated symptoms, of proof of the existence of 'general' SS and of tools to recruit subjects, this topic attracts increasing research attention. The aim was to create a model for selecting subjects in studies on SS by identifying a complete set of self-reported SS characteristics and the factors that discriminatively describe it. A survey (n = 3058) was conducted, comprising questions regarding socio-demographics, atopy, skin characteristics, personal care, degree of self-assessed SS and subjective and objective reactions to endogenous and exogenous factors. Exploratory factor analysis on 481 questionnaires was performed to identify underlying dimensions, and multivariate logistic regression to find variables contributing to the likelihood of reporting SS. The prevalence of SS was found to be 41%, and 56% of SS subjects report a concomitant atopic condition. The most discriminative variables were the eliciting factors toiletries and emotions, not specific skin symptoms in general. Triggers of different origins seem to elicit SS, and it is not defined by concomitant skin diseases only, suggesting the existence of 'general' SS. A multifactorial questionnaire could be a better diagnostic tool than a one-dimensional provocative test. © 2016 European Academy of Dermatology and Venereology.

  9. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.

  10. Feedbacks, climate sensitivity, and the limits of linear models

    Science.gov (United States)

    Rugenstein, M.; Knutti, R.

    2015-12-01

    The term "feedback" is used ubiquitously in climate research, but implies varied meanings in different contexts. From a specific process that locally affects a quantity, to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120 member) ensemble GCM and EMIC step forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method overcoming internal variability and initial condition problems we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing feedback framework and a better quantification of feedbacks on various timescales and spatial scales remains a high priority in order to better understand past and predict future changes in the climate system.
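    In the linear forcing-feedback framework discussed here, the top-of-atmosphere imbalance is N = F - λT, and a constant feedback parameter λ is conventionally estimated by regressing N on T in a step-forcing run (a constant λ is exactly the assumption the abstract calls into question). A minimal synthetic sketch with made-up values of F and λ, not GCM output:

```python
import math
import random

# Synthetic step-forcing experiment obeying N(t) = F - lam*T(t) + noise;
# F and lam are assumed toy values, not model output.
F, lam = 3.7, 1.1                      # W m^-2 and W m^-2 K^-1
rng = random.Random(0)
T = [3.3 * (1.0 - math.exp(-t / 20.0)) for t in range(150)]
N = [F - lam * Ti + rng.gauss(0.0, 0.1) for Ti in T]

# Ordinary least squares of N on T: slope = -lambda, intercept = F
n = len(T)
mT, mN = sum(T) / n, sum(N) / n
slope = (sum((t - mT) * (q - mN) for t, q in zip(T, N))
         / sum((t - mT) ** 2 for t in T))
lam_hat = -slope
warming_eq = F / lam_hat               # equilibrium warming for forcing F
```

    A state- or forcing-dependent feedback shows up in such regressions as curvature in the N(T) scatter, so a single fitted slope then misestimates the equilibrium response.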

  11. Sex and smoking sensitive model of radon induced lung cancer

    International Nuclear Information System (INIS)

    Zhukovsky, M.; Yarmoshenko, I.

    2006-01-01

    Inhalation exposure to radon and radon progeny is recognized to cause lung cancer. The only strong evidence of health effects of radon exposure comes from epidemiological studies among underground miners; no single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were therefore developed exclusively by extrapolation of the miners' data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained from systematic analysis of pooled data from individual residential radon studies, and two such analyses have recently been published. The available new and previous data from epidemiological studies of workers and the general population exposed to radon and other sources of ionizing radiation allow gaps to be filled in our knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on the data on external radiation exposure in Hiroshima and Nagasaki, and on the data on external exposure at the Mayak nuclear facility. For the non-corrected data of the pooled European and North American studies an increased sensitivity of females to radon exposure is observed. The mean value of ks for the non-corrected data, obtained from an independent source, is in very good agreement with the LSS study and the Mayak plutonium workers data. Analysis of the corrected data of the pooled studies showed little influence of sex on the ERR value. The most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  12. A proposed experiment on ball lightning model

    Energy Technology Data Exchange (ETDEWEB)

    Ignatovich, Vladimir K., E-mail: v.ignatovi@gmail.com [Frank Laboratory for Neutron Physics, Joint Institute for Nuclear Research, Dubna 141980 (Russian Federation); Ignatovich, Filipp V. [1565 Jefferson Rd., 420, Rochester, NY 14623 (United States)

    2011-09-19

    Highlights: → We propose to put a glass sphere inside an excited gas. → Then to inject a light ray into the glass in a whispering gallery mode. → If the light is resonant with the gas excitation, it will be amplified at every reflection. → Within milliseconds the light in the glass will be amplified enough to melt the glass. → A liquid shell held together by electrostriction forces is the ball lightning model. -- Abstract: We propose an experiment for strong light amplification at multiple total reflections from an active gaseous medium.

  13. Experiments for foam model development and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F. (Honeywell Federal Manufacturing and Technologies, Kansas City Plant, Kansas City, MO); Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  14. Experience with the CMS Event Data Model

    Energy Technology Data Exchange (ETDEWEB)

    Elmer, P.; /Princeton U.; Hegner, B.; /CERN; Sexton-Kennedy, L.; /Fermilab

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  15. Implementation of the model project: Ghanaian experience

    International Nuclear Information System (INIS)

    Schandorf, C.; Darko, E.O.; Yeboah, J.; Asiamah, S.D.

    2003-01-01

    Upgrading of the legal infrastructure has been the most time consuming and frustrating part of the implementation of the Model project due to the unstable system of governance and rule of law coupled with the low priority given to legislation on technical areas such as safe applications of Nuclear Science and Technology in medicine, industry, research and teaching. Dwindling Governmental financial support militated against physical and human resource infrastructure development and operational effectiveness. The trend over the last five years has been to strengthen the revenue generation base of the Radiation Protection Institute through good management practices to ensure a cost effective use of the limited available resources for a self-reliant and sustainable radiation and waste safety programme. The Ghanaian experience regarding the positive and negative aspects of the implementation of the Model Project is highlighted. (author)

  16. Bucky gel actuator displacement: experiment and model

    International Nuclear Information System (INIS)

    Ghamsari, A K; Zegeye, E; Woldesenbet, E; Jin, Y

    2013-01-01

    Bucky gel actuator (BGA) is a dry electroactive nanocomposite which is driven with a few volts. BGA’s remarkable features make this tri-layered actuator a potential candidate for morphing applications. However, most of these applications would require a better understanding of the effective parameters that influence the BGA displacement. In this study, various sets of experiments were designed to investigate the effect of several parameters on the maximum lateral displacement of BGA. Two input parameters, voltage and frequency, and three material/design parameters, carbon nanotube type, thickness, and weight fraction of constituents were selected. A new thickness ratio term was also introduced to study the role of individual layers on BGA displacement. A model was established to predict BGA maximum displacement based on the effect of these parameters. This model showed good agreement with reported results from the literature. In addition, an important factor in the design of BGA-based devices, lifetime, was investigated. (paper)

  17. Forces between permanent magnets: experiments and model

    International Nuclear Information System (INIS)

    González, Manuel I

    2017-01-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected. (paper)

  18. Forces between permanent magnets: experiments and model

    Science.gov (United States)

    González, Manuel I.

    2017-03-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected.
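    The r^-4 behaviour at large distances is what the point-dipole approximation predicts for two coaxial magnets, F(r) = 3 μ0 m1 m2 / (2π r^4). A quick numerical check (the magnetic moments are made-up values, not those of the magnets in the paper):

```python
import math

# Point-dipole (far-field) approximation for two coaxial magnets:
#   F(r) = 3 * mu0 * m1 * m2 / (2 * pi * r**4)
# The moments m1, m2 are made-up values, not the paper's magnets.
mu0 = 4.0e-7 * math.pi          # vacuum permeability, T m/A
m1 = m2 = 0.5                   # magnetic moments, A m^2

def force(r):
    """Force (N) between coaxial point dipoles at separation r (m)."""
    return 3.0 * mu0 * m1 * m2 / (2.0 * math.pi * r ** 4)

# Doubling the separation should reduce the force by 2**4 = 16
ratio = force(0.05) / force(0.10)
```

    At short range real magnets deviate from this law because the finite magnet size matters, which is why the setup probes the scaling only at large gauge readings.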

  19. Multivariate models for skin sensitization hazard and potency

    Science.gov (United States)

    One of the top priorities being addressed by ICCVAM is the identification and validation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biological events have been well characterized in an adverse outcome pathw...

  20. Model sensitivity studies of the decrease in atmospheric carbon tetrachloride

    Directory of Open Access Journals (Sweden)

    M. P. Chipperfield

    2016-12-01

    Full Text Available Carbon tetrachloride (CCl4) is an ozone-depleting substance, which is controlled by the Montreal Protocol and for which the atmospheric abundance is decreasing. However, the currently observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74 % of total), but a reported 10 % uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9 % of total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17 % of total) has the largest impact on modelled CCl4 decay due to its sizeable contribution to CCl4 loss and its large lifetime uncertainty range (147 to 241 years). With an assumed CCl4 emission rate of 39 Gg year^-1, the reference simulation with the best estimate of loss processes still underestimates the observed CCl4 (i.e. overestimates its decay) over the past 2 decades, but to a smaller extent than previous studies. Changes to the rates of the CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year^-1. Further progress in constraining the CCl4 budget is partly limited by
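    The budget arithmetic behind these sink fractions is the standard combination of partial lifetimes, 1/τ_total = Σ 1/τ_i. The partial lifetimes below are illustrative values chosen only to be consistent with the quoted fractions (74 % photolysis, 17 % ocean, 9 % soil); they are not numbers from the paper.

```python
# Combine partial lifetimes into a total atmospheric lifetime via
#   1/tau_total = sum(1/tau_i)
# Partial lifetimes (years) are illustrative, not from the paper.
partial = {"photolysis": 44.0, "ocean": 183.0, "soil": 375.0}

total_rate = sum(1.0 / tau for tau in partial.values())
total_lifetime = 1.0 / total_rate                      # ~32 years
fractions = {k: (1.0 / tau) / total_rate for k, tau in partial.items()}

# Steady-state burden for constant emissions E (Gg/yr): B = E * tau_total
E = 39.0
B_ss = E * total_lifetime
```

    Because the ocean partial lifetime is both sizeable and poorly constrained (147 to 241 years), perturbing it moves the total lifetime, and hence the modelled decay, more than the quoted photolysis uncertainty does.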

  1. Modeling reproducibility of porescale multiphase flow experiments

    Science.gov (United States)

    Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.

    2017-12-01

    Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences between microcells (because of manufacturing defects) and to variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and with randomly varying injection rate. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.

  2. Use of Data Denial Experiments to Evaluate ESA Forecast Sensitivity Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Zack, J; Natenberg, E J; Knowe, G V; Manobianco, J; Waight, K; Hanley, D; Kamath, C

    2011-09-13

    wind speed and vertical temperature difference. Ideally, the data assimilation scheme used in the experiments would have been based upon an ensemble Kalman filter (EnKF) similar to the ESA method used to diagnose the Mid-Columbia Basin sensitivity patterns in the previous studies. However, the use of an EnKF system at high resolution is impractical because of its very high computational cost. Thus, it was decided to use three-dimensional variational data assimilation, which is less computationally intensive and more economically practical for generating operational forecasts. There are two tasks in the current project effort designed to validate the ESA observational system deployment approach in order to move closer to the overall goal: (1) perform an Observing System Experiment (OSE) using a data denial approach, which is the focus of this task and report; and (2) conduct a set of Observing System Simulation Experiments (OSSE) for the Mid-Columbia Basin region. The results of the latter task are presented in a separate report. The objective of the OSE task involves validating the ESA-MOOA results from the previous sensitivity studies for the Mid-Columbia Basin by testing the impact of existing meteorological tower measurements on the 0- to 6-hour-ahead 80-m wind forecasts at the target locations. The testing of the ESA-MOOA method used a combination of data assimilation techniques and data denial experiments to accomplish the task objective.

  3. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  4. Pain experiences of patients with musculoskeletal pain + central sensitization: A comparative Group Delphi Study.

    Science.gov (United States)

    Schäfer, Axel Georg Meender; Joos, Leonie Johanna; Roggemann, Katharina; Waldvogel-Röcker, Kerstin; Pfingsten, Michael; Petzke, Frank

    2017-01-01

    Central sensitization (CS) is regarded as an important contributing factor in the chronification of musculoskeletal pain (MSP). It is crucial to identify CS, as targeted multimodal treatment may be indicated. The primary objective of this study was therefore to explore the pain experience of individuals with MSP+CS in order to gain a better understanding of symptoms in relation to CS from a patient perspective. The secondary objective was to investigate whether the pain experiences of patients with MSP+CS differ from those of individuals with neuropathic pain (NP). We conducted a comparative Group Delphi Study including patients with MSP+CS and patients with NP. Thirteen guiding questions were used to gather information about sensory-discriminatory, affective and associated bodily, mental and emotional phenomena related to the pain experience of patients. Descriptions were categorized using qualitative content analysis. Additionally, patients completed several pain-related questionnaires. Nine participants with MSP+CS and nine participants with NP took part. The Delphi procedure revealed three main themes: psycho-emotional factors, bodily factors and environmental factors. The descriptions of patients with MSP+CS showed a complex picture: psycho-emotional factors seem to have a considerable impact on pain provocation, aggravation and relief, and impairments associated with mental ability and psyche affected many aspects of daily life. In contrast, the descriptions of patients with NP revealed a rather mechanistic and bodily oriented pain experience. Patients with MSP+CS reported distinct features in relation to their pain that were not captured with current questionnaires. Insight into patients' pain experience may help to choose and develop appropriate diagnostic instruments.

  5. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
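    The first step of the methodology, projecting an ensemble of time series onto a dominant mode of variation, can be sketched with a toy ensemble and plain power iteration. Everything below is illustrative: the synthetic "displacement" series stand in for the La Frasse simulations, and the meta-modelling and Sobol' steps are not reproduced.

```python
import random

# Toy ensemble: each run is an uncertain amplitude times a common temporal
# shape, plus noise, so one mode dominates the ensemble variability.
rng = random.Random(2)
nt, nruns = 50, 40
shape = [t / nt for t in range(nt)]            # dominant temporal mode
amps, runs = [], []
for _ in range(nruns):
    a = rng.uniform(0.5, 2.0)                  # uncertain input parameter
    amps.append(a)
    runs.append([a * s + rng.gauss(0.0, 0.01) for s in shape])

# Centre the ensemble and extract the leading principal component by
# power iteration on the covariance operator X^T X
mean = [sum(r[t] for r in runs) / nruns for t in range(nt)]
X = [[r[t] - mean[t] for t in range(nt)] for r in runs]
v = [1.0] * nt
for _ in range(100):
    s = [sum(x[t] * v[t] for t in range(nt)) for x in X]       # scores
    w = [sum(s[i] * X[i][t] for i in range(nruns)) for t in range(nt)]
    norm = sum(c * c for c in w) ** 0.5
    v = [c / norm for c in w]

scores = [sum(x[t] * v[t] for t in range(nt)) for x in X]
# Each run's first-component score tracks its amplitude parameter, so the
# sensitivity analysis can target the scalar scores instead of full series.
```

    In the paper's setting the scores of each retained component become the scalar outputs for which Sobol' indices are estimated via the meta-model.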

  6. Mathematical Model of Nicholson’s Experiment

    Directory of Open Access Journals (Sweden)

    Sergey D. Glyzin

    2017-01-01

    Full Text Available We consider a mathematical model of insect population dynamics and attempt to explain the classical experimental results of Nicholson with its help. In the first section of the paper Nicholson's experiment is described and dynamic equations for its modeling are chosen. A priori estimates of the model parameters can be made more precise by means of a local analysis of the dynamical system, which is carried out in the second section. For the parameter values found there, the loss of stability of the equilibrium of the problem leads to the bifurcation of a stable two-dimensional torus. Numerical simulations based on the estimates from the second section allow us to explain the classical Nicholson experiment, whose detailed theoretical substantiation is given in the last section. There, the largest Lyapunov exponent is computed for an attractor of the system. The way this exponent changes allows us to further narrow the search area of the model parameters. Justification of this experiment was made possible only by the combination of analytical and numerical methods in studying the equations of insect population dynamics. At the same time, the analytical approach made it possible to perform the numerical analysis in a rather narrow region of the parameter space; it is not possible to reach this area based on general considerations alone.
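    The dynamic equations usually chosen for Nicholson's data are the blowflies delay equation of Gurney et al.; the abstract does not state the model explicitly, so the sketch below assumes that form with commonly quoted parameter values and integrates it by forward Euler to reproduce the quasi-cyclic behaviour.

```python
import numpy as np

# Nicholson's blowflies equation, N'(t) = P*N(t-tau)*exp(-N(t-tau)/N0) - delta*N(t),
# integrated by forward Euler with a history buffer.  Parameter values are the
# commonly quoted ones for Nicholson's cages, not fitted here.
P, N0, delta, tau = 8.0, 600.0, 0.175, 15.0   # per day, flies, per day, days
dt = 0.01
lag = int(tau / dt)
steps = 200_000                                # 2000 days

N = np.empty(steps + lag)
N[:lag] = 500.0                                # constant pre-history
for k in range(lag, steps + lag - 1):
    Nd = N[k - lag]                            # delayed population N(t - tau)
    N[k + 1] = N[k] + dt * (P * Nd * np.exp(-Nd / N0) - delta * N[k])

# The equilibrium N* = N0*ln(P/delta) ~ 2290 is unstable at this delay, and the
# solution settles onto large-amplitude quasi-cycles, as in the experiment.
tail = N[-100_000:]
print(f"oscillation range: {tail.min():.0f} .. {tail.max():.0f} flies")
```

    The long delay relative to the adult death time scale is what destabilizes the equilibrium and produces the oscillations Nicholson observed.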

  7. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    Science.gov (United States)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine if there was any justification in using simplified models. A simulation was also performed using the BCAT code (CTH companion tool) that assumes a plate impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
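    Pop-plot data of the kind this experiment produces are conventionally summarized by a power law, linear in log-log space. The sketch below fits that form to invented (P, x*) pairs; the numbers are placeholders, not ARL measurements.

```python
import numpy as np

# Pop plot: run-to-detonation distance x* vs input shock pressure P is close
# to linear in log-log space, log10(x*) = A + B*log10(P) with B < 0.
P = np.array([4.0, 6.0, 8.0, 12.0])      # GPa  (illustrative values)
x = np.array([12.0, 6.5, 4.2, 2.3])      # mm run distance (illustrative)

B, A = np.polyfit(np.log10(P), np.log10(x), 1)   # slope first
print(f"log10(x*) = {A:.2f} {B:+.2f} * log10(P)")

# Interpolated run distance at 10 GPa:
print(f"x*(10 GPa) ~ {10 ** (A + B * np.log10(10.0)):.1f} mm")
```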

  8. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
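    The likelihood-scaling idea can be illustrated in a few lines. The sketch below assumes an AR(1) approximation for the autocorrelated residuals and the standard n_eff = n(1 - rho)/(1 + rho) effective-sample-size formula; the paper's actual hydrocode outputs and priors are not reproduced.

```python
import numpy as np

# Gaussian log-likelihood for an autocorrelated residual trace, scaled by an
# effective sample size instead of modelling the full autocorrelation function.
# n_eff = n*(1 - rho)/(1 + rho) is the standard AR(1) approximation.
def scaled_loglik(residual, sigma):
    n = len(residual)
    rho = np.corrcoef(residual[:-1], residual[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - rho) / (1 + rho)
    per_point = (-0.5 * np.log(2 * np.pi * sigma**2)
                 - residual**2 / (2 * sigma**2))
    return (n_eff / n) * per_point.sum()   # down-weight correlated points

# AR(1) residuals with rho = 0.9: 2000 points behave like ~105 independent ones.
rng = np.random.default_rng(1)
r = np.empty(2000)
r[0] = rng.normal()
for k in range(1, 2000):
    r[k] = 0.9 * r[k - 1] + rng.normal()

print(scaled_loglik(r, r.std()) > scaled_loglik(r, 10 * r.std()))   # True
```

    The scaling leaves the likelihood surface's shape intact but flattens it, so posterior uncertainty is not understated by treating correlated velocity samples as independent.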

  9. Sensitivity analysis of efficiency thermal energy storage on selected rock mass and grout parameters using design of experiment method

    International Nuclear Information System (INIS)

    Wołoszyn, Jerzy; Gołaś, Andrzej

    2014-01-01

    Highlights: • The paper proposes a new methodology for the sensitivity study of underground thermal storage. • Using the MDF model and DOE technique significantly shortens calculation time. • Calculation of one time step took approximately 57 s. • The sensitivity study covers five thermo-physical parameters. • Conductivity of the rock mass and grout material has a significant impact on efficiency. - Abstract: The aim of this study was to investigate the influence of selected parameters on the efficiency of underground thermal energy storage. In this paper, besides thermal conductivity, the effect of such parameters as specific heat and density of the rock mass, and thermal conductivity and specific heat of the grout material was investigated. Implementation of this objective requires the use of an efficient computational method. The aim of the research was achieved by using a new numerical model, Multi Degree of Freedom (MDF), as developed by the authors, and Design of Experiment (DoE) techniques with a response surface. The presented methodology can significantly reduce the time that is needed for research and to determine the effect of various parameters on the efficiency of underground thermal energy storage. Preliminary results of the research confirmed that thermal conductivity of the rock mass has the greatest impact on the efficiency of underground thermal energy storage, and that other parameters also play quite a significant role.
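    A minimal version of the DoE workflow: a two-level full-factorial design over the five coded factors, with main effects read off from the contrasts. The response function is invented for illustration (with rock-mass conductivity dominating, as the abstract reports); the real study used the MDF model with a response surface.

```python
import numpy as np
from itertools import product

# 2^5 full-factorial design over the five coded thermo-physical factors,
# with main effects computed as half the difference of high/low means.
factors = ["k_rock", "cp_rock", "rho_rock", "k_grout", "cp_grout"]
X = np.array(list(product([-1.0, 1.0], repeat=5)))    # 32 runs, coded units

def efficiency(x):
    # Invented linear response; k_rock dominating mirrors the abstract's finding.
    k_rock, cp_rock, rho_rock, k_grout, cp_grout = x
    return 70 + 8 * k_rock + 1 * cp_rock + 0.5 * rho_rock + 3 * k_grout

y = np.array([efficiency(x) for x in X])
effects = {f: (y[X[:, i] > 0].mean() - y[X[:, i] < 0].mean()) / 2
           for i, f in enumerate(factors)}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:9s} {e:+.2f}")
```

    With each "simulation" costing about 57 s, the full 32-run design above would finish in about half an hour, which is the practical point of the DoE approach.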

  10. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA

    2017-05-01

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.

  11. IATA-Bayesian Network Model for Skin Sensitization Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Since the publication of the Adverse Outcome Pathway (AOP) for skin sensitization, there have been many efforts to develop systematic approaches to integrate the...

  12. Normative and descriptive models of decision making: time discounting and risk sensitivity.

    Science.gov (United States)

    Kacelnik, A

    1997-01-01

    The task of evolutionary psychologists is to produce precise predictions about psychological mechanisms using adaptationist thinking. This can be done by combining normative models derived from evolutionary hypotheses with descriptive regularities across species found by experimental psychologists and behavioural ecologists. I discuss two examples. In temporal discounting, a normative model (exponential) fails while a descriptive one (hyperbolic) fits both human and non-human data. In non-humans, hyperbolic discounting coincides with rate-of-gain maximization in repetitive choices. Humans may discount hyperbolically in non-repetitive choices because they treat them as a repetitive rate-maximizing problem. In risk sensitivity, a theory derived from fitness considerations produces inconclusive results in non-humans, but succeeds in predicting human risk proneness and risk aversion for both the amount and delay of reward in a computer game. Strikingly, and in contrast with the existing literature, risk aversion for delay occurs as predicted. The predictions of risk aversion for delay may fail in many animal experiments because the manipulations of the utility function are not appropriate. In temporal discounting, animal experiments help the interpretation of human results, while in risk sensitivity studies human results help the analysis of non-human data.
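    The two discounting models compare directly. A sketch, with an assumed discount rate k, showing the signature difference: hyperbolic discounting produces preference reversals as delays shrink, whereas exponential discounting never does.

```python
import math

# Exponential (normative) vs hyperbolic (descriptive) discounting of a
# reward of amount A at delay D, with an assumed discount rate k per day.
def exponential(A, D, k=0.1):
    return A * math.exp(-k * D)

def hyperbolic(A, D, k=0.1):
    return A / (1 + k * D)

# Hyperbolic discounting reverses preference as both delays shrink:
print(hyperbolic(100, 30) > hyperbolic(80, 25))    # True: larger-later wins
print(hyperbolic(100, 5) > hyperbolic(80, 0))      # False: smaller-sooner wins

# Exponential discounting keeps the same ranking at both horizons:
print(exponential(100, 30) > exponential(80, 25),
      exponential(100, 5) > exponential(80, 0))    # False False
```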

  13. Wind climate estimation using WRF model output: method and model sensitivities over the sea

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Vincent, Claire Louise; Peña, Alfredo

    2015-01-01

    setup parameters. The results of the year-long sensitivity simulations show that the long-term mean wind speed simulated by the WRF model offshore in the region studied is quite insensitive to the global reanalysis, the number of vertical levels, and the horizontal resolution of the sea surface...... temperature used as lower boundary conditions. Also, the strength and form (grid vs spectral) of the nudging is quite irrelevant for the mean wind speed at 100 m. Large sensitivity is found to the choice of boundary layer parametrization, and to the length of the period that is discarded as spin-up to produce...

  14. A model to estimate insulin sensitivity in dairy cows

    Directory of Open Access Journals (Sweden)

    Holtenius Kjell

    2007-10-01

    Full Text Available Abstract Impairment of the insulin regulation of energy metabolism is considered to be a key etiologic component of metabolic disturbances. Methods for studying insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method, originally developed for humans, to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it generates good, linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds. All cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occurred during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed in order to obtain different body condition at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
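    As a sketch, RQUICKI is the reciprocal of the summed log10 plasma concentrations; the units below (glucose in mg/dL, insulin in uU/mL, FFA in mmol/L) and the example values are assumptions for illustration, not data from the study.

```python
import math

# RQUICKI = 1 / (log10 glucose + log10 insulin + log10 FFA).
# Assumed units: glucose mg/dL, insulin uU/mL, FFA mmol/L; example
# concentrations are illustrative, not study data.
def rquicki(glucose, insulin, ffa):
    return 1.0 / (math.log10(glucose) + math.log10(insulin) + math.log10(ffa))

normal = rquicki(60.0, 10.0, 0.2)   # cow in energy balance
obese = rquicki(60.0, 25.0, 0.9)    # over-conditioned early-lactation cow
print(f"normal: {normal:.2f}, obese: {obese:.2f}")  # lower index = lower sensitivity
```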

  15. The use of graph theory in the sensitivity analysis of the model output: a second order screening method

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Braddock, Roger

    1999-01-01

    Sensitivity analysis screening methods aim to isolate the most important factors in experiments involving a large number of significant factors and interactions. This paper extends the one-factor-at-a-time screening method proposed by Morris. The new method, in addition to the 'overall' sensitivity measures already provided by the traditional Morris method, offers estimates of the two-factor interaction effects. The number of model evaluations required is O(k²), where k is the number of model input factors. The efficient sampling strategy in the parameter space is based on concepts of graph theory and on the solution of the 'handcuffed prisoner problem'.
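    The underlying one-factor-at-a-time idea can be sketched as follows: each Morris trajectory perturbs one factor per step and records an elementary effect, and the mean and standard deviation over trajectories separate linear effects from nonlinearity and interactions. This toy version omits the paper's graph-theoretic sampling and second-order extension.

```python
import numpy as np

# Morris one-factor-at-a-time screening: each trajectory perturbs one factor
# per step by delta and records an elementary effect (y_new - y)/delta.
rng = np.random.default_rng(3)

def model(x):                                     # toy model with k = 3 factors
    return 2.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

def morris(model, k, r=20, delta=0.25):
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.integers(0, 3, size=k) / 4.0      # random start on a coarse grid
        y = model(x)
        for i in rng.permutation(k):              # randomized one-at-a-time path
            x[i] += delta
            y_new = model(x)
            effects[i].append((y_new - y) / delta)
            y = y_new
    return [(np.mean(e), np.std(e)) for e in effects]

# The mean effect ranks overall importance; a large std flags nonlinearity
# or interactions (here the x1**2 term).
for i, (mu, sd) in enumerate(morris(model, 3)):
    print(f"x{i}: mean {mu:+.2f}, std {sd:.2f}")
```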

  16. Satellite Imagery Application: An Experience In Environmental Sensitivity Index Mapping In Nigeria

    International Nuclear Information System (INIS)

    Abolarin, A.A.O.

    1995-01-01

    Pre-planning for response to emergencies, most often, dictates the degree of actual response success, within the region of 'certainty' in risk management. Contingency planning against oil spillage has been recognised as a vital tool in the oil industry. A number of inputs are necessary for effective contingency planning. One such input is the identification of priority areas to be protected, or to be allowed only the minimum exposure, in the event of a spillage. A modern tool for this prioritizing activity, which is constantly gaining patronage, is Environmental Sensitivity Index (ESI) mapping. Satellites have become invaluable sources of information for the indexing and classification purpose. They provide remotely sensed data which could otherwise be obtained only at greater cost, at least in time and money. This paper summarises Elf Petroleum Nigeria's experience with satellite imagery application for environmental sensitivity indexing purposes. This includes the case studies of the NNPC/Elf OMLs 57 (swamp), 58 (land) and 100 (offshore). It provides some background to the technology's data acquisition, and the dilemma of indexing. It is expected that the paper will serve educational and corporate purposes in the industry.

  17. Photogrammetry experiments with a model eye.

    Science.gov (United States)

    Rosenthal, A R; Falconer, D G; Pieper, I

    1980-01-01

    Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment, as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139

  18. Effective Moisture Penetration Depth Model for Residential Buildings: Sensitivity Analysis and Guidance on Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, Jason D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Winkler, Jonathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-31

    Moisture buffering of building materials has a significant impact on a building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand its sensitivity to uncertain inputs. In this paper, we consider the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, with a set of simple inputs, can give reasonable predictions of the indoor humidity.

  19. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporal dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of network-location and temporal variabilities revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  1. Modelling high Arctic deep permafrost temperature sensitivity in Northeast Greenland based on experimental and field observations

    Science.gov (United States)

    Rasmussen, Laura Helene; Zhang, Wenxin; Hollesen, Jørgen; Cable, Stefanie; Hvidtfeldt Christiansen, Hanne; Jansson, Per-Erik; Elberling, Bo

    2017-04-01

    Permafrost-affected areas in Greenland are expected to experience a marked temperature increase within decades. Most studies have considered near-surface permafrost sensitivity, whereas permafrost temperatures below the depth of zero annual amplitude are less studied, despite being closely related to changes in near-surface conditions such as active layer thermal properties, soil moisture and snow depth. In this study, we measured the sensitivity of thermal conductivity (TC) to gravimetric water content (GWC) in frozen and thawed permafrost sediments from fine-sandy and gravelly deltaic and fine-sandy alluvial deposits in the Zackenberg valley, NE Greenland. We further calibrated a coupled heat and water transfer model, the "CoupModel", for one central delta sediment site with average snow depth, and forced it with meteorology from a nearby delta sediment site with a topographic snow accumulation. With the calibrated model, we simulated deep permafrost thermal dynamics in four 20-year scenarios with changes in surface temperature and active layer (AL) soil moisture: a) 3 °C warming and AL water table at 0.5 m depth; b) 3 °C warming and AL water table at 0.1 m depth; c) 6 °C warming and AL water table at 0.5 m depth; and d) 6 °C warming and AL water table at 0.1 m depth. Our results indicate that frozen sediments have higher TC than thawed sediments. All sediments show a positive linear relation between TC and soil moisture when frozen, and a logarithmic one when thawed. Gravelly delta sediments were highly sensitive, but never reached above 12 % GWC, indicating a field effect of water retention capacity. Alluvial sediments are less sensitive to soil moisture than deltaic (fine and coarse) sediments, indicating the importance of unfrozen water in frozen sediment. The deltaic site with snow accumulation had a 1 °C higher mean annual ground temperature than the average snow depth site. Permafrost temperature at the depth of 18 m increased with 1
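    The two reported TC-GWC relations (linear when frozen, logarithmic when thawed) can be fitted as below. The data points are invented placeholders, not Zackenberg measurements.

```python
import numpy as np

# Fit the two relations reported above: TC linear in GWC when frozen,
# logarithmic when thawed.  The (GWC, TC) pairs are invented placeholders.
gwc = np.array([2.0, 5.0, 8.0, 12.0, 18.0])        # % gravimetric water content
tc_frozen = np.array([0.9, 1.3, 1.8, 2.3, 3.1])    # W/(m K)
tc_thawed = np.array([0.60, 0.90, 1.05, 1.20, 1.35])

a1, b1 = np.polyfit(gwc, tc_frozen, 1)             # TC = a1*GWC + b1
a2, b2 = np.polyfit(np.log(gwc), tc_thawed, 1)     # TC = a2*ln(GWC) + b2
print(f"frozen: TC = {a1:.3f}*GWC + {b1:.2f}")
print(f"thawed: TC = {a2:.3f}*ln(GWC) + {b2:.2f}")
```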

  2. The Emergence of Synaesthesia in a Neuronal Network Model via Changes in Perceptual Sensitivity and Plasticity.

    Directory of Open Access Journals (Sweden)

    Oren Shriki

    2016-07-01

    Full Text Available Synaesthesia is an unusual perceptual experience in which an inducer stimulus triggers a percept in a different domain in addition to its own. To explore the conditions under which synaesthesia evolves, we studied a neuronal network model that represents two recurrently connected neural systems. The interactions in the network evolve according to learning rules that optimize sensory sensitivity. We demonstrate several scenarios, such as sensory deprivation or heightened plasticity, under which synaesthesia can evolve even though the inputs to the two systems are statistically independent and the initial cross-talk interactions are zero. Sensory deprivation is the known causal mechanism for acquired synaesthesia and increased plasticity is implicated in developmental synaesthesia. The model unifies different causes of synaesthesia within a single theoretical framework and repositions synaesthesia not as some quirk of aberrant connectivity, but rather as a functional brain state that can emerge as a consequence of optimising sensory information processing.

  3. Sensitivity of Climate Change on Diapycnal Diffusion in Global Warming Experiments

    Science.gov (United States)

    Dalan, F.; Stone, P. H.; Sokolov, A.

    2002-12-01

    This study seeks to understand the role played by the diapycnal diffusivity in determining the transient climate evolution in a scenario with enhanced atmospheric CO2 concentration. We use an Earth system Model of Intermediate Complexity (EMIC) composed of a 3D Ocean Model with idealized topography and a 2D Atmospheric Model. The model is spun up to equilibrium for three different values of the diapycnal diffusivity (small k=0.2 cm2/s, standard k=0.5 cm2/s and large k=1.0 cm2/s) and global warming experiments are performed after the spinup. Three different climatic forcing scenarios are applied to each equilibrium state: CO2 increases at a rate of 1%, 2% and 4% per year for 75 years and is held constant afterwards. Comparing the climate change in these experiments allows one to detect eventual non-linear behavior of the climate system induced by the different values of the diapycnal diffusivity. The major differences in the transient runs are found in the mid-high latitudes in the North Atlantic Ocean, but strong non-linear behavior has not been found. For the scenarios with 1% (2%) CO2 increase per year, the Meridional Overturning Circulation (MOC) slows down by about 25% (50%) of the control value after about 100 years of integration and then recovers. The recovery is complete or almost complete (80-90% of the initial value) depending on the value of the diffusivity and the strength of the forcing scenario. The natural variability of the MOC seems to be higher for both the standard and large diffusivity models, as compared to the small diffusivity model. This is true both for the control climate and the global warming climate. For the scenario with 4% CO2 increase per year the circulation shuts down for 150 years in the small diffusivity model and then recovers, while in the higher diffusivity models it slows for about 4 decades to around 5 Sv and then recovers. In a 2XCO2 global warming experiment Kamenkovich et al (Climate Dynamics, submitted) proved that the

  4. Adjoint-based sensitivities and data assimilation with a time-dependent marine ice sheet model

    Science.gov (United States)

    Goldberg, Dan; Heimbach, Patrick

    2013-04-01

    To date, assimilation of observational data using large-scale ice models has consisted only of time-independent inversions of surface velocities for basal traction, bed elevation, or ice stiffness. These inversions are for the most part based on control methods (Macayeal D R, 1992, A tutorial on the use of control methods in ice sheet modeling), which involve generating and solving the adjoint of the ice model. Quite a lot has been learned about the fast-flowing parts of the Antarctic Ice Sheet from such inversions. Still, there are limitations to these "snapshot" inversions. For instance, they cannot capture time-dependent dynamics, such as propagation of perturbations through the ice sheet. They cannot assimilate time-dependent observations, such as surface elevation changes. And they are problematic for initializing time-dependent ice sheet models, as such initializations may contain considerable model drift. We have developed an adjoint for a time-dependent land ice model, with which we will address such issues. The land ice model implements a hybrid shallow shelf-shallow ice stress balance and can represent the floating, fast-sliding, and frozen bed regimes of a marine ice sheet. The adjoint is generated by a combination of analytic methods and the use of automatic differentiation (AD) software. Experiments with idealized geometries have been carried out; adjoint sensitivities reveal the "vulnerable" regions of ice shelves, and preliminary inversions of "synthetic" observations (e.g. simultaneous inversion of basal traction and topography) yield encouraging results.
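    The forward/adjoint structure behind such inversions can be shown on a toy linear time-stepping model: one forward sweep plus one backward sweep with the transpose operator yields the gradient of a scalar objective with respect to the initial state, verified against finite differences. This is a schematic stand-in for the hybrid ice model, not its actual adjoint.

```python
import numpy as np

# Toy adjoint: for a linear time-stepping model x_{k+1} = A x_k and objective
# J = g . x_N, the gradient dJ/dx0 is one backward sweep with A^T -- at the
# cost of roughly one extra model run, independent of the state dimension.
n, N = 4, 50
rng = np.random.default_rng(4)
A = 0.95 * np.eye(n) + 0.02 * rng.standard_normal((n, n))
g = rng.standard_normal(n)                  # weights defining the objective
x0 = rng.standard_normal(n)                 # control variable (initial state)

def forward(x):
    for _ in range(N):
        x = A @ x
    return x

J = g @ forward(x0)

# Adjoint sweep: lambda_N = g, lambda_k = A^T lambda_{k+1}; dJ/dx0 = lambda_0.
lam = g.copy()
for _ in range(N):
    lam = A.T @ lam

# Verify against finite differences (exact here since the model is linear).
eps = 1e-6
fd = np.array([(g @ forward(x0 + eps * np.eye(n)[i]) - J) / eps
               for i in range(n)])
print(np.allclose(lam, fd, atol=1e-4))      # True
```

    AD tools automate exactly this transpose propagation for nonlinear models, where the backward operator is the transpose of the model Jacobian along the stored forward trajectory.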

  5. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  6. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  7. The relationship between experiences of discrimination and mental health among lesbians and gay men: An examination of internalized homonegativity and rejection sensitivity as potential mechanisms.

    Science.gov (United States)

    Feinstein, Brian A; Goldfried, Marvin R; Davila, Joanne

    2012-10-01

    The current study used path analysis to examine potential mechanisms through which experiences of discrimination influence depressive and social anxiety symptoms. The sample included 218 lesbians and 249 gay men (total N = 467) who participated in an online survey about minority stress and mental health. The proposed model included two potential mediators (internalized homonegativity and rejection sensitivity) as well as a culturally relevant antecedent to experiences of discrimination (childhood gender nonconformity). Results indicated that the data fit the model well, supporting the mediating roles of internalized homonegativity and rejection sensitivity in the associations between experiences of discrimination and symptoms of depression and social anxiety. Results also supported the role of childhood gender nonconformity as an antecedent to experiences of discrimination. Although there were not significant gender differences in the overall model fit, some of the associations within the model were significantly stronger for gay men than lesbians. These findings suggest potential mechanisms through which experiences of discrimination influence well-being among sexual minorities, which has important implications for research and clinical practice with these populations. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
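    The product-of-coefficients logic of such a mediation path can be sketched on simulated data. Everything below (effect sizes, noise model) is hypothetical; only the sample size echoes the study.

```python
import numpy as np

# Hypothetical mediation sketch: discrimination -> rejection sensitivity ->
# depressive symptoms.  Path coefficients (0.5, 0.4, 0.2) and the noise model
# are invented; only n = 467 echoes the sample size.
rng = np.random.default_rng(5)
n = 467
discrim = rng.normal(size=n)
rej_sens = 0.5 * discrim + rng.normal(size=n)                   # a path
depress = 0.4 * rej_sens + 0.2 * discrim + rng.normal(size=n)   # b and c' paths

a = np.polyfit(discrim, rej_sens, 1)[0]                         # a path estimate
X = np.column_stack([np.ones(n), rej_sens, discrim])
_, b, c_prime = np.linalg.lstsq(X, depress, rcond=None)[0]
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```

    The indirect effect a*b is the quantity a mediation claim rests on; full path analysis adds simultaneous estimation and fit statistics, which this sketch omits.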

  8. Characteristics of coupled atmosphere-ocean CO2 sensitivity experiments with different ocean formulations

    International Nuclear Information System (INIS)

    Washington, W.M.; Meehl, G.A.

    1990-01-01

    The Community Climate Model at the National Center for Atmospheric Research has been coupled to a simple mixed-layer ocean model and to a coarse-grid ocean general circulation model (OGCM). This paper compares the responses of simulated climate to increases of atmospheric carbon dioxide (CO2) in these two coupled models. Three types of simulations were run: (1) control runs with both ocean models, with CO2 held constant at present-day concentrations, (2) instantaneous doubling of atmospheric CO2 (from 330 to 660 ppm) with both ocean models, and (3) a gradually increasing (transient) CO2 concentration starting at 330 ppm and increasing linearly at 1% per year, with the OGCM. The mixed-layer and OGCM cases exhibit increases of 3.5 °C and 1.6 °C, respectively, in globally averaged surface air temperature for the instantaneous doubling cases. The transient-forcing case warms 0.7 °C by the end of 30 years. The mixed-layer ocean yields warmer-than-observed tropical temperatures and colder-than-observed temperatures in the higher latitudes. The coarse-grid OGCM simulates lower-than-observed sea surface temperatures (SSTs) in the tropics and higher-than-observed SSTs and reduced sea-ice extent at higher latitudes. Sensitivity in the OGCM after 30 years is much lower than in simulations with the same atmosphere coupled to a 50-m slab-ocean mixed layer. The OGCM simulates a weaker thermohaline circulation with doubled CO2 as the high-latitude ocean-surface layer warms and freshens and the westerly wind stress decreases. Convective overturning in the OGCM decreases substantially with CO2 warming.

  9. Characteristics of coupled atmosphere-ocean CO2 sensitivity experiments with different ocean formulations

    International Nuclear Information System (INIS)

    Washington, W.M.; Meehl, G.A.

    1991-01-01

    The Community Climate Model at the National Center for Atmospheric Research has been coupled to a simple mixed-layer ocean model and to a coarse-grid ocean general circulation model (OGCM). This paper compares the responses of simulated climate to increases of atmospheric carbon dioxide (CO2) in these two coupled models. Three types of simulations were run: (1) control runs with both ocean models, with CO2 held constant at present-day concentrations, (2) instantaneous doubling of atmospheric CO2 (from 330 to 660 ppm) with both ocean models, and (3) a gradually increasing (transient) CO2 concentration starting at 330 ppm and increasing linearly at 1% per year, with the OGCM. The mixed-layer and OGCM cases exhibit increases of 3.5 °C and 1.6 °C, respectively, in globally averaged surface air temperature for the instantaneous doubling cases. The transient-forcing case warms 0.7 °C by the end of 30 years. The mixed-layer ocean yields warmer-than-observed tropical temperatures and colder-than-observed temperatures in the higher latitudes. The coarse-grid OGCM simulates lower-than-observed sea surface temperatures (SSTs) in the tropics and higher-than-observed SSTs and reduced sea-ice extent at higher latitudes. Sensitivity in the OGCM after 30 years is much lower than in simulations with the same atmosphere coupled to a 50-m slab-ocean mixed layer. The OGCM simulates a weaker thermohaline circulation with doubled CO2 as the high-latitude ocean-surface layer warms and freshens and the westerly wind stress decreases. Convective overturning in the OGCM decreases substantially with CO2 warming. 46 refs.; 20 figs.; 1 tab.

  10. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch Model approaches for detecting differential item functioning (DIF), observed across sample sizes. The two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
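The Mantel-Haenszel DIF approach compared above reduces to a common odds ratio pooled across ability strata. A minimal sketch, using invented counts and the ETS delta transform (all numbers below are illustrative, not the study's data):

```python
import math

def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across ability strata.
    Each stratum is (a, b, c, d):
      a/b = reference group correct/incorrect on the item,
      c/d = focal group correct/incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Three total-score strata of a synthetic gender-DIF data set.
strata = [(30, 10, 20, 20), (40, 10, 30, 20), (45, 5, 35, 15)]
or_mh = mantel_haenszel_or(strata)    # > 1: item favors the reference group
delta_mh = -2.35 * math.log(or_mh)    # ETS delta scale; |delta| > 1.5 flags moderate DIF
```

An odds ratio of 1 means no DIF; the Rasch-based alternative instead compares item difficulty estimates fitted separately in the two groups.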

  11. Mathematical modeling of a fluidized bed rice husk gasifier: Part 2 - Model sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Mansaray, K.G.; Ghaly, A.E.; Al-Taweel, A.M.; Hamdullahpur, F.; Ugursal, V.I.

    2000-02-01

    The performance of two thermodynamic models (one-compartment and two-compartment models), developed for fluidized bed gasification of rice husk, was analyzed and compared in terms of their predictive capabilities of the product gas composition. The two-compartment model was the more adequate of the two for simulating the fluidized bed gasification of rice husk, since the complex hydrodynamics present in the fluidized bed gasifier were taken into account. Therefore, the two-compartment model was tested under a wide range of parameters, including bed height, fluidization velocity, equivalence ratio, oxygen concentration in the fluidizing gas, and rice husk moisture content. The model sensitivity analysis showed that changes in bed height had a significant effect on the reactor temperatures, but only a small effect on the gas composition, higher heating value, and overall carbon conversion. The fluidization velocity, equivalence ratio, oxygen concentration in the fluidizing gas, and moisture content in rice husk had dramatic effects on the gasifier performance. However, the model was more sensitive to variations in the equivalence ratio and oxygen concentration in the fluidizing gas. (Author)

  12. Mathematical modeling of a fluidized bed rice husk gasifier: Part 2 -- Model sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Mansaray, K.G.; Ghaly, A.E.; Al-Taweel, A.M.; Hamdullahpur, F.; Ugursal, V.I.

    2000-03-01

    The performance of two thermodynamic models (one-compartment and two-compartment models), developed for fluidized bed gasification of rice husk, was analyzed and compared in terms of their predictive capabilities of the product gas composition. The two-compartment model was the more adequate of the two for simulating the fluidized bed gasification of rice husk, since the complex hydrodynamics present in the fluidized bed gasifier were taken into account. Therefore, the two-compartment model was tested under a wide range of parameters, including bed height, fluidization velocity, equivalence ratio, oxygen concentration in the fluidizing gas, and rice husk moisture content. The model sensitivity analysis showed that changes in bed height had a significant effect on the reactor temperatures, but only a small effect on the gas composition, higher heating value, and overall carbon conversion. The fluidization velocity, equivalence ratio, oxygen concentration in the fluidizing gas, and moisture content in rice husk had dramatic effects on the gasifier performance. However, the model was more sensitive to variations in the equivalence ratio and oxygen concentration in the fluidizing gas.

  13. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
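Of the four methods above, Sobol's is the simplest to sketch: a first-order index is the fraction of output variance explained by one parameter alone, estimated here with a plain pick-freeze Monte Carlo scheme on a toy additive model (not SAC-SMA; the sample size and tolerances are illustrative):

```python
import random

def sobol_first_order(f, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices
    for a model f defined on the unit hypercube [0,1]^dim."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # Re-evaluate with parameter i "frozen" to its value in sample A.
        yABi = (f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B))
        cov = sum(ya * y for ya, y in zip(yA, yABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# Additive toy model: analytic indices are 16/21, 4/21 and 1/21.
S = sobol_first_order(lambda x: 4 * x[0] + 2 * x[1] + x[2], 3)
```

For this additive model each coefficient's squared magnitude sets its variance share, so the Monte Carlo estimates should recover the analytic ranking.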

  14. The CANDELLE experiment for characterization of neutron sensitivity of LiF TLDs

    Directory of Open Access Journals (Sweden)

    Guillou M.Le

    2018-01-01

    As part of the design studies conducted at CEA for future power and research nuclear reactors, the validation of neutron and photon calculation schemes related to nuclear heating prediction is strongly dependent on the implementation of nuclear heating measurements. Such measurements are usually performed in low-power reactors, whose core dimensions are accurately known and where irradiation conditions (power, flux and temperature) are entirely controlled. Due to the very low operating power of such reactors (of the order of 100 W), nuclear heating is assessed by using dosimetry techniques such as thermoluminescent dosimeters (TLDs). However, although they are highly sensitive to gamma radiation, such dosimeters are also, to a lesser extent, sensitive to neutrons. The neutron dose depends strongly on the TLD composition, typically contributing 10-30% of the total measured dose in a mixed neutron/gamma field. The experimental determination of the neutron correction therefore appears crucial to a better interpretation of doses measured in reactor with reduced uncertainties. A promising approach based on the use of two types of LiF TLDs, respectively enriched with lithium-6 and lithium-7 and precalibrated in both photon and neutron fields, has recently been developed at INFN (Milan, Italy) for medical purposes. The CANDELLE experiment is dedicated to the implementation of a pure neutron-field “calibration” of TLDs using the GENEPI-2 neutron source of LPSC (Grenoble, France). These irradiation conditions allowed an early assessment of the neutron components of doses measured in the EOLE reactor at CEA Cadarache with 10% uncertainty at 1σ.

  15. The CANDELLE experiment for characterization of neutron sensitivity of LiF TLDs

    Science.gov (United States)

    Guillou, M. Le; Billebaud, A.; Gruel, A.; Kessedjian, G.; Méplan, O.; Destouches, C.; Blaise, P.

    2018-01-01

    As part of the design studies conducted at CEA for future power and research nuclear reactors, the validation of neutron and photon calculation schemes related to nuclear heating prediction is strongly dependent on the implementation of nuclear heating measurements. Such measurements are usually performed in low-power reactors, whose core dimensions are accurately known and where irradiation conditions (power, flux and temperature) are entirely controlled. Due to the very low operating power of such reactors (of the order of 100 W), nuclear heating is assessed by using dosimetry techniques such as thermoluminescent dosimeters (TLDs). However, although they are highly sensitive to gamma radiation, such dosimeters are also, to a lesser extent, sensitive to neutrons. The neutron dose depends strongly on the TLD composition, typically contributing 10-30% of the total measured dose in a mixed neutron/gamma field. The experimental determination of the neutron correction therefore appears crucial to a better interpretation of doses measured in reactor with reduced uncertainties. A promising approach based on the use of two types of LiF TLDs, respectively enriched with lithium-6 and lithium-7 and precalibrated in both photon and neutron fields, has recently been developed at INFN (Milan, Italy) for medical purposes. The CANDELLE experiment is dedicated to the implementation of a pure neutron-field "calibration" of TLDs using the GENEPI-2 neutron source of LPSC (Grenoble, France). These irradiation conditions allowed an early assessment of the neutron components of doses measured in the EOLE reactor at CEA Cadarache with 10% uncertainty at 1σ.
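The paired-TLD principle described above reduces to a 2x2 linear system: each chip's reading is a gamma term plus a neutron term, with the lithium-6-enriched chip far more neutron-sensitive than the lithium-7 one. A sketch with made-up calibration factors (the real factors come from the photon- and neutron-field pre-calibration; all values below are hypothetical):

```python
def decompose_dose(m6, m7, s6_g, s6_n, s7_g, s7_n):
    """Solve  m6 = s6_g*Dg + s6_n*Dn  and  m7 = s7_g*Dg + s7_n*Dn
    for the gamma dose Dg and neutron dose Dn."""
    det = s6_g * s7_n - s6_n * s7_g
    d_gamma = (m6 * s7_n - m7 * s6_n) / det
    d_neutron = (s6_g * m7 - s7_g * m6) / det
    return d_gamma, d_neutron

# Hypothetical calibration: equal gamma response, 6LiF with 25x the
# neutron response of 7LiF; readings in arbitrary units.
s6_g, s6_n, s7_g, s7_n = 1.0, 2.5, 1.0, 0.1
dg, dn = decompose_dose(20.0, 10.4, s6_g, s6_n, s7_g, s7_n)
```

The larger the contrast between the two neutron sensitivities, the better conditioned the system, which is why the 6Li/7Li pair is used.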

  16. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    Energy Technology Data Exchange (ETDEWEB)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.

  17. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone.
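Latin-hypercube sampling, the uncertainty technique used above, divides each parameter's range into n equal-probability strata and draws exactly one value per stratum, shuffling the strata independently for each parameter. A minimal sketch (the parameter names and ranges are illustrative, not the report's values):

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Return n parameter sets; each parameter contributes exactly one
    value from each of its n equal-width strata."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)  # decorrelate the strata across parameters
        columns.append([lo + (hi - lo) * (k + rng.random()) / n for k in strata])
    return [list(row) for row in zip(*columns)]

# e.g. log10 hydraulic conductivity (m/s) and porosity (illustrative ranges)
samples = latin_hypercube(10, [(-7.0, -3.0), (0.05, 0.35)])
```

Compared with plain Monte Carlo, this stratification guarantees coverage of each parameter's full range even with a small number of model runs.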

  18. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  19. Global sensitivity analysis of thermo-mechanical models in numerical weld modelling

    International Nuclear Information System (INIS)

    Petelet, M.

    2007-10-01

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis is a step toward implementing an innovative approach in welding simulation in order to answer this question, with illustrations on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which temperature range. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data were divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation, reducing the input space to only the important variables. The sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which can simply be extrapolated or taken from a similar material? (author)

  20. Modeling thermal dilepton radiation for SIS experiments

    Energy Technology Data Exchange (ETDEWEB)

    Seck, Florian [TU Darmstadt (Germany); Collaboration: HADES-Collaboration

    2016-07-01

    Dileptons are radiated during the whole time evolution of a heavy-ion collision and leave the interaction zone unaffected. Thus they carry valuable information about the hot and dense medium created in those collisions to the detector. Realistic dilepton emission rates and an accurate description of the fireball's space-time evolution are needed to properly describe the contribution of in-medium signals to the dilepton invariant mass spectrum. In this presentation we demonstrate how this can be achieved at SIS collision energies. The framework is implemented into the event generator Pluto, which is used by the HADES and CBM experiments to produce their hadronic freeze-out cocktails. With the help of a coarse-graining approach to model the fireball evolution and pertinent dilepton rates via a parametrization of the Rapp-Wambach in-medium ρ meson spectral function, the thermal contribution to the spectrum can be calculated. The results also enable us to get an estimate of the fireball lifetime at SIS18 energies.

  1. Evaluation and Sensitivity of Climate Model Representation of Upper Arctic Hydrography

    Science.gov (United States)

    DiMaggio, D.; Maslowski, W.; Osinski, R.; Roberts, A.; Clement Kinney, J. L.; Frants, M.

    2016-12-01

    The satellite-derived rate of Arctic sea ice extent decline for the past decades is faster than those simulated by the models participating in the Coupled Model Intercomparison Project (CMIP5). In addition, time-varying Arctic sea ice concentration and thickness distribution in those models are often poorly represented, suggesting that predicted sea ice decline might be modeled in the wrong place or time and for the wrong reasons. We hypothesize that these limitations are in part the result of an inadequate representation of critical high-latitude processes controlling the accumulation and distribution of sub-surface oceanic heat content and its interaction with the sea ice cover, especially in the western Arctic. For the purpose of this study, we define the sub-surface ocean as that below the surface mixed layer and above the Atlantic layer. Those limitations are evidenced in the CMIP5 multi-model mean exhibiting a cold temperature bias near the surface and a warm bias at intermediate depths. In particular, CMIP5 models are found to be inadequately representing the key features of the upper ocean hydrography in the Canada Basin, including the near-surface temperature maximum (NSTM) and the secondary temperature maximum associated with Pacific Summer Water (PSW). To identify the sensitivity of upper Arctic Ocean hydrography to physical processes and model configurations, a series of experiments are performed using the Regional Arctic System Model (RASM), a high-resolution, fully-coupled regional climate model. Analysis of RASM output suggests that surface momentum coupling (air-ice, ice-ocean, and air-ocean) and brine-rejection parameterization strongly influence thermohaline structure down to 700 m. The implementation of elastic anisotropic plastic sea ice rheology improves mixed layer properties, which is also sensitive to changes in numerical convective viscosity and diffusivity. Sea ice formation during model spin-up essentially destroys the initial

  2. Sensitivity analysis of the evaporation module of the E-DiGOR model

    OpenAIRE

    AYDIN, Mehmet; KEÇECİOĞLU, Suzan Filiz

    2010-01-01

    Sensitivity analysis of the soil-water-evaporation module of the E-DiGOR (Evaporation and Drainage investigations at Ground of Ordinary Rainfed-areas) model is presented. The model outputs were generated using measured climatic data and soil properties. The first-order sensitivity formulas were derived to compute relative sensitivity coefficients. A change in the net solar radiation significantly affected potential evaporation from bare soils estimated by the Penman-Monteith equation. The se...
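The relative sensitivity coefficient mentioned above is S = (∂E/∂x)(x/E): the fractional change in the output per fractional change in one input. A finite-difference sketch on a stand-in evaporation function (the coefficients and variable names below are invented for illustration, not the E-DiGOR or Penman-Monteith formulas):

```python
def relative_sensitivity(f, params, name, h=1e-6):
    """Central-difference estimate of S = (dE/dx) * (x / E)
    for the parameter `name` of the response function f."""
    x = params[name]
    up = dict(params, **{name: x * (1 + h)})
    down = dict(params, **{name: x * (1 - h)})
    dE_dx = (f(up) - f(down)) / (2.0 * x * h)
    return dE_dx * x / f(params)

# Stand-in response: a radiation term plus a wind/vapour-deficit term.
evap = lambda p: 0.026 * p["Rn"] + 0.43 * p["u2"] * p["vpd"]
base = {"Rn": 12.0, "u2": 2.0, "vpd": 1.5}
s_rn = relative_sensitivity(evap, base, "Rn")
```

For this additive form, S for Rn equals the radiation term's share of the total (0.312/1.602, about 0.19), so a 10% rise in net radiation raises the output by roughly 1.9%.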

  3. Prior sensitivity analysis in default Bayesian structural equation modeling

    NARCIS (Netherlands)

    van Erp, S.J.; Mulder, J.; Oberski, Daniel L.

    2018-01-01

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models while solving some of the issues often encountered in classical maximum likelihood (ML) estimation, such as nonconvergence and inadmissible solutions. An important

  4. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA

    2017-04-01

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
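The key idea, grouping a process's model choice and that model's parameters into a single factor, can be sketched with a pick-freeze variance decomposition: draw a (model index, parameter) pair per process, with the model index drawn by weight, and measure how much output variance the whole pair explains. Everything below (the two-model recharge and geology processes, the equal weights, the multiplicative toy simulator) is invented for illustration, not the paper's estimator or test case:

```python
import random

def process_sensitivity(simulate, processes, n=20_000, seed=1):
    """First-order variance-based index per *process*, where one process
    sample = (model index drawn by weight, that model's parameter draw)."""
    rng = random.Random(seed)
    def draw(process):
        models, weights = process
        m = rng.choices(range(len(models)), weights)[0]
        return (m, models[m](rng))
    A = [[draw(p) for p in processes] for _ in range(n)]
    B = [[draw(p) for p in processes] for _ in range(n)]
    yA = [simulate(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    out = []
    for i in range(len(processes)):
        # Freeze process i (model choice AND parameters) to its value in A.
        cov = sum(ya * simulate(b[:i] + [a[i]] + b[i + 1:])
                  for ya, a, b in zip(yA, A, B)) / n - mean ** 2
        out.append(cov / var)
    return out

# Process 1: recharge fraction, two candidate models with equal weight.
recharge = ([lambda r: r.uniform(0.1, 0.3), lambda r: r.uniform(0.2, 0.6)], [0.5, 0.5])
# Process 2: conductivity scaling, two parameterizations with equal weight.
geology = ([lambda r: r.uniform(1.0, 2.0), lambda r: r.uniform(1.0, 3.0)], [0.5, 0.5])
# Toy simulator: output depends multiplicatively on the two process values.
ps = process_sensitivity(lambda x: x[0][1] * x[1][1], [recharge, geology])
```

Because the frozen factor carries both sources of uncertainty, each index mixes model-choice variance and parametric variance for that process, which is the point of the multimodel formulation.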

  5. Sensitivity of fire behavior simulations to fuel model variations

    Science.gov (United States)

    Lucy A. Salazar

    1985-01-01

    Stylized fuel models, or numerical descriptions of fuel arrays, are used as inputs to fire behavior simulation models. These fuel models are often chosen on the basis of generalized fuel descriptions, which are related to field observations. Site-specific observations of fuels or fire behavior in the field are not readily available or necessary for most fire management...

  6. Detection of C',Cα correlations in proteins using a new time- and sensitivity-optimal experiment

    International Nuclear Information System (INIS)

    Lee, Donghan; Voegeli, Beat; Pervushin, Konstantin

    2005-01-01

    A sensitivity- and time-optimal experiment, called COCAINE (CO-CA In- and aNtiphase spectra with sensitivity Enhancement), is proposed to correlate chemical shifts of 13C' and 13Cα spins in proteins. A comparison of the sensitivity and duration of the experiment with the corresponding theoretical unitary bounds shows that the COCAINE experiment achieves the maximum possible transfer efficiency in the shortest possible time, and in this sense the sequence is optimal. Compared to the standard HSQC, the COCAINE experiment delivers a 2.7-fold gain in sensitivity. This newly proposed experiment can be used for the assignment of backbone resonances in large deuterated proteins, effectively bridging 13C' and 13Cα resonances in adjacent amino acids. Due to the spin-state selection employed, the COCAINE experiment can also be used for efficient measurements of one-bond couplings (e.g. scalar and residual dipolar couplings) in any two-spin system (e.g. the N/H pair in the protein backbone).

  7. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  8. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  9. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    Science.gov (United States)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing level of complexity. We tested four sensitivity analysis methods, Regional Sensitivity Analysis, Method of Morris, a density-based (PAWN) and a variance-based (Sobol) method. The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistics.
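The bootstrap check described above can be sketched on the Method of Morris: compute elementary effects, then resample them to put a confidence interval on each input's mean absolute effect (mu*). The test function and all settings below are illustrative stand-ins, not the study's hydrological models:

```python
import random
import statistics

def elementary_effects(f, dim, r=80, delta=0.1, seed=2):
    """r randomly placed one-at-a-time steps per input on the unit hypercube."""
    rng = random.Random(seed)
    ee = [[] for _ in range(dim)]
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y0 = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            ee[i].append(abs(f(xp) - y0) / delta)
    return ee

def bootstrap_ci(values, nboot=1000, seed=3):
    """Percentile 95% confidence interval on the mean via bootstrap resampling."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choice(values) for _ in values)
        for _ in range(nboot))
    return means[int(0.025 * nboot)], means[int(0.975 * nboot)]

f = lambda x: 5 * x[0] + 2 * x[1] + 0.3 * x[1] * x[2]   # toy model
ee = elementary_effects(f, 3)
mu_star = [statistics.fmean(e) for e in ee]             # Morris mu* per input
lo, hi = bootstrap_ci(ee[1])                            # robustness of input 2's ranking
```

If the bootstrap intervals of two inputs overlap, their sensitivity ranking is not robust at that sample size and r should be increased, which is the convergence criterion the study argues for.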

  10. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  11. The Constructive Marginal of "Moby-Dick": Ishmael and the Developmental Model of Intercultural Sensitivity

    Science.gov (United States)

    Morgan, Jeff

    2011-01-01

    Cultural sensitivity theory is the study of how individuals relate to cultural difference. Using literature to help students prepare for study abroad, instructors could analyze character and trace behavior through a model of cultural sensitivity. Milton J. Bennett has developed such an instrument, The Developmental Model of Intercultural…

  12. The sensitivity of the climate response to the magnitude and location of freshwater forcing: last glacial maximum experiments

    Science.gov (United States)

    Otto-Bliesner, Bette L.; Brady, Esther C.

    2010-01-01

    Proxy records indicate that the locations and magnitudes of freshwater forcing to the Atlantic Ocean basin, as iceberg discharges into the high-latitude North Atlantic, Laurentide meltwater input to the Gulf of Mexico, or meltwater diversion to the North Atlantic via the St. Lawrence River and other eastern outlets, may have influenced the North Atlantic thermohaline circulation and global climate. We have performed Last Glacial Maximum (LGM) simulations with the NCAR Community Climate System Model (CCSM3) in which the magnitude of the freshwater forcing has been varied from 0.1 to 1 Sv and inserted either into the subpolar North Atlantic Ocean or the Gulf of Mexico. In these glacial freshening experiments, the less dense freshwater provides a lid on the ocean water below, suppressing ocean convection and interaction with the atmosphere above and reducing the Atlantic Meridional Overturning Circulation (AMOC). This is the case whether the freshwater is added directly to the area of convection south of Greenland or transported there by the subtropical and subpolar gyres when added to the Gulf of Mexico. The AMOC reduction is less for the smaller freshwater forcings, but is not linear with the size of the freshwater perturbation. The recovery of the AMOC from a "slow" state is ˜200 years for the 0.1 Sv experiment and ˜500 years for the 1 Sv experiment. For glacial climates, with large Northern Hemisphere ice sheets and reduced greenhouse gases, the cold subpolar North Atlantic is primed to respond rapidly and dramatically to freshwater that is either directly dumped into this region or after being advected from the Gulf of Mexico. Greenland temperatures cool by 6-8 °C in all the experiments, with little sensitivity to the magnitude, location or duration of the freshwater forcing, but exhibiting large seasonality. Sea ice is important for explaining the responses. The Northern Hemisphere high latitudes are slow to recover. Antarctica and the Southern Ocean show a…

  13. Azimuthally Sensitive Hanbury Brown–Twiss Interferometry measured with the ALICE Experiment

    CERN Document Server

    Gramling, Johanna

    Bose–Einstein correlations of identical pions emitted in high-energy particle collisions provide information about the size of the source region in space-time. If analyzed via HBT Interferometry in several directions with respect to the reaction plane, the shape of the source can be extracted. Hence, HBT Interferometry provides an excellent tool to probe the characteristics of the quark-gluon plasma possibly created in high-energy heavy-ion collisions. This thesis introduces the main theoretical concepts of particle physics, the quark-gluon plasma and the technique of HBT interferometry. The ALICE experiment at the CERN Large Hadron Collider (LHC) is explained and the first azimuthally integrated results measured in Pb–Pb collisions at √s_NN = 2.76 TeV with ALICE are presented. A detailed two-track resolution study leading to a global pair cut for HBT analyses has been performed, and a framework for the event plane determination has been developed. The results from azimuthally sensitive HBT interferom…

  14. Atmospheric statistical dynamic models. Climate experiments: albedo experiments with a zonal atmospheric model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a possible mechanism to cause the initial global cooling that then initiates a glacial advance, thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts.

  15. Uncovering the influence of social skills and psychosociological factors on pain sensitivity using structural equation modeling.

    Science.gov (United States)

    Tanaka, Yoichi; Nishi, Yuki; Nishi, Yuki; Osumi, Michihiro; Morioka, Shu

    2017-01-01

    Pain is a subjective emotional experience that is influenced by psychosociological factors such as social skills, which are defined as problem-solving abilities in social interactions. This study aimed to reveal the relationships among pain, social skills, and other psychosociological factors by using structural equation modeling. A total of 101 healthy volunteers (41 men and 60 women; mean age: 36.6±12.7 years) participated in this study. To evoke participants' sense of inner pain, we showed them images of painful scenes on a PC screen and asked them to evaluate the pain intensity by using the visual analog scale (VAS). We examined the correlation between social skills and VAS, constructed a hypothetical model based on results from previous studies and the current correlational analysis results, and verified the model's fit using structural equation modeling. We found significant positive correlations between VAS and total social skills values, as well as between VAS and the "start of relationships" subscale. Structural equation modeling revealed that the values for "start of relationships" had a direct effect on VAS values (path coefficient = 0.32). The results indicated that extroverted people are more sensitive to inner pain and tend to get more social support and maintain a better psychological condition.

  16. Adverse social experiences in adolescent rats result in enduring effects on social competence, pain sensitivity and endocannabinoid signaling

    Directory of Open Access Journals (Sweden)

    Peggy Schneider

    2016-10-01

    Full Text Available Social affiliation is essential for many species and gains significant importance during adolescence. Disturbances in social affiliation, in particular social rejection experiences during adolescence, affect an individual’s well-being and are involved in the emergence of psychiatric disorders. The underlying mechanisms are still unknown, partly because of a lack of valid animal models. By using a novel animal model for social peer-rejection, which compromises adolescent rats in their ability to appropriately engage in playful activities, here we report on persistent impairments in social behavior and dysregulations in the endocannabinoid system. From postnatal day (pd) 21 to pd 50, adolescent female Wistar rats were either reared with same-strain partners (control) or within a group of Fischer 344 rats (inadequate social rearing, ISR), previously shown to serve as inadequate play partners for the Wistar strain. Adult ISR animals showed pronounced deficits in social interaction, social memory, processing of socially transmitted information, and decreased pain sensitivity. Molecular analysis revealed increased CB1 receptor protein levels and CP55,940-stimulated [35S]GTPγS binding activity specifically in the amygdala and thalamus in previously peer-rejected rats. Along with these changes, increased levels of the endocannabinoid anandamide and a corresponding decrease of its degrading enzyme fatty acid amide hydrolase were seen in the amygdala. Our data indicate lasting consequences in social behavior and pain sensitivity following peer-rejection in adolescent female rats. These behavioral impairments are accompanied by persistent alterations in CB1 receptor signaling. Finally, we provide a novel translational approach to characterize neurobiological processes underlying social peer-rejection in adolescence.

  17. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    Science.gov (United States)

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  18. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening method with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. Structure and results of the sensitivity analysis of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model, as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
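
    The Morris screening used above can be sketched as follows. This is a simplified radial one-at-a-time variant computing the mean absolute elementary effect (μ*); the two-parameter stand-in model (a "growth rate" and a "hunting radius" term) is a hypothetical placeholder, not the PPP or NFM model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(p):
    # Hypothetical stand-in: output scales linearly with a growth rate and
    # quadratically with a hunting radius (assumption for illustration).
    growth_rate, hunt_radius = p
    return 10.0 * growth_rate + hunt_radius ** 2

def morris_mu_star(model, k, r=20, delta=0.1):
    """Mean absolute elementary effect (mu*) per parameter over r base points."""
    effects = np.zeros((r, k))
    for i in range(r):
        base = rng.uniform(0.0, 1.0 - delta, k)   # random point in unit cube
        y0 = model(base)
        for j in range(k):
            pert = base.copy()
            pert[j] += delta                       # one-at-a-time step
            effects[i, j] = (model(pert) - y0) / delta
    return np.abs(effects).mean(axis=0)            # mu*: screening importance

mu_star = morris_mu_star(model, k=2)   # mu_star[0] is exactly 10 for this model
```

    Parameters with small μ* can be fixed at nominal values, which is exactly the cheap screening role the abstract describes.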

  19. An approach to measure parameter sensitivity in watershed hydrologic modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — Abstract Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...

  20. Combining Two Methods of Global Sensitivity Analysis to Investigate MRSA Nasal Carriage Model.

    Science.gov (United States)

    Jarrett, Angela M; Cogan, N G; Hussaini, M Y

    2017-10-01

    We apply two different sensitivity techniques to a model of bacterial colonization of the anterior nares to better understand the dynamics of Staphylococcus aureus nasal carriage. Specifically, we use partial rank correlation coefficients to investigate sensitivity as a function of time and identify a reduced model with fewer than half of the parameters of the full model. The reduced model is used for the calculation of Sobol' indices to identify interacting parameters by their additional effects indices. Additionally, we found that the model captures an interesting characteristic of the biological phenomenon related to the initial population size of the infection; only two parameters had any significant additional effects, and these parameters have biological evidence suggesting they are connected but not yet completely understood. Sensitivity is often applied to elucidate model robustness, but we show that combining sensitivity measures can lead to synergistic insight into both model and biological structures.
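
    The partial rank correlation coefficient (PRCC) step can be sketched as follows: rank-transform inputs and output, regress the other parameters out of both, then correlate the residuals. The three-parameter linear toy model is an assumption for illustration, not the nasal-carriage model itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def ranks(a):
    # Rank transform (ties are not expected for continuous samples).
    return np.argsort(np.argsort(a)).astype(float)

def prcc(X, y):
    """Partial rank correlation of each column of X with y."""
    R = np.column_stack([ranks(c) for c in X.T])
    ry = ranks(y)
    out = []
    for j in range(X.shape[1]):
        # Regress the other (ranked) parameters out of parameter j and y.
        A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

n = 500
X = rng.uniform(0.0, 1.0, (n, 3))
# Toy response: strong positive, strong negative, and negligible parameter.
y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0.0, 0.1, n)
coeffs = prcc(X, y)   # large |coeffs| flag influential parameters
```

    Recomputing the coefficients at successive output times, as in the study, turns this into sensitivity as a function of time.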

  1. Sensitivity of the Gravity Recovery and Climate Experiment (GRACE) to the complexity of aquifer systems for monitoring of groundwater

    Science.gov (United States)

    Katpatal, Yashwant B.; Rishma, C.; Singh, Chandan K.

    2017-11-01

    The Gravity Recovery and Climate Experiment (GRACE) satellite mission is aimed at assessment of groundwater storage under different terrestrial conditions. The main objective of the presented study is to highlight the significance of aquifer complexity for improving the performance of GRACE in monitoring groundwater. Vidarbha region of Maharashtra, central India, was selected as the study area for analysis, since the region comprises a simple aquifer system in the western region and a complex aquifer system in the eastern region. Groundwater-level-trend analyses of the different aquifer systems and spatial and temporal variation of the terrestrial water storage anomaly were studied to understand the groundwater scenario. The field application of GRACE involves selecting four pixels from the GRACE output with different aquifer systems, where each GRACE pixel encompasses 50-90 monitoring wells. Groundwater storage anomalies (GWSA) are derived for each pixel for the period 2002 to 2015 using the Release 05 (RL05) monthly GRACE gravity models and the Global Land Data Assimilation System (GLDAS) land-surface models (GWSA_GRACE), as well as the actual field data (GWSA_Actual). Correlation analysis between GWSA_GRACE and GWSA_Actual was performed using linear regression. The Pearson and Spearman methods show that the performance of GRACE is good in the region with simple aquifers; however, performance is poorer in the region with multiple aquifer systems. The study highlights the importance of incorporating the sensitivity of GRACE in estimation of groundwater storage in complex aquifer systems in future studies.
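
    The Pearson and Spearman comparison between a GRACE-derived and a well-derived anomaly series can be sketched on synthetic data; the two monthly series below are placeholders standing in for the GWSA values, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(120)                       # ten years of monthly anomalies
gwsa_actual = 5.0 * np.sin(2 * np.pi * months / 12) + 0.02 * months
gwsa_grace = gwsa_actual + rng.normal(0.0, 1.0, months.size)  # noisy retrieval

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    # Spearman = Pearson correlation of the rank-transformed series.
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

r_p = pearson(gwsa_actual, gwsa_grace)
r_s = spearman(gwsa_actual, gwsa_grace)
```

    For a complex multi-aquifer pixel the well-derived series decorrelates from the GRACE signal, and both coefficients drop, which is the pattern the study reports.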

  2. Sensitivity of the Gravity Recovery and Climate Experiment (GRACE) to the complexity of aquifer systems for monitoring of groundwater

    Science.gov (United States)

    Katpatal, Yashwant B.; Rishma, C.; Singh, Chandan K.

    2018-05-01

    The Gravity Recovery and Climate Experiment (GRACE) satellite mission is aimed at assessment of groundwater storage under different terrestrial conditions. The main objective of the presented study is to highlight the significance of aquifer complexity for improving the performance of GRACE in monitoring groundwater. Vidarbha region of Maharashtra, central India, was selected as the study area for analysis, since the region comprises a simple aquifer system in the western region and a complex aquifer system in the eastern region. Groundwater-level-trend analyses of the different aquifer systems and spatial and temporal variation of the terrestrial water storage anomaly were studied to understand the groundwater scenario. The field application of GRACE involves selecting four pixels from the GRACE output with different aquifer systems, where each GRACE pixel encompasses 50-90 monitoring wells. Groundwater storage anomalies (GWSA) are derived for each pixel for the period 2002 to 2015 using the Release 05 (RL05) monthly GRACE gravity models and the Global Land Data Assimilation System (GLDAS) land-surface models (GWSA_GRACE), as well as the actual field data (GWSA_Actual). Correlation analysis between GWSA_GRACE and GWSA_Actual was performed using linear regression. The Pearson and Spearman methods show that the performance of GRACE is good in the region with simple aquifers; however, performance is poorer in the region with multiple aquifer systems. The study highlights the importance of incorporating the sensitivity of GRACE in estimation of groundwater storage in complex aquifer systems in future studies.

  3. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
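
    The Latin hypercube Monte Carlo step can be sketched as follows; the flux expression and parameter ranges are illustrative assumptions for a generic diffusion-type radon flux, not the Regulatory Guide 3.64 model or the site's actual inventory.

```python
import numpy as np

rng = np.random.default_rng(4)

def lhs(n, k):
    """Latin hypercube sample: one point per stratum in each of k dimensions."""
    return np.column_stack([
        rng.permutation((np.arange(n) + rng.random(n)) / n) for _ in range(k)
    ])

n = 1000
u = lhs(n, 3)
emanation = 0.1 + 0.3 * u[:, 0]           # emanation coefficient [-] (assumed range)
diffusion = 10.0 ** (-7 + 2 * u[:, 1])    # effective diffusion coeff [m2/s] (assumed)
inventory = 100.0 + 900.0 * u[:, 2]       # Ra-226 inventory [Bq/kg] (assumed)

lam = 2.1e-6                              # Rn-222 decay constant [1/s]
flux = emanation * inventory * np.sqrt(lam * diffusion)   # toy flux density

mean_flux = flux.mean()
p5, p95 = np.percentile(flux, [5, 95])    # output uncertainty band
```

    Stratifying each input gives more stable percentile estimates than plain random sampling at the same number of model runs, which is why performance assessments favor it.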

  4. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott; Vefa Yucel; Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.

  5. Hindcasting to measure ice sheet model sensitivity to initial states

    Directory of Open Access Journals (Sweden)

    A. Aschwanden

    2013-07-01

    Full Text Available Validation is a critical component of model development, yet notoriously challenging in ice sheet modeling. Here we evaluate how an ice sheet system model responds to a given forcing. We show that hindcasting, i.e. forcing a model with known or closely estimated inputs for past events to see how well the output matches observations, is a viable method of assessing model performance. By simulating the recent past of Greenland, and comparing to observations of ice thickness, ice discharge, surface speeds, mass loss and surface elevation changes for validation, we find that the short-term model response is strongly influenced by the initial state. We show that the thermal and dynamical states (i.e. the distribution of internal energy and momentum) can be misrepresented despite a good agreement with some observations, stressing the importance of using multiple observations. In particular we identify rates of change of spatially dense observations as preferred validation metrics. Hindcasting enables a qualitative assessment of model performance relative to observed rates of change. It thereby reduces the number of admissible initial states more rigorously than validation efforts that do not take advantage of observed rates of change.

  6. Explicit modelling of SOA formation from α-pinene photooxidation: sensitivity to vapour pressure estimation

    Directory of Open Access Journals (Sweden)

    R. Valorso

    2011-07-01

    Full Text Available The sensitivity of the formation of secondary organic aerosol (SOA) to the estimated vapour pressures of the condensable oxidation products is explored. A highly detailed reaction scheme was generated for α-pinene photooxidation using the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A). Vapour pressures (Pvap) were estimated with three commonly used structure-activity relationships. The values of Pvap were compared for the set of secondary species generated by GECKO-A to describe α-pinene oxidation. Discrepancies in the predicted vapour pressures were found to increase with the number of functional groups borne by the species. For semi-volatile organic compounds (i.e. organic species of interest for SOA formation), differences in the predicted Pvap range from a factor of 5 to 200 on average. The simulated SOA concentrations were compared to SOA observations in the Caltech chamber during three experiments performed under a range of NOx conditions. While the model captures the qualitative features of SOA formation for the chamber experiments, SOA concentrations are systematically overestimated. For the conditions simulated, the modelled SOA speciation appears to be rather insensitive to the Pvap estimation method.

  7. Data on the experiments of temperature-sensitive hydrogels for pH-sensitive drug release and the characterizations of materials

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2018-04-01

    Full Text Available This article contains experimental data on the strain sweep, the calibration curve of the drug (doxorubicin, DOX) and the characterizations of materials. Data included are related to the research article “Injectable and body temperature sensitive hydrogels based on chitosan and hyaluronic acid for pH sensitive drug release” (Zhang et al., 2017) [1]. The strain sweep experiments were performed on a rotational rheometer. The calibration curves were obtained by analyzing the absorbance of DOX solutions on a UV–vis–NIR spectrometer. Molecular weights (Mw) of the hyaluronic acid (HA) and chitosan (CS) were determined by gel permeation chromatography (GPC). The deacetylation degree of CS was measured by acid–base titration.

  8. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer H

    2012-01-01

    …visualization. Specifically we focus on the generation of summary maps of a nonlinear classifier that reveal how the classifier works in different parts of the input domain. Each of the maps includes sign information, unlike earlier related methods. The sign information allows the researcher to assess in which direction the individual locations influence the classification. We illustrate the visualization procedure on real data from a simple functional magnetic resonance imaging experiment.

  9. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    Energy Technology Data Exchange (ETDEWEB)

    Cassardo, C. [Univ. di Turin, Alessandria (Italy). Dipt. di Scienze e Tecnologie Avanzate]; Carena, E.; Longhetto, A. [Turin Univ. (Italy). Dipt. di Fisica Generale 'Amedeo Avogadro']

    1998-03-01

    The Land Surface Process Model (LSPM) has been improved with respect to the first version of 1994. The modifications have involved the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed, in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that the new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments were the initial soil moisture content and the rainfall intensity. In each experiment, the model has been forced with the observations carried out at the synoptic station of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) have been used as initial and boundary conditions. The results of the simulation show that LSPM can reproduce the energy, heat and water budgets well, together with their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type.

  10. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying the relative influence of parameters, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models, useful for making rapid first-order calculations of system behavior, is presented.
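
    Parameter sensitivity analysis of the kind surveyed above can be sketched with normalized sensitivity coefficients S_p = (p/N)·∂N/∂p for a simple logistic population model, computed by central finite differences. The model and parameter values are hypothetical illustrations, not those of the surveyed work.

```python
import numpy as np

def population(t, r, K, N0):
    """Closed-form logistic growth N(t) with rate r, capacity K, initial size N0."""
    return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

def normalized_sensitivities(t, params, h=1e-4):
    """S_p = (p / N) * dN/dp for each parameter, via central finite differences."""
    base = np.array(params, dtype=float)
    N = population(t, *base)
    S = {}
    for i, name in enumerate(("r", "K", "N0")):
        dp = h * base[i]
        up, dn = base.copy(), base.copy()
        up[i] += dp
        dn[i] -= dp
        S[name] = base[i] / N * (population(t, *up) - population(t, *dn)) / (2 * dp)
    return S

# During the growth phase the output is most sensitive to the growth rate r.
S = normalized_sensitivities(t=5.0, params=(0.5, 100.0, 10.0))
```

    Because the coefficients are dimensionless, they can be compared directly across parameters, which is what makes them useful for allocating data-collection effort to the most influential inputs.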

  11. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  12. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different than evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  13. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    effects and the one-at-a-time approach (O.A.T.); and (ii), we applied Sobol's global sensitivity analysis method, which is based on variance decomposition. Results illustrate that ψm (maximum sorption rate of mobile colloids), kdmc (solute desorption rate from mobile colloids), and Ks (saturated hydraulic conductivity) are the most sensitive parameters with respect to the contaminant travel time. The analyses indicate that this new module is able to simulate colloid-facilitated contaminant transport. However, validations under laboratory conditions are needed to confirm the occurrence of the colloid transport phenomenon and to understand model predictions under non-saturated soil conditions. Future work will involve monitoring of the colloidal transport phenomenon through soil column experiments. The anticipated outcome will provide valuable information on the dominant mechanisms responsible for colloid transport and colloid-facilitated contaminant transport, as well as the impacts of colloid detachment/deposition processes on soil hydraulic properties. References: Šimůnek, J., C. He, L. Pang, & S. A. Bradford, Colloid-Facilitated Solute Transport in Variably Saturated Porous Media: Numerical Model and Experimental Verification, Vadose Zone Journal, 2006, 5, 1035-1047; Šimůnek, J., M. Šejna, & M. Th. van Genuchten, The C-Ride Module for HYDRUS (2D/3D) Simulating Two-Dimensional Colloid-Facilitated Solute Transport in Variably-Saturated Porous Media, Version 1.0, PC Progress, Prague, Czech Republic, 45 pp., 2012.
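The one-at-a-time approach mentioned in this record amounts to perturbing each parameter around a baseline while holding the others fixed. Below is a minimal generic sketch; the travel-time function, its coefficients, and the baseline values are invented for illustration and are not the HYDRUS/C-Ride module.

```python
# One-at-a-time (OAT) sensitivity: perturb each parameter by a fixed fraction
# around a baseline, holding the others fixed, and record the normalized
# change in a scalar model output.

def travel_time(p):
    """Hypothetical contaminant travel time (h); the functional form and
    parameter roles are invented for illustration only."""
    return 100.0 / p["Ks"] + 5.0 * p["psi_m"] - 2.0 * p["kdmc"]

def oat_sensitivity(model, baseline, delta=0.10):
    """Relative output change per relative input change, per parameter."""
    y0 = model(baseline)
    sens = {}
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] *= (1.0 + delta)   # +10% perturbation of one parameter
        sens[name] = (model(perturbed) - y0) / (y0 * delta)
    return sens

baseline = {"Ks": 2.0, "psi_m": 0.5, "kdmc": 0.3}
sens = oat_sensitivity(travel_time, baseline)
ranked = sorted(sens, key=lambda k: abs(sens[k]), reverse=True)  # most sensitive first
```

OAT is cheap (one extra run per parameter) but, unlike Sobol's method, cannot detect parameter interactions, which is why the record pairs the two.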

  14. Process verification of a hydrological model using a temporal parameter sensitivity analysis

    OpenAIRE

    M. Pfannerstill; B. Guse; D. Reusser; N. Fohrer

    2015-01-01

    To ensure reliable results of hydrological models, it is essential that the models reproduce the hydrological process dynamics adequately. Information about simulated process dynamics is provided by looking at the temporal sensitivities of the corresponding model parameters. For this, the temporal dynamics of parameter sensitivity are analysed to identify the simulated hydrological processes. Based on these analyses it can be verified if the simulated hydrological processes ...

  15. Combined calibration and sensitivity analysis for a water quality model of the Biebrza River, Poland

    NARCIS (Netherlands)

    Perk, van der M.; Bierkens, M.F.P.

    1995-01-01

    A study was performed to quantify the error in results of a water quality model of the Biebrza River, Poland, due to uncertainties in calibrated model parameters. The procedure used in this study combines calibration and sensitivity analysis. Finally, the model was validated to test the model

  16. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and for execution of the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
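The Morris screening stage described here can be sketched in a few dozen lines. The following is a minimal, generic elementary-effects implementation on a toy three-parameter function; it is not the GastroPlus/AutoIt/MATLAB framework itself, and the test function and bounds are illustrative assumptions.

```python
import random

def morris_mu_star(model, bounds, num_traj=40, levels=4, seed=1):
    """Minimal Morris elementary-effects screening.

    bounds: list of (low, high) per parameter. Returns mu* per parameter,
    the mean absolute elementary effect, used to rank parameter influence.
    """
    rng = random.Random(seed)
    k = len(bounds)
    delta = levels / (2.0 * (levels - 1))                    # step on the unit grid
    starts = [i / (levels - 1) for i in range(levels // 2)]  # levels that admit +delta
    mu_star = [0.0] * k

    def denorm(u):
        return [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]

    for _ in range(num_traj):
        x = [rng.choice(starts) for _ in range(k)]
        y = model(denorm(x))
        for i in rng.sample(range(k), k):   # move each factor once, in random order
            x_next = list(x)
            x_next[i] = x[i] + delta
            y_next = model(denorm(x_next))
            mu_star[i] += abs((y_next - y) / delta)   # elementary effect of factor i
            x, y = x_next, y_next
    return [m / num_traj for m in mu_star]

# Toy model: parameter 0 dominates, parameter 1 is mildly nonlinear, 2 is inert.
f = lambda p: 10.0 * p[0] + 2.0 * p[1] ** 2 + 0.0 * p[2]
mu = morris_mu_star(f, [(0.0, 1.0)] * 3)
```

Parameters with negligible mu* (like the third one here) would be fixed at nominal values before the more expensive Sobol stage, which is exactly the cost reduction the record describes.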

  17. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
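Of the methods surveyed, Latin hypercube sampling with rank correlation is the most compact to sketch. The snippet below pairs a minimal LHS sampler with a plain Spearman rank correlation (full PRCC additionally partials out the other parameters, which is omitted here); the toy output function is an illustrative stand-in, not the cholera or schistosomiasis model.

```python
import random

def latin_hypercube(n, k, seed=0):
    """n samples in k dimensions on [0, 1): exactly one sample falls in each
    of the n equal-width strata of every dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        column = [(i + rng.random()) / n for i in range(n)]  # one draw per stratum
        rng.shuffle(column)
        columns.append(column)
    return [list(row) for row in zip(*columns)]

def spearman(xs, ys):
    """Plain Spearman rank correlation (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2.0
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)  # rank variance is identical for rx and ry
    return cov / var

# Toy model output: driven by parameter 0, independent of parameter 1.
samples = latin_hypercube(200, 2)
outputs = [3.0 * s[0] + 0.01 * random.Random(i).random() for i, s in enumerate(samples)]
r0 = spearman([s[0] for s in samples], outputs)
r1 = spearman([s[1] for s in samples], outputs)
```

A coefficient near 1 (here r0) flags a monotonically influential parameter; one near 0 (here r1) flags a negligible one, which matches how LHS-PRCC results are typically read.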

  18. Evaluating the influence of selected parameters on sensitivity of a numerical model of solidification

    Directory of Open Access Journals (Sweden)

    N. Sczygiol

    2007-12-01

    Full Text Available The paper evaluates the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built with the use of the enthalpy formulation for solidification and an intermediate solid-fraction growth model. The model sensitivity is studied with the Morris method, one of the global sensitivity methods. A characteristic feature of the global methods is the need to conduct a series of simulations of the investigated model with appropriately chosen model parameters. The advantage of the Morris method is that it reduces the number of necessary simulations. Results of the presented work answer the question of how generic sensitivity analysis results are, in particular whether they depend only on model characteristics and not on factors such as the density of the finite element mesh or the shape of the region. The results allow the conclusion that sensitivity analysis with the Morris method depends only on the characteristics of the investigated model.

  19. Evaluating Weather Research and Forecasting Model Sensitivity to Land and Soil Conditions Representative of Karst Landscapes

    Science.gov (United States)

    Johnson, Christopher M.; Fan, Xingang; Mahmood, Rezaul; Groves, Chris; Polk, Jason S.; Yan, Jun

    2017-10-01

    Due to their particular physiographic, geomorphic, soil cover, and complex surface-subsurface hydrologic conditions, karst regions produce distinct land-atmosphere interactions. It has been found that floods and droughts over karst regions can be more pronounced than those in non-karst regions following a given rainfall event. Five convective weather events are simulated using the Weather Research and Forecasting model to explore the potential impacts of land-surface conditions on weather simulations over karst regions. Since no existing weather or climate model has the ability to represent karst landscapes, simulation experiments in this exploratory study consist of a control (default land-cover/soil types) and three land-surface conditions, including barren ground, forest, and sandy soils over the karst areas, which mimic certain karst characteristics. Results from sensitivity experiments are compared with the control simulation, as well as with the National Centers for Environmental Prediction multi-sensor precipitation analysis Stage-IV data, and near-surface atmospheric observations. Mesoscale features of surface energy partition, surface water and energy exchange, the resulting surface-air temperature and humidity, and low-level instability and convective energy are analyzed to investigate the potential land-surface impact on weather over karst regions. We conclude that: (1) barren ground used over karst regions has a pronounced effect on the overall simulation of precipitation. Barren ground provides the overall lowest root-mean-square errors and bias scores in precipitation over the peak-rain periods. Contingency table-based equitable threat and frequency bias scores suggest that the barren and forest experiments are more successful in simulating light to moderate rainfall. Variables dependent on local surface conditions show stronger contrasts between karst and non-karst regions than variables dominated by large-scale synoptic systems; (2) significant

  20. Evaluating Weather Research and Forecasting Model Sensitivity to Land and Soil Conditions Representative of Karst Landscapes

    Science.gov (United States)

    Johnson, Christopher M.; Fan, Xingang; Mahmood, Rezaul; Groves, Chris; Polk, Jason S.; Yan, Jun

    2018-03-01

    Due to their particular physiographic, geomorphic, soil cover, and complex surface-subsurface hydrologic conditions, karst regions produce distinct land-atmosphere interactions. It has been found that floods and droughts over karst regions can be more pronounced than those in non-karst regions following a given rainfall event. Five convective weather events are simulated using the Weather Research and Forecasting model to explore the potential impacts of land-surface conditions on weather simulations over karst regions. Since no existing weather or climate model has the ability to represent karst landscapes, simulation experiments in this exploratory study consist of a control (default land-cover/soil types) and three land-surface conditions, including barren ground, forest, and sandy soils over the karst areas, which mimic certain karst characteristics. Results from sensitivity experiments are compared with the control simulation, as well as with the National Centers for Environmental Prediction multi-sensor precipitation analysis Stage-IV data, and near-surface atmospheric observations. Mesoscale features of surface energy partition, surface water and energy exchange, the resulting surface-air temperature and humidity, and low-level instability and convective energy are analyzed to investigate the potential land-surface impact on weather over karst regions. We conclude that: (1) barren ground used over karst regions has a pronounced effect on the overall simulation of precipitation. Barren ground provides the overall lowest root-mean-square errors and bias scores in precipitation over the peak-rain periods. Contingency table-based equitable threat and frequency bias scores suggest that the barren and forest experiments are more successful in simulating light to moderate rainfall. Variables dependent on local surface conditions show stronger contrasts between karst and non-karst regions than variables dominated by large-scale synoptic systems; (2) significant

  1. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: Experiments and computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Bøtter-Jensen, L.; Agersnap Larsen, N.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes...

  2. Experience modulates both aromatase activity and the sensitivity of agonistic behaviour to testosterone in black-headed gulls

    NARCIS (Netherlands)

    Ros, Albert F. H.; Franco, Aldina M. A.; Groothuis, Ton G. G.

    2009-01-01

    In young black-headed gulls (Larus ridibundus), exposure to testosterone increases the sensitivity of agonistic behaviour to a subsequent exposure to this hormone. The aim of this paper is twofold: to analyze whether social experience, gained during testosterone exposure, mediates this increase in

  3. Reflexive Positioning in a Politically Sensitive Situation: Dealing with the Threats of Researching the West Bank Settler Experience

    Science.gov (United States)

    Possick, Chaya

    2009-01-01

    For the past 7 years, the author has conducted qualitative research projects revolving around the experiences of West Bank settlers. The political situation in Israel in general, and the West Bank in particular, has undergone rapid and dramatic political, military, and social changes during this period. In highly politically sensitive situations…

  4. Healthy volunteers can be phenotyped using cutaneous sensitization pain models

    DEFF Research Database (Denmark)

    Werner, Mads U; Petersen, Karin; Rowbotham, Michael C

    2013-01-01

    Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repe...

  5. Sensitivity Analysis in Structural Equation Models: Cases and Their Influence

    Science.gov (United States)

    Pek, Jolynn; MacCallum, Robert C.

    2011-01-01

    The detection of outliers and influential observations is routine practice in linear regression. Despite ongoing extensions and development of case diagnostics in structural equation models (SEM), their application has received limited attention and understanding in practice. The use of case diagnostics informs analysts of the uncertainty of model…

  6. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    Science.gov (United States)

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  7. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    Vicksburg, MS: US Army Engineer Research and Development Center. An electronic copy of this CHETN is available from http://chl.erdc.usace.army.mil/chetn... Nourishment Module, Chapter 8. In Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area (LCA) Comprehensive

  8. A duopoly model with heterogeneous congestion-sensitive customers

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Timmer, Judith B.

    2003-01-01

    This paper analyzes a model with multiple firms (providers), and two classes of customers. These customers classes are characterized by their attitude towards `congestion' (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the firms,

  9. A duopoly model with heterogeneous congestion-sensitive customers.

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Timmer, J.

    2007-01-01

    Abstract This paper analyzes a model with two firms (providers), and two classes of customers. These customers classes are characterized by their attitude towards ‘congestion’ (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the

  10. A duopoly model with heterogeneous congestion-sensitive customers

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Timmer, Judith B.

    This paper analyzes a model with two firms (providers), and two classes of customers. These customers classes are characterized by their attitude towards ‘congestion’ (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the firms, and

  11. Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling

    Science.gov (United States)

    2008-01-01

    Morgan Kaufmann, 1988. [24] J. Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000. [25] J. Piaget, Piaget’s theory ... Gopnik, C. Glymour, D. M. Sobel, L. E. Schulz, T. Kushnir, D. Danks, A theory of causal learning in children: Causal maps and Bayes nets, Psychological

  12. Modelling flow through unsaturated zones: Sensitivity to unsaturated ...

    Indian Academy of Sciences (India)


    water flow through unsaturated zones and study the effect of unsaturated soil parameters on water movement during different processes such as gravity drainage and infiltration. 2. Modelling Richards equation for vertical unsaturated flow. For one-dimensional vertical flow in unsaturated soil, the pressure-head based ...
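The pressure-head-based formulation referred to in the fragment above is, in its standard one-dimensional vertical form (a textbook statement, not quoted from the paper itself):

```latex
C(h)\,\frac{\partial h}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right]
```

where h is the pressure head, C(h) = dθ/dh is the specific moisture capacity, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate taken positive upward; the "+1" term accounts for gravity, which drives the drainage and infiltration processes the abstract mentions.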

  13. Computer models experiences in radiological safety

    International Nuclear Information System (INIS)

    Ferreri, J.C.; Grandi, G.M.; Ventura, M.A.; Doval, A.S.

    1989-01-01

    A review in the formulation and use of numerical methods in fluid dynamics and heat and mass transfer in nuclear safety is presented. A wide range of applications is covered, namely: nuclear reactor's thermohydraulics, natural circulation in closed loops, experiments for the validation of numerical methods, thermohydraulics of fractured-porous media and radionuclide migration. The results of the experience accumulated is a research line dealing at the present with moving grids in computational fluid dynamics and the use of artificial intelligence techniques. As a consequence some recent experience in the development of expert systems and the considerations that should be taken into account for its use in radiological safety is also reviewed. (author)

  14. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  15. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  16. Sensitivity of mineral dissolution rates to physical weathering : A modeling approach

    Science.gov (United States)

    Opolot, Emmanuel; Finke, Peter

    2015-04-01

    There is continued interest in accurate estimation of natural weathering rates owing to their importance in soil formation, nutrient cycling, estimation of acidification in soils, rivers and lakes, and in understanding the role of silicate weathering in carbon sequestration. At the same time, a challenge exists in reconciling discrepancies between laboratory-determined weathering rates and natural weathering rates. Studies have consistently reported laboratory rates to be orders of magnitude faster than natural weathering rates (White, 2009). These discrepancies have mainly been attributed to (i) changes in fluid composition, (ii) changes in primary mineral surfaces (reactive sites) and (iii) the formation of secondary phases, all of which could slow natural weathering rates. It is indeed difficult to measure in laboratory experiments the interactive effect of the intrinsic factors (e.g. mineral composition, surface area) and extrinsic factors (e.g. solution composition, climate, bioturbation) occurring in the natural setting. A modeling approach could be useful in this case. A number of geochemical models (e.g. PHREEQC, EQ3/EQ6) already exist and are capable of estimating mineral dissolution/precipitation rates as a function of time and mineral mass. However, most of these approaches assume a constant surface area in a given volume of water (White, 2009). This assumption may become invalid, especially at long time scales. One of the widely used weathering models is the PROFILE model (Sverdrup and Warfvinge, 1993). The PROFILE model takes into account the mineral composition, solution composition and surface area in determining dissolution/precipitation rates. However, there is little coupling with other processes (e.g. physical weathering, clay migration, bioturbation) which could directly or indirectly influence dissolution/precipitation rates. We propose in this study a coupling between chemical weathering mechanism (defined as a function of reactive area

  17. Sensitivity of a global ice-ocean model to the Bering Strait throughflow

    Energy Technology Data Exchange (ETDEWEB)

    Goosse, H. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium). Inst. d'Astronomie et de Geophysique G. Lemaitre; Campin, J.M. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium). Inst. d'Astronomie et de Geophysique G. Lemaitre; Fichefet, T. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium). Inst. d'Astronomie et de Geophysique G. Lemaitre; Deleersnijder, E. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium). Inst. d'Astronomie et de Geophysique G. Lemaitre

    1997-06-01

    To understand the influence of the Bering Strait on the World Ocean's circulation, a model sensitivity analysis is conducted. The numerical experiments are carried out with a global, coupled ice-ocean model. The water transport through the Bering Strait is parametrized according to the geostrophic control theory. The model is driven by surface fluxes derived from bulk formulae assuming a prescribed atmospheric seasonal cycle. In addition, a weak restoring to observed surface salinities is applied to compensate for the global imbalance of the imposed surface freshwater fluxes. The freshwater flux from the North Pacific to the North Atlantic associated with the Bering Strait throughflow seems to be an important element in the freshwater budget of the Greenland and Norwegian seas and of the Atlantic. This flux induces a freshening of the North Atlantic surface waters, which reduces the convective activity and leads to a noticeable (6%) weakening of the thermohaline conveyor belt. It is argued that the contrasting results obtained by Reason and Power are due to the type of surface boundary conditions they used. (orig.). With 8 figs.

  18. Advanced postbuckling and imperfection sensitivity of the elastic-plastic Shanley-Hutchinson model column

    DEFF Research Database (Denmark)

    Christensen, Claus Dencker; Byskov, Esben

    2008-01-01

    The postbuckling behavior and imperfection sensitivity of the Shanley-Hutchinson plastic model column introduced by Hutchinson in 1973 are examined. The study covers the initial, buckled state and the advanced postbuckling regime of the geometrically perfect realization as well as its sensitivity...

  19. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    Science.gov (United States)

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout.

  20. Enhancing collaborative intrusion detection networks against insider attacks using supervised intrusion sensitivity-based trust management model

    DEFF Research Database (Denmark)

    Li, Wenjuan; Meng, Weizhi; Kwok, Lam-For

    2017-01-01

    To defend against complex attacks, collaborative intrusion detection networks (CIDNs) have been developed to enhance detection accuracy; they enable an IDS to collect information and learn from the experience of others. However, this kind of network is vulnerable to malicious nodes which ... are utilized by insider attacks (e.g., betrayal attacks). In our previous research, we developed a notion of intrusion sensitivity and identified that it can help improve the detection of insider attacks, whereas it is still a challenge for these nodes to automatically assign the values. In this article, we ... of intrusion sensitivity based on expert knowledge. In the evaluation, we compare the performance of three different supervised classifiers in assigning sensitivity values and investigate our trust model under different attack scenarios and in a real wireless sensor network. Experimental results indicate

  1. Modelling of the simple pendulum Experiment

    Directory of Open Access Journals (Sweden)

    Palka L.

    2016-01-01

    Full Text Available This work focuses on the design of the simulation embedded in the remote experiment "Simple pendulum", built on the Internet School Experimental System (ISES). The platform is intended for broad educational use at schools and universities, providing a suitable measuring environment for students using conventional computing resources.

  2. Some Experiences with Numerical Modelling of Overflows

    DEFF Research Database (Denmark)

    Larsen, Torben; Nielsen, L.; Jensen, B.

    2007-01-01

    across the edge of the overflow. To ensure critical flow across the edge, the upstream flow must be subcritical whereas the downstream flow is either supercritical or a free jet. Experimentally overflows are well studied. Based on laboratory experiments and Froude number scaling, numerous accurate...

  3. Sleep fragmentation exacerbates mechanical hypersensitivity and alters subsequent sleep-wake behavior in a mouse model of musculoskeletal sensitization.

    Science.gov (United States)

    Sutton, Blair C; Opp, Mark R

    2014-03-01

    Sleep deprivation, or sleep disruption, enhances pain in human subjects. Chronic musculoskeletal pain is prevalent in our society, and constitutes a tremendous public health burden. Although preclinical models of neuropathic and inflammatory pain demonstrate effects on sleep, few studies focus on musculoskeletal pain. We reported elsewhere in this issue of SLEEP that musculoskeletal sensitization alters sleep of mice. In this study we hypothesize that sleep fragmentation during the development of musculoskeletal sensitization will exacerbate subsequent pain responses and alter sleep-wake behavior of mice. This is a preclinical study using C57BL/6J mice to determine the effect on behavioral outcomes of sleep fragmentation combined with musculoskeletal sensitization. Musculoskeletal sensitization, a model of chronic muscle pain, was induced using two unilateral injections of acidified saline (pH 4.0) into the gastrocnemius muscle, spaced 5 days apart. Musculoskeletal sensitization manifests as mechanical hypersensitivity determined by von Frey filament testing at the hindpaws. Sleep fragmentation took place during the consecutive 12-h light periods of the 5 days between intramuscular injections. Electroencephalogram (EEG) and body temperature were recorded from some mice at baseline and for 3 weeks after musculoskeletal sensitization. Mechanical hypersensitivity was determined at preinjection baseline and on days 1, 3, 7, 14, and 21 after sensitization. Two additional experiments were conducted to determine the independent effects of sleep fragmentation or musculoskeletal sensitization on mechanical hypersensitivity. Five days of sleep fragmentation alone did not induce mechanical hypersensitivity, whereas sleep fragmentation combined with musculoskeletal sensitization resulted in prolonged and exacerbated mechanical hypersensitivity. 
Sleep fragmentation combined with musculoskeletal sensitization had an effect on subsequent sleep of mice as demonstrated by increased

  4. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Science.gov (United States)

    Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda

    2016-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102

  5. Mathematical Modeling: Are Prior Experiences Important?

    Science.gov (United States)

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

  6. Towards Generic Models of Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Shaker, Mohammad; Abou-Zleikha, Mohamed

    2015-01-01

    Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. Most of the models used are context...

  7. Design of experiments an introduction based on linear models

    CERN Document Server

    Morris, Max D

    2011-01-01

    Introduction (Example: rainfall and grassland; Basic elements of an experiment; Experiments and experiment-like studies; Models and data analysis); Linear Statistical Models (Linear vector spaces; Basic linear model; The hat matrix, least-squares estimates, and design information matrix; The partitioned linear model; The reduced normal equations; Linear and quadratic forms; Estimation and information; Hypothesis testing and information; Blocking and information); Completely Randomized Designs (Introduction; Models; Matrix formulation; Influence of design on estimation; Influence of design on hypothesis testing); Randomized Com
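Several items in this table of contents (the hat matrix, least-squares estimates, design information matrix) correspond to standard linear-model computations. A minimal numpy sketch, not taken from the book, illustrating the hat matrix H = X(X'X)^(-1)X' and its standard properties:

```python
import numpy as np

# Minimal sketch of the hat matrix H = X (X'X)^{-1} X' for a linear model
# y = X beta + e; H maps observed y to fitted values ("puts the hat on y").
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])          # design matrix: intercept + one regressor
H = X @ np.linalg.inv(X.T @ X) @ X.T

y = np.array([1.0, 2.0, 4.0])
y_hat = H @ y                       # least-squares fitted values

# H is symmetric and idempotent; its trace equals the number of parameters.
print(np.allclose(H, H.T), np.allclose(H @ H, H), round(np.trace(H), 6))
# True True 2.0
```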

  8. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR)

    OpenAIRE

    Robles Martínez, Ángel; Ruano García, María Victoria; Ribes Bertomeu, José; SECO TORRECILLAS, AURORA; FERRER, J.

    2014-01-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic...

  9. Sensitivity to plant modelling uncertainties in optimal feedback control of sound radiation from a panel

    DEFF Research Database (Denmark)

    Mørkholt, Jakob

    1997-01-01

    Optimal feedback control of broadband sound radiation from a rectangular baffled panel has been investigated through computer simulations. Special emphasis has been put on the sensitivity of the optimal feedback control to uncertainties in the modelling of the system under control.A model...... in terms of a set of radiation filters modelling the radiation dynamics.Linear quadratic feedback control applied to the panel in order to minimise the radiated sound power has then been simulated. The sensitivity of the model based controller to modelling uncertainties when using feedback from actual...

  10. The Psychological Essence of the Child Prodigy Phenomenon: Sensitive Periods and Cognitive Experience.

    Science.gov (United States)

    Shavinina, Larisa V.

    1999-01-01

    Examination of the child prodigy phenomenon suggests it is a result of extremely accelerated mental development during sensitive periods that leads to the rapid growth of a child's cognitive resources and their construction into specific exceptional achievements. (Author/DB)

  11. Sensitivity of the High Altitude Water Cherenkov Experiment to observe Gamma-Ray Bursts

    Science.gov (United States)

    González, M. M.

    Ground-based telescopes have only marginally observed very high energy emission (>100 GeV) from gamma-ray bursts (GRBs). For instance, Milagrito observed GRB970417a with a significance of 3.7 sigma over the background. Milagro has not yet observed TeV emission from a GRB with its triggered and untriggered searches, or GeV emission with a triggered search using its scalers. These results suggest the need for new observatories with higher sensitivity to transient sources. The HAWC (High Altitude Water Cherenkov) observatory is proposed as a combination of the Milagro technology with a very high altitude site (>4000 m above sea level). The expected HAWC sensitivity for GRBs is at least 10 times that of Milagro. In this work the HAWC sensitivity for GRBs is discussed for different detector configurations such as altitude, distance between PMTs, depth of the PMTs under water, number of PMTs required for a trigger, etc.

  12. Modeling the Formation of Language: Embodied Experiments

    Science.gov (United States)

    Steels, Luc

    This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied interactions between autonomous agents. It gives some details of the Grounded Naming Game, which focuses on the formation of a system of proper names, the Spatial Language Game, which focuses on the formation of a lexicon for expressing spatial relations as well as perspective reversal, and an Event Description Game, which concerns the expression of the role of participants in events through an emergent case grammar. For each experiment, details are provided how the symbolic system emerges, how the interaction is grounded in the world through the embodiment of the agent and its sensori-motor processing, and how concepts are formed in tight interaction with the emerging language.

  13. Basal plasma insulin and homeostasis model assessment (HOMA) are indicators of insulin sensitivity in cats.

    Science.gov (United States)

    Appleton, D J; Rand, J S; Sunvold, G D

    2005-06-01

    The objective of this study was to compare simpler indices of insulin sensitivity with the minimal model-derived insulin sensitivity index to identify a simple and reliable alternative method for assessing insulin sensitivity in cats. In addition, we aimed to determine whether this simpler measure or measures showed consistency of association across differing body weights and glucose tolerance levels. Data from glucose tolerance and insulin sensitivity tests performed in 32 cats with varying body weights (underweight to obese), including seven cats with impaired glucose tolerance, were used to assess the relationship between Bergman's minimal model-derived insulin sensitivity index (S(I)), and various simpler measures of insulin sensitivity. The most useful overall predictors of insulin sensitivity were basal plasma insulin concentrations and the homeostasis model assessment (HOMA), which is the product of basal glucose and insulin concentrations divided by 22.5. It is concluded that measurement of plasma insulin concentrations in cats with food withheld for 24 h, in conjunction with HOMA, could be used in clinical research projects and by practicing veterinarians to screen for reduced insulin sensitivity in cats. Such cats may be at increased risk of developing impaired glucose tolerance and type 2 diabetes mellitus. Early detection of these cats would enable preventative intervention programs such as weight reduction, increased physical activity and dietary modifications to be instigated.
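The HOMA formula quoted in the abstract (basal glucose times basal insulin, divided by 22.5) is straightforward to compute. A minimal sketch, assuming the conventional units of the standard human HOMA formula (glucose in mmol/L, insulin in mU/L; the units used in the feline study may differ):

```python
# Illustrative HOMA calculation as described in the abstract:
# HOMA = (basal glucose * basal insulin) / 22.5.
# Units here follow the conventional human HOMA-IR formula
# (glucose in mmol/L, insulin in mU/L) and are an assumption.

def homa(basal_glucose_mmol_per_l, basal_insulin_mu_per_l):
    """Homeostasis model assessment index."""
    return (basal_glucose_mmol_per_l * basal_insulin_mu_per_l) / 22.5

# Example: glucose 5.0 mmol/L, insulin 9.0 mU/L
print(round(homa(5.0, 9.0), 2))  # 2.0
```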

  14. Stability and Sensitive Analysis of a Model with Delay Quorum Sensing

    Directory of Open Access Journals (Sweden)

    Zhonghua Zhang

    2015-01-01

    Full Text Available This paper formulates a delay model characterizing the competition between bacteria and immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the growth rate of bacteria is the most sensitive parameter of the threshold parameter R0 and should be targeted in the controlling strategies.

  15. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
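The perturbation procedure described (perturb a forcing time series, compute the change in the response series per unit perturbation) can be sketched as follows. `toy_model` is a hypothetical stand-in for a trained MWA-ANN, not the authors' model:

```python
import numpy as np

# Illustrative sketch (not the authors' code): one-at-a-time perturbation of a
# forcing time series through a black-box model, computing the change in the
# response series per unit change in the forcing. `toy_model` is a
# hypothetical stand-in for the trained MWA-ANN models in the abstract.

def moving_window_average(x, w):
    """Moving-window average of width w (same-length output)."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

def toy_model(rainfall, pumping):
    # Hypothetical response: smoothed rainfall raises levels, pumping lowers them.
    return 2.0 * moving_window_average(rainfall, 5) - 0.5 * pumping

def sensitivity_series(model, forcing, other, delta=0.1):
    """Finite-difference sensitivity of the response to a uniform perturbation."""
    base = model(forcing, other)
    perturbed = model(forcing + delta, other)
    return (perturbed - base) / delta   # response change per unit forcing change

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, size=200)   # synthetic rainfall forcing
pump = rng.uniform(0.0, 1.0, size=200) # synthetic groundwater-use forcing

s = sensitivity_series(toy_model, rain, pump)
print(s.shape)  # (200,)
```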

  16. Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.

    Science.gov (United States)

    Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J

    2014-01-01

    A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight to the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.

  17. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    Science.gov (United States)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at these latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may increase the size of present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid under severe magnetic storm conditions. GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the resistances and configuration of the power grid. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations; the power grid response to changes in ground conductivity and resistances shows similar, though smaller, effects. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.

  18. Analytical Modeling Tool for Design of Hydrocarbon Sensitive Optical Fibers

    Directory of Open Access Journals (Sweden)

    Khalil Al Handawi

    2017-09-01

    Full Text Available Pipelines are the main transportation means for oil and gas products across large distances. Due to the severe conditions they operate in, they are regularly inspected using conventional Pipeline Inspection Gages (PIGs) for corrosion damage. The motivation for researching a real-time distributed monitoring solution arose to mitigate costs and provide a proactive indication of potential failures. Fiber optic sensors with polymer claddings provide a means of detecting contact with hydrocarbons. By coating the fibers with a layer of metal similar in composition to that of the parent pipeline, corrosion of this coating may be detected when the polymer cladding underneath is exposed to the surrounding hydrocarbons contained within the pipeline. A Refractive Index (RI) change occurs in the polymer cladding causing a loss in intensity of a traveling light pulse due to a reduction in the fiber’s modal capacity. Intensity losses may be detected using Optical Time Domain Reflectometry (OTDR) while pinpointing the spatial location of the contact via time delay calculations of the back-scattered pulses. This work presents a theoretical model for the above sensing solution to provide a design tool for the fiber optic cable in the context of hydrocarbon sensing following corrosion of an external metal coating. Results are verified against the experimental data published in the literature.
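The time-delay localisation step mentioned above follows from the round-trip travel time of the back-scattered pulse. A minimal sketch, assuming a typical group index for a silica fiber core (the numeric value is an assumption, not taken from the cited work):

```python
# Illustrative OTDR distance calculation (not from the cited model): the
# location of a backscatter event along the fiber follows from the round-trip
# time of the pulse, z = c * t / (2 * n), where n is the group index of the
# fiber core. n = 1.468 is a typical value for silica, assumed here.

C = 299_792_458.0          # speed of light in vacuum, m/s
N_GROUP = 1.468            # assumed group refractive index of the fiber core

def event_distance_m(round_trip_time_s):
    """Distance to a reflective/loss event from the OTDR round-trip delay."""
    return C * round_trip_time_s / (2.0 * N_GROUP)

# A pulse echo arriving 10 microseconds after launch lies roughly 1 km away:
print(round(event_distance_m(10e-6), 1))  # metres
```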

  19. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
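The kind of local, derivative-based measure that a DELSA-style analysis evaluates at many points across parameter space can be sketched as follows. This illustrates the general idea only, not the published DELSA implementation, and the two-parameter reservoir-style model is hypothetical:

```python
import numpy as np

# Illustrative sketch of a DELSA-style measure (not the published code): at
# many points in parameter space, compute local finite-difference derivatives,
# scale them by the prior parameter variances, and normalise, giving a
# first-order sensitivity index that varies across the space.

def model(theta):
    k, s_max = theta
    return s_max * (1.0 - np.exp(-k))       # toy nonlinear reservoir response

def delsa_first_order(model, samples, param_var, h=1e-6):
    """First-order local sensitivity indices at each sample point."""
    indices = []
    for theta in samples:
        grads = np.zeros(len(theta))
        for j in range(len(theta)):
            step = np.zeros(len(theta))
            step[j] = h
            grads[j] = (model(theta + step) - model(theta - step)) / (2 * h)
        contrib = grads**2 * param_var           # first-order variance terms
        indices.append(contrib / contrib.sum())  # normalise to local indices
    return np.array(indices)

rng = np.random.default_rng(1)
samples = rng.uniform([0.1, 0.5], [2.0, 2.0], size=(100, 2))
S = delsa_first_order(model, samples, param_var=np.array([0.25, 0.25]))

print(S.shape)                           # (100, 2): per-point, per-parameter
print(np.allclose(S.sum(axis=1), 1.0))   # each point's indices sum to one
```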

  20. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nicholas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Mueller, Donald E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Powers, Jeffrey J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-08-31

    Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2 ⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform design of FLiBe-based experiments using the salt from MSRE.

  1. Neutron transport model for standard calculation experiment

    International Nuclear Information System (INIS)

    Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.

    1989-01-01

    Neutron transport calculation algorithms for media of complex composition and predetermined geometry are implemented in the MAMONT code, using multigroup representations within the Monte Carlo method. The quality of the code was evaluated by comparison with benchmark experiments. Neutron leakage spectra were calculated in spherically symmetric geometry for iron and polyethylene. Use of the MAMONT code for the metrological support of geophysical tasks is proposed. The code is oriented towards calculations of neutron transport and the accumulation of secondary nuclides in blankets and geophysical media. 7 refs.; 2 figs

  2. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools

  3. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
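The general idea in this abstract (estimate a sensitivity index through a cheap surrogate of the conditional mean, then attach a bootstrap confidence interval) can be sketched as follows. The binned conditional-mean "meta-model" here is a deliberately crude stand-in for the paper's nonparametric regression procedures:

```python
import numpy as np

# Illustrative sketch (not the paper's procedures): estimate a first-order
# sensitivity index S_i = Var(E[Y|X_i]) / Var(Y) from a sample by binning X_i
# (a crude surrogate of the conditional mean), then bootstrap the estimate to
# attach a confidence interval, as the abstract advocates.

def first_order_index(x, y, bins=10):
    """Binned estimate of Var(E[Y|X]) / Var(Y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    which = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[which == b].mean() for b in range(bins)])
    counts = np.array([(which == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

def bootstrap_ci(x, y, n_boot=500, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the sensitivity index."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample (x, y) pairs together
        reps[b] = first_order_index(x[idx], y[idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(42)
x1 = rng.uniform(-1, 1, 2000)
x2 = rng.uniform(-1, 1, 2000)
y = x1 + 0.2 * x2 + rng.normal(0, 0.1, 2000)  # X1 dominates the response

s1 = first_order_index(x1, y)
lo, hi = bootstrap_ci(x1, y)
print(round(s1, 2), lo < s1 < hi)
```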

  4. Cohesive mixed mode fracture modelling and experiments

    DEFF Research Database (Denmark)

    Walter, Rasmus; Olesen, John Forbes

    2008-01-01

    A nonlinear mixed mode model originally developed by Wernersson [Wernersson H. Fracture characterization of wood adhesive joints. Report TVSM-1006, Lund University, Division of Structural Mechanics; 1994], based on nonlinear fracture mechanics, is discussed and applied to model interfacial cracking....... An experimental set-up for the assessment of mixed mode interfacial fracture properties is presented, applying a bi-material specimen, half steel and half concrete, with an inclined interface and under uniaxial load. Loading the inclined steel–concrete interface under different angles produces load–crack opening...... curves, which may be interpreted using the nonlinear mixed mode model. The interpretation of test results is carried out in a two step inverse analysis applying numerical optimization tools. It is demonstrated how to perform the inverse analysis, which couples the assumed individual experimental load...

  5. Silicon Carbide Derived Carbons: Experiments and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kertesz, Miklos [Georgetown University, Washington DC 20057

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond, starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but larger clusters tend to show graphitized regions; in fact, clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and the physical characteristics that can be used to identify them spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  6. Sensitivity experiments on the response of Vb cyclones to sea surface temperature and soil moisture changes

    Directory of Open Access Journals (Sweden)

    M. Messmer

    2017-07-01

    Full Text Available Extratropical cyclones of type Vb, which develop over the western Mediterranean and move northeastward, are major natural hazards that are responsible for heavy precipitation over central Europe. To gain further understanding of the governing processes of these Vb cyclones, the study explores the role of soil moisture and sea surface temperature (SST) and their contribution to the atmospheric moisture content. Thereby, recent Vb events identified in the ERA-Interim reanalysis are dynamically downscaled with the Weather Research and Forecasting (WRF) model. Results indicate that a mean high-impact summer Vb event is mostly sensitive to an increase in the Mediterranean SSTs and rather insensitive to Atlantic SSTs and soil moisture changes. Hence, an increase of +5 K in Mediterranean SSTs leads to an average increase of 24 % in precipitation over central Europe. This increase in precipitation is mainly induced by a larger mean upward moisture flux over the Mediterranean with increasing Mediterranean SSTs. This further invokes an increase in latent energy release, which leads to an increase in atmospheric instability, i.e. in convective available potential energy. Both the increased availability of atmospheric moisture and the increased instability of the atmosphere, which is able to remove extra moisture from the atmosphere through convective processes, are responsible for the strong increase in precipitation over the entire region influenced by Vb events. Precipitation patterns further indicate that a strong increase in precipitation is found at the eastern coast of the Adriatic Sea for increased Mediterranean SSTs. This premature loss in atmospheric moisture leads to a significant decrease in atmospheric moisture transport to central Europe and the northeastern flanks of the Alpine mountain chain. This leads to a reduction in precipitation in this high-impact region of the Vb event for an increase in Mediterranean SSTs of +5 K. Furthermore, the

  7. Sensitivity of Coastal Flood Risk Assessments to Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Bas van de Sande

    2012-07-01

    Full Text Available Most coastal flood risk studies make use of a Digital Elevation Model (DEM) in addition to a projected flood water level in order to estimate the flood inundation and associated damages to property and livelihoods. The resolution and accuracy of a DEM are critical in a flood risk assessment, as land elevation largely determines whether a location will be flooded or will remain dry during a flood event. Especially in low-lying deltaic areas, the land elevation variation is usually in the order of only a few decimeters, and an offset of several decimeters in the elevation data has a significant impact on the accuracy of the risk assessment. Publicly available DEMs are often used in studies for coastal flood risk assessments. The accuracy of these datasets is relatively low, in the order of meters, especially in comparison with the level of accuracy required for a flood risk assessment in a deltaic area. For a coastal zone area in Nigeria (Lagos State), an accurate LiDAR DEM dataset was adopted as ground truth for terrain elevation. In the case study, the LiDAR DEM was compared to various publicly available DEMs, and a coastal flood risk assessment using the publicly available DEMs was compared to one using the LiDAR DEM. It can be concluded that the publicly available DEMs do not meet the accuracy requirement of coastal flood risk assessments, especially in coastal and deltaic areas. For this particular case study, the publicly available DEMs substantially overestimated the land elevation (Z-values) and thereby underestimated the coastal flood risk for the Lagos State area. The findings are of interest when selecting datasets for coastal flood risk assessments in low-lying deltaic areas.

  8. Modelling small scale infiltration experiments into bore cores of crystalline rock and break-through curves

    International Nuclear Information System (INIS)

    Hadermann, J.; Jakob, A.

    1987-04-01

    Uranium infiltration experiments on small samples of crystalline rock have been used to model radionuclide transport. The theory, which takes into account advection and dispersion in water-conducting zones, matrix diffusion out of these zones, and sorption, contains four independent parameters. It turns out that the physical variables extracted from the best-fit parameters are consistent with values from the literature and with independent measurements. Moreover, the model results appear to discriminate between various geometries of the water-conducting zones; alpha-autoradiographies corroborate this result. A sensitivity analysis allows a judgement on parameter dependencies. Finally, some proposals for further experiments are made. (author)
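
    The breakthrough curves mentioned above can be illustrated with the classical one-dimensional advection-dispersion solution with linear sorption (a single-porosity, leading-erfc-term Ogata-Banks form, used here as a simplified stand-in for the authors' four-parameter model, which additionally includes matrix diffusion); all numbers are illustrative:

```python
import math

def breakthrough(x, t, v, D, R=1.0):
    """Relative concentration C/C0 at distance x and time t for the 1D
    advection-dispersion equation with linear sorption (retardation R),
    keeping only the leading erfc term of the Ogata-Banks solution."""
    if t <= 0.0:
        return 0.0
    return 0.5 * math.erfc((R * x - v * t) / (2.0 * math.sqrt(D * R * t)))

# Illustrative core: length 10 cm, pore velocity 1 cm/h, dispersion 0.5 cm^2/h.
curve = [breakthrough(x=10.0, t=t, v=1.0, D=0.5) for t in (5.0, 10.0, 20.0)]
# Sorption (R > 1) delays the breakthrough:
delayed = breakthrough(x=10.0, t=10.0, v=1.0, D=0.5, R=2.0)
```

    At one pore volume (t = x/v) the curve passes through C/C0 = 0.5; increasing the retardation factor shifts the whole curve to later times, which is the signature of sorption in an infiltration experiment.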

  9. Evaporation experiments and modelling for glass melts

    NARCIS (Netherlands)

    Limpt, J.A.C. van; Beerkens, R.G.C.

    2007-01-01

    A laboratory test facility has been developed to measure evaporation rates of different volatile components from commercial and model glass compositions. In the set-up the furnace atmosphere, temperature level, gas velocity and batch composition are controlled. Evaporation rates have been measured

  10. [Experience of implementing a primary attention model].

    Science.gov (United States)

    Ruiz-Rodríguez, Myriam; Acosta-Ramírez, Naydú; Rodríguez Villamizar, Laura A; Uribe, Luz M; León-Franco, Martha

    2011-12-01

    Identifying barriers and dynamic factors in setting up a primary health care (PHC) model in the Santander department during the last decade. This was a qualitative study based on methodological pluralism and triangulation of sources and actors, with a critical analysis of boundaries and value judgments (boundary critique). The emergent categories related to the appropriation of PHC attributes revealed philosophical/conceptual and operational management problems: the theoretical model design was in fact not realized in practice. The PHC strategy is selective and state-led (at department level), focused on rural interventions carried out by nursing assistants and oriented towards fulfilling public health goals at the first level of care. Difficulties at national, state and local levels were identified which could be useful in other national and international contexts. Structural market barriers in the healthcare system were the most important constraints, since the model operates through a contractual logic of institutional segmentation and operational fragmentation. Skills-focused human resource management, suitable local health management and systematic evaluation studies are therefore suggested as essential operational elements for addressing these problems and encouraging an integral PHC model in Colombia.

  11. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  12. Balancing sensitivity and specificity: sixteen years of experience from the mammography screening programme in Copenhagen, Denmark

    DEFF Research Database (Denmark)

    Utzon-Frank, Nicolai; Vejborg, Ilse; von Euler-Chelpin, My Catarina

    2011-01-01

    To report on sensitivity and specificity from 7 invitation rounds of the organised, population-based mammography screening programme started in Copenhagen, Denmark, in 1991, and offered biennially to women aged 50-69. Changes over time were related to organisation and technology.

  13. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
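
    For intuition about what a main-effect (first-order) sensitivity index measures, here is a plain Monte Carlo pick-freeze estimate on a toy linear model; the paper itself uses Gaussian process emulation via GEM-SA, not this estimator, and the model below is invented:

```python
import random
import statistics

def first_order_index(f, dim, which, n=20000, seed=0):
    """Monte Carlo 'pick-freeze' estimate of the first-order Sobol index
    of input `which` for a model f: [0,1]^dim -> R."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        z = [rng.random() for _ in range(dim)]
        z[which] = x[which]            # freeze the input under study
        ya.append(f(x))
        yb.append(f(z))
    mean = statistics.fmean(ya + yb)
    cov = statistics.fmean(a * b for a, b in zip(ya, yb)) - mean * mean
    var = statistics.fmean(v * v for v in ya + yb) - mean * mean
    return cov / var

# Toy model y = 3*x0 + x1 with uniform inputs: analytically S0 = 0.9, S1 = 0.1.
model = lambda x: 3.0 * x[0] + x[1]
s0 = first_order_index(model, dim=2, which=0)
s1 = first_order_index(model, dim=2, which=1)
```

    The index apportions output variance to each input on its own; the total sensitivity index reported in the paper additionally captures interaction terms, which vanish for this additive toy model.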

  14. Thermal Properties of Metallic Nanowires: Modeling & Experiment

    Science.gov (United States)

    Stojanovic, Nenad; Berg, Jordan; Maithripala, Sanjeeva; Holtz, Mark

    2009-10-01

    Effects such as surface and grain boundary scattering significantly influence electrical and thermal properties of nanoscale materials with important practical implications for current and future electronics and photonics. Conventional wisdom for metals holds that thermal transport is predominantly by electrons and transport by phonons is negligible. This assumption is used to justify the use of the Wiedemann-Franz law to infer thermal conductivity based on measurements of electrical resistivity. Recent experiments suggest a breakdown of the Wiedemann-Franz law at the nanoscale. This talk will examine the assumption that thermal transport by phonons can be neglected. The electrical resistivities and thermal conductivities of aluminum nanowires of various sizes are directly measured. These values are used in conjunction with the Boltzmann transport equation to conclude that the Wiedemann-Franz law describes the electronic component of thermal conductivity, but that the phonon term must also be considered. A novel experimental device is described for the direct thermal conductivity measurements.
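
    The Wiedemann-Franz relation discussed in the talk, kappa_e = L*T/rho with the Sommerfeld Lorenz number L ≈ 2.44e-8 W·Ω/K², can be sketched as follows; the resistivity and "measured" conductivity values are illustrative, not the paper's nanowire data:

```python
LORENZ = 2.44e-8  # Sommerfeld value of the Lorenz number, W*Ohm/K^2

def kappa_electronic(resistivity, temperature):
    """Electronic thermal conductivity from the Wiedemann-Franz law,
    kappa_e = L*T/rho, in W/(m*K)."""
    return LORENZ * temperature / resistivity

# Illustrative values, roughly bulk aluminium at room temperature:
rho = 2.7e-8   # electrical resistivity, Ohm*m
T = 300.0      # temperature, K
kappa_e = kappa_electronic(rho, T)   # ~271 W/(m*K)

# If a directly measured total conductivity exceeds kappa_e, the remainder
# is attributed to phonon transport (the term the talk argues is not
# negligible at the nanoscale). The measurement below is hypothetical.
kappa_total_measured = 290.0
kappa_phonon = kappa_total_measured - kappa_e
```

    Inferring kappa from rho alone silently sets kappa_phonon to zero, which is exactly the assumption the direct measurements are designed to test.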

  15. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

    This document presents the differences between two HLW glass formulation models (GFM): The 1996 GFM and 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.
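
    A glass formulation model of this kind can be caricatured as a tiny constrained-additive calculation: find the smallest additive mass satisfying a waste-loading limit and a solubility limit. The constraint values below are invented for the sketch and bear no relation to the PNNL-18501 limits:

```python
def min_additive_mass(waste_mass, waste_frac_limit, cr2o3_in_waste, cr2o3_limit):
    """Smallest additive mass (same units as waste_mass) such that
    (a) waste oxide loading = waste/(waste+additive) <= waste_frac_limit, and
    (b) the Cr2O3 mass fraction in the glass stays below its solubility limit.
    Both constraints relax as additives are added, so the binding one wins."""
    # Loading: waste/(waste+a) <= f  =>  a >= waste*(1-f)/f
    a_loading = waste_mass * (1.0 - waste_frac_limit) / waste_frac_limit
    # Solubility: cr2o3/(waste+a) <= c  =>  a >= cr2o3/c - waste
    a_solubility = cr2o3_in_waste / cr2o3_limit - waste_mass
    return max(0.0, a_loading, a_solubility)

# Toy feed: 100 kg waste oxides containing 1.2 kg Cr2O3; limits invented.
additive = min_additive_mass(100.0, waste_frac_limit=0.45,
                             cr2o3_in_waste=1.2, cr2o3_limit=0.005)
total_glass = 100.0 + additive
```

    In this toy case the solubility limit, not the loading limit, is binding, so relaxing it (as in the sensitivity study described above) directly reduces the predicted glass mass.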

  16. Micro- and nanoflows modeling and experiments

    CERN Document Server

    Rudyak, Valery Ya; Maslov, Anatoly A; Minakov, Andrey V; Mironov, Sergey G

    2018-01-01

    This book describes physical, mathematical and experimental methods for modelling flows in micro- and nanofluidic devices, considering flows in channels with characteristic sizes ranging from several hundred micrometers down to several nanometers. Methods based on solving kinetic equations, a coupled kinetic-hydrodynamic description, and the molecular dynamics method are used. Based on detailed measurements of pressure distributions along straight and bent microchannels, the hydraulic resistance coefficients are refined. Flows of disperse fluids (including disperse nanofluids) are considered in detail. Results of hydrodynamic modeling of the simplest micromixers are reported, and mixing of fluids in Y-type and T-type micromixers is considered. The authors present a systematic study of jet flows, jet structure and laminar-turbulent transition, including the influence of sound on the microjet structure. New phenomena associated with turbulization and relaminarization of the mixing layer of microjets are di...

  17. Turbulent Boundary Layers - Experiments, Theory and Modelling

    Science.gov (United States)

    1980-01-01

    1979: "Calcul des transferts thermiques entre film chaud et substrat par un modèle à deux dimensions" ["Calculation of heat transfer between a hot film and its substrate using a two-dimensional model"], Int. J. Heat Mass Transfer, p. 111-119. [...] relating the surface heat transfer to the surface shear; here, corrections are compulsory because the wall shear stress fluctuations are large (the r.m.s. [...]). The technique is the mass-transfer analogue of the constant-temperature anemometer, in which a chemical reaction takes place at an electrode embedded in the wall.

  18. Previous Experience a Model of Practice UNAE

    OpenAIRE

    Ruiz, Ormary Barberi; Pesántez Palacios, María Dolores

    2017-01-01

    The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador. An urgent institutional necessity is revealed by the descriptive analyses of technical-administrative support material (reports, interviews, testimonials) and of the pedagogical foundations of UNAE (curricular directionality, transverse axes in practice, career plan, approach and diagnostic examination as subj...

  19. Decay Kinetics of UV-Sensitive Materials: An Introductory Chemistry Experiment

    Science.gov (United States)

    Via, Garrhett; Williams, Chelsey; Dudek, Raymond; Dudek, John

    2015-01-01

    First-order kinetic decay rates can be obtained by measuring the time-dependent reflection spectra of ultraviolet-sensitive objects as they return from their excited, colored state back to the ground, colorless state. In this paper, a procedure is described which provides an innovative and unique twist on standard, undergraduate, kinetics…

  20. A Sensitive and Robust Enzyme Kinetic Experiment Using Microplates and Fluorogenic Ester Substrates

    Science.gov (United States)

    Johnson, R. Jeremy; Hoops, Geoffrey C.; Savas, Christopher J.; Kartje, Zachary; Lavis, Luke D.

    2015-01-01

    Enzyme kinetics measurements are a standard component of undergraduate biochemistry laboratories. The combination of serine hydrolases and fluorogenic enzyme substrates provides a rapid, sensitive, and general method for measuring enzyme kinetics in an undergraduate biochemistry laboratory. In this method, the kinetic activity of multiple protein…

  1. Experiences and perspectives in using telematic prevention on sensitive health issues.

    Science.gov (United States)

    Peltoniemi, Teuvo

    2004-01-01

    The new information and communication technologies (telematics), such as the Internet, telephone services and videoconferencing, are simultaneously an instrument, a symbol and sign of progress, and a potential source of addiction problems. Sensitive topics, such as substance abuse or mental health, bring out all of these characteristics of telematics; the computer world, substances and addictions are therefore closely connected.

  2. Trends in Microbiological and Antibiotic Sensitivity Patterns in Infectious Keratitis: 10-Year Experience in Mexico City.

    Science.gov (United States)

    Hernandez-Camarena, Julio C; Graue-Hernandez, Enrique O; Ortiz-Casas, Mariana; Ramirez-Miranda, Arturo; Navas, Alejandro; Pedro-Aguilar, Lucero; Lopez-Espinosa, Nadia L; Gaona-Juarez, Carolina; Bautista-Hernandez, Luis A; Bautista-de Lucio, Victor M

    2015-07-01

    To report the distribution and trends in microbiological and antibiotic sensitivity patterns of infectious keratitis over a 10-year period at a reference center in Mexico City. In this retrospective observational case series, samples were obtained from corneas with a diagnosis of infectious keratitis from January 2002 to December 2011 at the Institute of Ophthalmology "Conde de Valenciana" in Mexico City. Results of cultures, stains, and specific sensitivity/resistance antibiograms for each microorganism were analyzed. A total of 1638 consecutive corneal scrapings were analyzed. A pathogen was recovered in 616 samples (38%), with bacterial keratitis accounting for 544 of the positive cultures (88%). A nonsignificant increasing trend in gram-negative isolates (P = 0.11) was observed. The most commonly isolated pathogen was Staphylococcus epidermidis, and the most common gram-negative isolated species was Pseudomonas aeruginosa. Methicillin-resistant Staphylococcus aureus (MRSA) was present in 45% of the S. aureus isolates, while 53.7% of coagulase-negative Staphylococcus isolates were methicillin resistant (MRCNS). Pseudomonas aeruginosa resistance to ceftazidime increased from 15% in the first period to 74% in the last 5 years of the study (P = 0.01). The overall sensitivity of MRSA to vancomycin was 87.5%, whereas 99.6% of the MRCNS isolates were sensitive. There was a nonsignificant increase in the recovered gram-positive and gram-negative microorganisms over time. We observed an increased resistance to methicillin in almost half of the S. aureus and coagulase-negative Staphylococcus isolates.

  3. Applications of one-dimensional position-sensitive detectors for neutron diffraction experiments on powders and liquids

    International Nuclear Information System (INIS)

    Riekel, C.

    1983-01-01

    The applications of one-dimensional position-sensitive detectors (PSDs) are reviewed. The detectors used are multiwire detectors based on the principle of gas-filled proportional counters. The uses include neutron diffraction from powders and liquids in the study of chemical reactions and phase transitions. However, the angular range and wire separation are insufficient for many experiments. In particular, the data acquisition and processing are inadequate for real-time experiments with t_s values of seconds or less (t_s = measuring time per spectrum). From the results obtained it should be possible to optimize the construction of a new 160° PSD. (U.K.)

  4. Models for Risk Aggregation and Sensitivity Analysis: An Application to Bank Economic Capital

    Directory of Open Access Journals (Sweden)

    Hulusi Inanoglu

    2009-12-01

    Full Text Available A challenge in enterprise risk measurement for diversified financial institutions is developing a coherent approach to aggregating different risk types. This has been motivated by rapid financial innovation, developments in supervisory standards (Basel 2) and recent financial turmoil. The main risks faced - market, credit and operational - have distinct distributional properties, and historically have been modeled in differing frameworks. We contribute to the modeling effort by providing tools and insights to practitioners and regulators. First, we extend the scope of the analysis to liquidity and interest rate risk, which has Pillar II implications under Basel. Second, we utilize data on major banking institutions' loss experience from supervisory call reports, which allows us to explore the impact of business mix and inter-risk correlations on total risk. Third, we estimate and compare alternative established frameworks for risk aggregation (including copula models) on the same data-sets across banks, comparing absolute total risk measures (Value-at-Risk - VaR) and proportional diversification benefits (PDB), goodness-of-fit (GOF) of the models to the data, as well as the variability of the VaR estimate with respect to sampling error in parameters. This benchmarking and sensitivity analysis suggests that practitioners consider implementing a simple non-parametric methodology (empirical copula simulation - ECS) in order to quantify integrated risk, in that it is found to be more conservative and stable than the other models. We observe that ECS produces 20% to 30% higher VaR relative to the standard Gaussian copula simulation (GCS), while the variance-covariance approximation (VCA) is much lower. ECS yields higher PDBs than the other methodologies (127% to 243%), while the Archimedean Gumbel copula simulation (AGCS) is the lowest (10-21%). Across the five largest banks we fail to find the effect of business mix to exert a directionally consistent impact on
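
    The gap between a jointly simulated (diversified) VaR and the sum of standalone VaRs, i.e. the proportional diversification benefit (PDB), can be sketched with a toy two-risk loss history; the distributions and correlation below are invented and aggregating jointly observed draws only loosely echoes the empirical-copula idea:

```python
import random

def var(losses, q=0.99):
    """Empirical Value-at-Risk: the q-th quantile of a loss sample."""
    s = sorted(losses)
    return s[min(len(s) - 1, int(q * len(s)))]

rng = random.Random(7)
# Toy joint loss history for two risk types with partial co-movement:
common = [rng.gauss(0.0, 1.0) for _ in range(50000)]
market = [10.0 + 4.0 * (0.6 * c + 0.8 * rng.gauss(0.0, 1.0)) for c in common]
credit = [15.0 + 6.0 * (0.6 * c + 0.8 * rng.gauss(0.0, 1.0)) for c in common]

# Aggregating the jointly observed draws preserves the empirical dependence;
# adding standalone VaRs implicitly assumes perfect dependence.
total_var = var([m + c for m, c in zip(market, credit)])
sum_var = var(market) + var(credit)
pdb = (sum_var - total_var) / total_var   # proportional diversification benefit
```

    Because the two risk types are only partially correlated, the joint VaR falls below the sum of standalone VaRs, and the shortfall is the diversification benefit the paper quantifies across aggregation frameworks.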

  5. Integrated modeling of tokamak experiments with OMFIT

    International Nuclear Information System (INIS)

    Meneghini, Orso; Lao, Lang

    2013-01-01

    One Modeling Framework for Integrated Tasks (OMFIT) is a framework that allows data to be easily exchanged among different codes by providing a unifying data structure. The main idea at the base of OMFIT is to treat files, data and scripts as a uniform collection of objects organized into a tree structure, which provides a consistent way to access and manipulate such a collection of heterogeneous objects, independent of their origin. Within the OMFIT tree, data can be copied or referenced from one node to another, and tasks can call each other, allowing complex compound tasks to be built. A top-level Graphical User Interface (GUI) allows users to manage tree objects, carry out simulations and analyze the data either interactively or in batch mode. OMFIT supports many scientific data formats, and when a file is loaded into the framework its data populates the tree structure, automatically endowing it with many potential uses. Furthermore, seamless integration with experimental data management systems allows direct manipulation of their data. In OMFIT, modeling tasks are organized into modules, which can be easily combined to create arbitrarily large multi-physics simulations. Module inter-dependencies are seamlessly defined by variables referencing tree locations among them. Creation of new modules and customization of existing ones are encouraged by graphical management tools and an online repository. High-level Application Programmer Interfaces (APIs) enable users to execute their codes on remote servers and to create application-specific GUIs. Finally, within OMFIT it is possible to visualize experimental and modeling data both for quick analysis and for publication purposes. Examples of application to the DIII-D tokamak are presented. (author)
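
    The "uniform tree of heterogeneous objects with cross-references" idea can be sketched in a few lines; this is a toy illustration (nested dicts with string links, invented node names), not OMFIT's actual API:

```python
# Toy version of a unifying tree: nodes are nested dicts, and a node may
# refer to another node by its tree path, so modules can share data
# without copying it.

def resolve(tree, path):
    """Look up a node by a '/'-separated path, following 'ref:' links."""
    node = tree
    for key in path.split("/"):
        node = node[key]
    if isinstance(node, str) and node.startswith("ref:"):
        return resolve(tree, node[4:])   # follow the reference
    return node

tree = {
    "equilibrium": {"code": "EFIT", "psi_axis": 0.0},
    "transport": {
        "code": "TGYRO",
        # the transport module reads the equilibrium produced elsewhere:
        "input_equilibrium": "ref:equilibrium/psi_axis",
    },
}

value = resolve(tree, "transport/input_equilibrium")
```

    Because the link is resolved at access time, updating the equilibrium node is immediately visible to the transport module, which is the mechanism that lets OMFIT-style modules chain into compound tasks.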

  6. Historical and idealized climate model experiments: an intercomparison of Earth system models of intermediate complexity

    Directory of Open Access Journals (Sweden)

    M. Eby

    2013-05-01

    Full Text Available Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there

  7. Enhanced P-Sensitive K-Anonymity Models for Privacy Preserving Data Publishing

    OpenAIRE

    Xiaoxun Sun; Hua Wang; Jiuyong Li; Traian Marius Truta

    2008-01-01

    Publishing data for analysis from a microdata table containing sensitive attributes, while maintaining individual privacy, is a problem of increasing significance today. The k-anonymity model was proposed for privacy-preserving data publication. While focusing on identity disclosure, the k-anonymity model fails to protect against attribute disclosure to some extent, and many efforts have recently been made to enhance it. In this paper, we propose two new privacy protection models called (p, a)-...

  8. Portfolio Sensitivity Model for Analyzing Credit Risk Caused by Structural and Macroeconomic Changes

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2008-12-01

    Full Text Available This paper proposes a new model for portfolio sensitivity analysis. The model is suitable for decision support in financial institutions, specifically for portfolio planning and portfolio management. The basic advantage of the model is the ability to create simulations for credit risk predictions in cases when we virtually change portfolio structure and/or macroeconomic factors. The model takes a holistic approach to portfolio management consolidating all organizational segments in the process such as marketing, retail and risk.

  9. Computer experiments with a coarse-grid hydrodynamic climate model

    International Nuclear Information System (INIS)

    Stenchikov, G.L.

    1990-01-01

    A climate model is developed on the basis of the two-level Mintz-Arakawa general circulation model of the atmosphere and a bulk model of the upper layer of the ocean. A detailed model of the spectral transport of shortwave and longwave radiation is used to investigate the radiative effects of greenhouse gases. The radiative fluxes are calculated at the boundaries of five layers, each with a pressure thickness of about 200 mb. The results of the climate sensitivity calculations for mean-annual and perpetual seasonal regimes are discussed. The CCAS (Computer Center of the Academy of Sciences) climate model is used to investigate the climatic effects of anthropogenic changes of the optical properties of the atmosphere due to increasing CO2 content and aerosol pollution, and to calculate the sensitivity to changes of land surface albedo and humidity

  10. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
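
    One of the three methods listed, the sensitivity index, can be sketched as a one-at-a-time sweep with SI = (y_max - y_min)/y_max. The toy corrosion-rate model and parameter ranges below are invented for illustration and are not one of the nine models analyzed:

```python
def sensitivity_index(model, base, name, lo, hi, steps=50):
    """Sensitivity index SI = (y_max - y_min)/y_max obtained by sweeping
    one input over [lo, hi] while holding the others at `base`."""
    ys = []
    for i in range(steps + 1):
        x = dict(base)
        x[name] = lo + (hi - lo) * i / steps
        ys.append(model(**x))
    return (max(ys) - min(ys)) / max(ys)

# Invented toy corrosion-rate model (uA/cm^2): rate falls with concrete
# resistivity and rises with chloride content.
def corrosion_rate(resistivity, chloride):
    return 3000.0 / resistivity * (0.1 + chloride)

base = {"resistivity": 100.0, "chloride": 0.5}  # kOhm*cm, % by binder mass
si_rho = sensitivity_index(corrosion_rate, base, "resistivity", 50.0, 400.0)
si_cl = sensitivity_index(corrosion_rate, base, "chloride", 0.1, 1.0)
```

    An SI near 1 marks a parameter whose plausible range swings the prediction strongly, which is how the paper flags corrosion duration, resistivity, and chloride content as the inputs worth measuring well.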

  11. Sensitivity Analysis of CLIMEX Parameters in Modeling Potential Distribution of Phoenix dactylifera L.

    Science.gov (United States)

    Shabani, Farzin; Kumar, Lalit

    2014-01-01

    Using CLIMEX and the Taguchi Method, a process-based niche model was developed to estimate potential distributions of Phoenix dactylifera L. (date palm), an economically important crop in many countries. Development of the model was based on both its native and invasive distribution and validation was carried out in terms of its extensive distribution in Iran. To identify model parameters having greatest influence on distribution of date palm, a sensitivity analysis was carried out. Changes in suitability were established by mapping of regions where the estimated distribution changed with parameter alterations. This facilitated the assessment of certain areas in Iran where parameter modifications impacted the most, particularly in relation to suitable and highly suitable locations. Parameter sensitivities were also evaluated by the calculation of area changes within the suitable and highly suitable categories. The low temperature limit (DV2), high temperature limit (DV3), upper optimal temperature (SM2) and high soil moisture limit (SM3) had the greatest impact on sensitivity, while other parameters showed relatively less sensitivity or were insensitive to change. For an accurate fit in species distribution models, highly sensitive parameters require more extensive research and data collection methods. Results of this study demonstrate a more cost effective method for developing date palm distribution models, an integral element in species management, and may prove useful for streamlining requirements for data collection in potential distribution modeling for other species as well. PMID:24722140

  12. Sensitivity analysis of CLIMEX parameters in modeling potential distribution of Phoenix dactylifera L.

    Directory of Open Access Journals (Sweden)

    Farzin Shabani

    Full Text Available Using CLIMEX and the Taguchi Method, a process-based niche model was developed to estimate potential distributions of Phoenix dactylifera L. (date palm), an economically important crop in many countries. Development of the model was based on both its native and invasive distribution and validation was carried out in terms of its extensive distribution in Iran. To identify model parameters having greatest influence on distribution of date palm, a sensitivity analysis was carried out. Changes in suitability were established by mapping of regions where the estimated distribution changed with parameter alterations. This facilitated the assessment of certain areas in Iran where parameter modifications impacted the most, particularly in relation to suitable and highly suitable locations. Parameter sensitivities were also evaluated by the calculation of area changes within the suitable and highly suitable categories. The low temperature limit (DV2), high temperature limit (DV3), upper optimal temperature (SM2) and high soil moisture limit (SM3) had the greatest impact on sensitivity, while other parameters showed relatively less sensitivity or were insensitive to change. For an accurate fit in species distribution models, highly sensitive parameters require more extensive research and data collection methods. Results of this study demonstrate a more cost effective method for developing date palm distribution models, an integral element in species management, and may prove useful for streamlining requirements for data collection in potential distribution modeling for other species as well.

  13. Indian Consortia Models: FORSA Libraries' Experiences

    Science.gov (United States)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With increases in prices of journals, shrinking library budgets and cuts in subscriptions to journals over the years, there has been a big challenge facing Indian library professionals to cope with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries in forming consortia at different levels. The types of consortia identified are generally based on various models evolved in India in a variety of forms depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under `Open Consortia', wherein participants are affiliated to different government departments. This is a model where professionals willingly come forward and actively support consortia formation; thereby everyone benefits. As such, FORSA has realized four consortia, viz. Nature Online Consortium; Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; Consortium for Scientific American Online Archive (EBSCO); and Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  14. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25 °C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m-2) were found when excluding snow insulation or phase-change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  15. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  16. Design and application of a fiber Bragg grating strain sensor with enhanced sensitivity in the small-scale dam model

    Science.gov (United States)

    Ren, Liang; Chen, Jianyun; Li, Hong-Nan; Song, Gangbing; Ji, Xueheng

    2009-03-01

    Accurate measurement of strain variation and effective prediction of failure within models have been major objectives for strain sensors in dam model tests. In this paper, a fiber Bragg grating (FBG) strain sensor with enhanced strain sensitivity that is packaged by two gripper tubes is presented and applied in the seismic tests of a small-scale dam model. This paper discusses the principle of enhanced sensitivity of the FBG strain sensor. Calibration experiments and reliability tests were conducted to evaluate the sensor's strain transferring characteristics on plates of different material. This paper also investigates the applicability of the FBG strain sensors in seismic tests of a dam model by conducting a comparison between the test measurements of FBG sensors and analytical predictions, monitoring the failure progress and predicting the cracking inside the dam model. Results of the dam model tests prove that the FBG strain sensor has the advantages of small size, high precision, and embeddability. It has a promising potential in the cracking and failure monitoring and identification of the dam model.

  17. Hydrodynamics of Explosion Experiments and Models

    CERN Document Server

    Kedrinskii, Valery K

    2005-01-01

    Hydrodynamics of Explosion presents the research results for the problems of underwater explosions and contains a detailed analysis of the structure and the parameters of the wave fields generated by explosions of cord and spiral charges, a description of the formation mechanisms for a wide range of cumulative flows at underwater explosions near the free surface, and the relevant mathematical models. Shock-wave transformation in bubbly liquids, shock-wave amplification due to collision and focusing, and the formation of bubble detonation waves in reactive bubbly liquids are studied in detail. Particular emphasis is placed on the investigation of wave processes in cavitating liquids, which incorporates the concepts of the strength of real liquids containing natural microinhomogeneities, the relaxation of tensile stress, and the cavitation fracture of a liquid as the inversion of its two-phase state under impulsive (explosive) loading. The problems are classed among essentially nonlinear processes that occur unde...

  18. "ABC's Earthquake" (Experiments and models in seismology)

    Science.gov (United States)

    Almeida, Ana

    2017-04-01

    Ana Almeida, Escola Básica e Secundária Dr. Vieira de Carvalho, Moreira da Maia, Portugal. The purpose of this presentation, in poster format, is to describe an activity that I planned and carried out in a school in the north of Portugal, using a kit of simple, easy-to-use materials - the sismo-box. The activity "ABC's Earthquake" was developed within the discipline of Natural Sciences, with students from the 7th grade, geosciences teachers and teachers from other areas. The possibility of working with the sismo-box was seen as an exciting and promising opportunity: to promote science, and seismology more specifically; to do science by using the models in the box and applying the scientific method with them; to work on and consolidate content and skills in the area of Natural Sciences; and to share these materials with classmates and with teachers from other areas. Throughout the development of the activity, with both students and teachers, it was possible to see the admiration for the models presented in the sismo-box, as well as the interest and enthusiasm in wanting to explore and understand the results after following the procedure proposed in the script. With this activity, we managed to promote: educational success in this subject; a "school culture" of active participation, with quality, rules, discipline and citizenship values; full integration of students with special educational needs; a strengthened role of the school as a cultural, informational and formative institution; up-to-date and innovative activities; and knowledge of "being and doing", contributing to a moment of joy and discovery. Learn by doing!

  19. Modelling of fertilizer drying in a rotary dryer: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    M. G. Silva

    2012-06-01

    Full Text Available This study analyzed the influence of the following parameters: overall volumetric heat transfer coefficient, coefficient of heat loss, drying rate, specific heat of the solid and specific heat of dry air on the prediction of a model for the fertilizer drying in rotary dryers. The method of parametric sensitivity using an experimental design was employed in this study. All parameters studied significantly affected the responses of the drying model. In general, the model showed greater sensitivity to the parameters drying rate and overall volumetric heat transfer coefficient.

  20. Model building experiences using Garp3: problems, patterns and debugging

    NARCIS (Netherlands)

    Liem, J.; Linnebank, F.E.; Bredeweg, B.; Žabkar, J.; Bratko, I.

    2009-01-01

    Capturing conceptual knowledge in QR models is becoming of interest to a larger audience of domain experts. Consequently, we have been training several groups to effectively create QR models during the last few years. In this paper we describe our teaching experiences, the issues the modellers

  1. Modeling a production scale milk drying process: parameter estimation, uncertainty and sensitivity analysis

    DEFF Research Database (Denmark)

    Ferrari, A.; Gutierrez, S.; Sin, Gürkan

    2016-01-01

    A steady state model for a production scale milk drying process was built to help process understanding and optimization studies. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a comprehensive statistical analysis for quality assurance using...... sensitivity analysis of inputs/parameters, and uncertainty analysis to estimate confidence intervals on parameters and model predictions (error propagation). Variance based sensitivity analysis (Sobol's method) was used to quantify the influence of inputs on the final powder moisture as the model output...... at chamber inlet air (variation > 100%). The sensitivity analysis results suggest exploring improvements in the current control (Proportional Integral Derivative) for moisture content at concentrate chamber feed in order to reduce the output variance. It is also confirmed that humidity control at chamber...

  2. An equivalent circuit approach to the modelling of the dynamics of dye sensitized solar cells

    DEFF Research Database (Denmark)

    Bay, L.; West, K.

    2005-01-01

    A model that can be used to interpret the response of a dye-sensitized photo electrode to intensity-modulated light (intensity modulated voltage spectroscopy, IMVS, and intensity modulated photo-current spectroscopy, IMPS) is presented. The model is based on an equivalent circuit approach involving...

  3. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    NARCIS (Netherlands)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco; Asseng, Senthold; Baranowski, Piotr; Basso, Bruno; Bodin, Per; Buis, Samuel; Cammarano, Davide; Deligios, Paola; Destain, Marie France; Dumont, Benjamin; Ewert, Frank; Ferrise, Roberto; François, Louis; Gaiser, Thomas; Hlavinka, Petr; Jacquemin, Ingrid; Kersebaum, Kurt Christian; Kollas, Chris; Krzyszczak, Jaromir; Lorite, Ignacio J.; Minet, Julien; Minguez, M.I.; Montesino, Manuel; Moriondo, Marco; Müller, Christoph; Nendel, Claas; Öztürk, Isik; Perego, Alessia; Rodríguez, Alfredo; Ruane, Alex C.; Ruget, Françoise; Sanna, Mattia; Semenov, Mikhail A.; Slawinski, Cezary; Stratonovitch, Pierre; Supit, Iwan; Waha, Katharina; Wang, Enli; Wu, Lianhai; Zhao, Zhigan; Rötter, Reimund P.

    2018-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in

  4. Modelled climate sensitivity of the mass balance of Morteratschgletscher and its dependence on albedo parameterization

    NARCIS (Netherlands)

    Klok, E.J.; Oerlemans, J.

    2004-01-01

    This paper presents a study of the climate sensitivity of the mass balance of Morteratschgletscher in Switzerland, estimated from a two-dimensional mass balance model. Since the albedo scheme chosen is often the largest error source in mass balance models, we investigated the impact of using

  5. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    Science.gov (United States)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  6. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    Czech Academy of Sciences Publication Activity Database

    Fronzek, S.; Pirttioja, N. K.; Carter, T. R.; Bindi, M.; Hoffmann, H.; Palosuo, T.; Ruiz-Ramos, M.; Tao, F.; Trnka, Miroslav; Acutis, M.; Asseng, S.; Baranowski, P.; Basso, B.; Bodin, P.; Buis, S.; Cammarano, D.; Deligios, P.; Destain, M. F.; Dumont, B.; Ewert, F.; Ferrise, R.; Francois, L.; Gaiser, T.; Hlavinka, Petr; Jacquemin, I.; Kersebaum, K. C.; Kollas, C.; Krzyszczak, J.; Lorite, I. J.; Minet, J.; Ines Minguez, M.; Montesino, M.; Moriondo, M.; Mueller, C.; Nendel, C.; Öztürk, I.; Perego, A.; Rodriguez, A.; Ruane, A. C.; Ruget, F.; Sanna, M.; Semenov, M. A.; Slawinski, C.; Stratonovitch, P.; Supit, I.; Waha, K.; Wang, E.; Wu, L.; Zhao, Z.; Rötter, R.

    2018-01-01

    Roč. 159, jan (2018), s. 209-224 ISSN 0308-521X Keywords : climate-change * crop models * probabilistic assessment * simulating impacts * british catchments * uncertainty * europe * productivity * calibration * adaptation * Classification * Climate change * Crop model * Ensemble * Sensitivity analysis * Wheat Impact factor: 2.571, year: 2016

  7. Sensitivity analysis of a simple linear model of a savanna ecosystem at Nylsvley

    CSIR Research Space (South Africa)

    Getz, WA

    1975-12-01

    Full Text Available The construction of a linear compartmental model of the savanna ecosystem at Nylsvley is discussed. Using crude estimates for the standing crop of the compartments and intercompartmental flow rates the sensitivity of the model to changes in its...

  8. Quantification of remodeling parameter sensitivity - assessed by a computer simulation model

    DEFF Research Database (Denmark)

    Thomsen, J.S.; Mosekilde, Li.; Mosekilde, Erik

    1996-01-01

    We have used a computer simulation model to evaluate the effect of several bone remodeling parameters on vertebral cancellous bone. The menopause was chosen as the base case scenario, and the sensitivity of the model to the following parameters was investigated: activation frequency, formation balance....... However, the formation balance was responsible for the greater part of total mass loss....

  9. Gray's Reinforcement Sensitivity Model and Child Psychopathology: Laboratory and Questionnaire Assessment of the BAS and BIS

    Science.gov (United States)

    Colder, Craig R.; O'Connor, Roisin M.

    2004-01-01

    The Behavioral Approach System (BAS) and Behavioral Inhibition System (BIS) are widely studied components of Gray's sensitivity to reinforcement model. There is growing interest in integrating the BAS and BIS into models of risk for psychopathology, however, few measures assess BAS and BIS functioning in children. We adapted a questionnaire…

  10. Climate of the Last Glacial Maximum: sensitivity studies and model-data comparison with the LOVECLIM coupled model

    Directory of Open Access Journals (Sweden)

    D. M. Roche

    2007-01-01

    Full Text Available The Last Glacial Maximum climate is one of the classical benchmarks used both to test the ability of coupled models to simulate climates different from that of the present-day and to better understand the possible range of mechanisms that could be involved in future climate change. It also bears the advantage of being one of the most well documented periods with respect to palaeoclimatic records, allowing a thorough data-model comparison. We present here an ensemble of Last Glacial Maximum climate simulations obtained with the Earth System model LOVECLIM, including coupled dynamic atmosphere, ocean and vegetation components. The climate obtained using standard parameter values is then compared to available proxy data for the surface ocean, vegetation, oceanic circulation and atmospheric conditions. Interestingly, the oceanic circulation obtained resembles that of the present-day, but with increased overturning rates. As this result is in contradiction with the current palaeoceanographic view, we ran a range of sensitivity experiments to explore the response of the model and the possibilities for other oceanic circulation states. After a critical review of our LGM state with respect to available proxy data, we conclude that the oceanic circulation obtained is not inconsistent with ocean circulation proxy data, although the water characteristics (temperature, salinity are not in full agreement with water mass proxy data. The consistency of the simulated state is further reinforced by the fact that the mean surface climate obtained is shown to be generally in agreement with the most recent reconstructions of vegetation and sea surface temperatures, even at regional scales.

  11. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    Science.gov (United States)

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
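The Morris screening used in this record ranks factors by their "elementary effects": one-at-a-time perturbations taken along random trajectories through the factor space. A minimal sketch of the standard (unrevised) method on the unit hypercube, with a made-up three-factor linear toy model in place of the filtration model, might look like:

```python
import numpy as np

def morris_screening(model, n_factors, n_traj=20, delta=0.5, seed=0):
    """Elementary-effects (Morris) screening on the unit hypercube.

    Along each random trajectory, factors are perturbed one at a time by
    `delta`; the elementary effect of factor i is the resulting change in
    model output divided by delta.  mu* (mean |EE|) ranks influence,
    while sigma flags interactions and non-linearity.
    """
    rng = np.random.default_rng(seed)
    ee = np.empty((n_traj, n_factors))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=n_factors)
        y0 = model(x)
        for i in rng.permutation(n_factors):
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(x_new)
            ee[t, i] = (y1 - y0) / delta
            x, y0 = x_new, y1          # continue the trajectory
    mu_star = np.abs(ee).mean(axis=0)
    sigma = ee.std(axis=0)
    return mu_star, sigma

# Toy model (illustrative only): factor 0 strong, factor 1 weak, factor 2 inert.
f = lambda x: 10 * x[0] + 1 * x[1] + 0 * x[2]
mu_star, sigma = morris_screening(f, n_factors=3)
```

For this linear toy model the elementary effects are exact (10, 1 and 0), so the non-influential factor is cleanly screened out; a real application would replace `f` with a wrapper around the simulation model.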

  12. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    Science.gov (United States)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice Model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards determining the values of these most influential parameters more accurately, through observational studies or by improving existing parameterizations in the sea ice model.
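The variance-based Sobol indices described above can be sketched with the Saltelli pick-and-freeze estimator. This is a simplified illustration, not the study's actual set-up: it uses plain Monte Carlo sampling instead of quasi-random Sobol sequences and an emulator, and a two-factor additive toy model whose analytic first-order indices are known.

```python
import numpy as np

def sobol_indices(model, n_factors, n_samples=16384, seed=0):
    """First-order (S1) and total (ST) Sobol indices via the Saltelli
    pick-and-freeze scheme: two independent sample matrices A and B,
    plus matrices AB_i that take column i from B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_factors))
    B = rng.uniform(size=(n_samples, n_factors))
    fA = np.apply_along_axis(model, 1, A)
    fB = np.apply_along_axis(model, 1, B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(n_factors), np.empty(n_factors)
    for i in range(n_factors):
        AB = A.copy()
        AB[:, i] = B[:, i]
        fAB = np.apply_along_axis(model, 1, AB)
        S1[i] = np.mean(fB * (fAB - fA)) / var        # Saltelli (2010)
        ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / var  # Jansen estimator
    return S1, ST

# Additive toy model: analytic S1 = (100/101, 1/101).
f = lambda x: 10 * x[0] + x[1]
S1, ST = sobol_indices(f, n_factors=2)
```

The total cost is (2 + k)·n model evaluations for k factors, which is exactly why the study trains an emulator before computing the indices.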

  13. Isoprene emissions modelling for West Africa: MEGAN model evaluation and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Ferreira

    2010-09-01

    Full Text Available Isoprene emissions are the largest source of reactive carbon to the atmosphere, with the tropics being a major source region. These natural emissions are expected to change with changing climate and human impact on land use. As part of the African Monsoon Multidisciplinary Analyses (AMMA project the Model of Emissions of Gases and Aerosols from Nature (MEGAN has been used to estimate the spatial and temporal distribution of isoprene emissions over the West African region. During the AMMA field campaign, carried out in July and August 2006, isoprene mixing ratios were measured on board the FAAM BAe-146 aircraft. These data have been used to make a qualitative evaluation of the model performance.

    MEGAN was firstly applied to a large area covering much of West Africa from the Gulf of Guinea in the south to the desert in the north and was able to capture the large scale spatial distribution of isoprene emissions as inferred from the observed isoprene mixing ratios. In particular the model captures the transition from the forested area in the south to the bare soils in the north, but some discrepancies have been identified over the bare soil, mainly due to the emission factors used. Sensitivity analyses were performed to assess the model response to changes in driving parameters, namely Leaf Area Index (LAI, Emission Factors (EF, temperature and solar radiation.

    A high resolution simulation was made of a limited area south of Niamey, Niger, where the higher concentrations of isoprene were observed. This is used to evaluate the model's ability to simulate smaller scale spatial features and to examine the influence of the driving parameters on an hourly basis through a case study of a flight on 17 August 2006.

    This study highlights the complex interactions between land surface processes and the meteorological dynamics and chemical composition of the PBL. This has implications for quantifying the impact of biogenic emissions

  14. Exploring sensitivity of a multistate occupancy model to inform management decisions

    Science.gov (United States)

    Green, A.W.; Bailey, L.L.; Nichols, J.D.

    2011-01-01

    Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first example involves a three-state dynamic model involving occupied states with and without successful reproduction (California spotted owl Strix occidentalis occidentalis), and the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog Lithobates sylvatica breeding and metamorphosis). In many ways, multistate sensitivity metrics behave in similar ways as standard occupancy sensitivities. When equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. 
In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate

  15. Thermal performance sensitivity studies in support of material modeling for extended storage of used nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Cuta, Judith M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fort, James A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Adkins, Harold E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-08-15

    The work reported here is an investigation of the sensitivity of component temperatures of a storage system, including fuel cladding temperatures, in response to age-related changes that could degrade the design-basis thermal behavior of the system. The purpose of these sensitivity studies is to provide a realistic example of how changes in the physical properties or configuration of the storage system components can affect temperatures and temperature distributions. The magnitudes of these sensitivities can provide guidance for identifying appropriate modeling assumptions for thermal evaluations extending long term storage out beyond 50, 100, 200, and 300 years.

  16. Angular sensitivity of modeled scientific silicon charge-coupled devices to initial electron direction

    Energy Technology Data Exchange (ETDEWEB)

    Plimley, Brian, E-mail: brian.plimley@gmail.com [Nuclear Engineering Department, University of California, Berkeley, CA (United States); Coffer, Amy; Zhang, Yigong [Nuclear Engineering Department, University of California, Berkeley, CA (United States); Vetter, Kai [Nuclear Engineering Department, University of California, Berkeley, CA (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2016-08-11

    Previously, scientific silicon charge-coupled devices (CCDs) with 10.5-μm pixel pitch and a thick (650 μm), fully depleted bulk have been used to measure gamma-ray-induced fast electrons and demonstrate electron track Compton imaging. A model of the response of this CCD was also developed and benchmarked to experiment using Monte Carlo electron tracks. We now examine the trade-off in pixel pitch and electronic noise. We extend our CCD response model to different pixel pitch and readout noise per pixel, including pixel pitch of 2.5 μm, 5 μm, 10.5 μm, 20 μm, and 40 μm, and readout noise from 0 eV/pixel to 2 keV/pixel for 10.5 μm pixel pitch. The CCD images generated by this model using simulated electron tracks are processed by our trajectory reconstruction algorithm. The performance of the reconstruction algorithm defines the expected angular sensitivity as a function of electron energy, CCD pixel pitch, and readout noise per pixel. Results show that our existing pixel pitch of 10.5 μm is near optimal for our approach, because smaller pixels add little new information but are subject to greater statistical noise. In addition, we measured the readout noise per pixel for two different device temperatures in order to estimate the effect of temperature on the reconstruction algorithm performance, although the readout is not optimized for higher temperatures. The noise in our device at 240 K increases the FWHM of angular measurement error by no more than a factor of 2, from 26° to 49° FWHM for electrons between 425 keV and 480 keV. Therefore, a CCD could be used for electron-track-based imaging in a Peltier-cooled device.

  17. Structural development and web service based sensitivity analysis of the Biome-BGC MuSo model

    Science.gov (United States)

    Hidy, Dóra; Balogh, János; Churkina, Galina; Haszpra, László; Horváth, Ferenc; Ittzés, Péter; Ittzés, Dóra; Ma, Shaoxiu; Nagy, Zoltán; Pintér, Krisztina; Barcza, Zoltán

    2014-05-01

    -BGC with multi-soil layer). Within the frame of the BioVeL project (http://www.biovel.eu) an open source and domain independent scientific workflow management system (http://www.taverna.org.uk) is used to support 'in silico' experimentation and easy applicability of different models, including Biome-BGC MuSo. Workflows can be built upon functionally linked sets of web services, such as retrieval of meteorological datasets and other parameters; preparation of single-run or spatial-run model simulations; desktop-grid-based Monte Carlo experiments with parallel processing; model sensitivity analysis; etc. The newly developed, Monte Carlo experiment based sensitivity analysis is described in this study, and results are presented on differences in the sensitivity of the original and the developed Biome-BGC model.

  18. The use of regression for assessing a seasonal forecast model experiment

    Science.gov (United States)

    Benestad, Rasmus E.; Senan, Retish; Orsolini, Yvan

    2016-11-01

    We show how factorial regression can be used to analyse numerical model experiments, testing the effect of different model settings. We analysed results from a coupled atmosphere-ocean model to explore how the different choices in the experimental set-up influence the seasonal predictions. These choices included a representation of the sea ice and the height of the top of the atmosphere, and the results suggested that the simulated monthly mean air temperatures poleward of the mid-latitudes were highly sensitive to the specification of the top of the atmosphere, interpreted as the presence or absence of a stratosphere. The seasonal forecasts for the mid-latitudes to high latitudes were also sensitive to whether the model set-up included a dynamic or non-dynamic sea-ice representation, although this effect was somewhat less important than the role of the stratosphere. The air temperature in the tropics was insensitive to these choices.
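Factorial regression of a model experiment can be sketched as an ordinary least-squares fit on a coded 2x2 design: one column per factor (stratosphere present, dynamic sea ice) plus their interaction. The response values below are made up for illustration; only the design logic mirrors the paper.

```python
import numpy as np

# Hypothetical 2x2 factorial experiment: x1 = stratosphere present
# (+1/-1), x2 = dynamic sea ice (+1/-1); y = an illustrative mean air
# temperature response for each of the four model configurations.
x1 = np.array([-1, -1, +1, +1])
x2 = np.array([-1, +1, -1, +1])
y = np.array([0.0, 0.4, 2.0, 2.6])   # illustrative values only

# Design matrix: intercept, two main effects, and their interaction.
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, eff_x1, eff_x2, eff_int = coef
```

Because the coded columns are orthogonal, each coefficient is simply half the difference between the factor-level means, so the fitted effects directly quantify how much each experimental choice moves the forecast (here the "stratosphere" effect dominates, as in the study's conclusion).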

  19. CXTFIT/Excel-A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    Science.gov (United States)

    Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.

    2010-09-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
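The workflow described above - an analytical convection-dispersion solution used as a forward model inside a nonlinear least-squares fit - can be sketched in Python as a stand-in for the Excel/VBA implementation. The sketch below uses the classical Ogata-Banks solution for a first-type boundary (one of the equilibrium CDE solutions CXTFIT evaluates) and fits velocity and dispersion to synthetic breakthrough data; the numbers are illustrative, and the penalty/prior machinery of the code is omitted.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def cde_breakthrough(t, v, D, x=10.0):
    """Ogata-Banks solution of the 1-D equilibrium convection-dispersion
    equation (continuous injection, relative concentration C/C0) at
    distance x from the inlet."""
    a = 2.0 * np.sqrt(D * t)
    return 0.5 * (erfc((x - v * t) / a)
                  + np.exp(v * x / D) * erfc((x + v * t) / a))

# Synthetic breakthrough curve (v = 0.5 cm/h, D = 1.0 cm^2/h) with noise.
rng = np.random.default_rng(1)
t = np.linspace(1.0, 60.0, 40)
obs = cde_breakthrough(t, 0.5, 1.0) + rng.normal(0.0, 0.01, t.size)

# Nonlinear least squares; bounds keep the exponential term finite.
(v_hat, D_hat), cov = curve_fit(cde_breakthrough, t, obs,
                                p0=[0.3, 0.5],
                                bounds=([0.0, 0.05], [5.0, 10.0]))
```

The covariance matrix returned by `curve_fit` gives first-order parameter uncertainties, the simplest analogue of the uncertainty-analysis macros described in the record.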

  20. Development of salt-sensitive hypertension in a sensory denervated model: the underlying mechanisms

    Directory of Open Access Journals (Sweden)

    Donna H Wang

    2001-03-01

    hypertension in CAP-HS rats (by the end of the experiment, CON-HS, 122±3; CAP-NS, 118±10; CAP-HS, 169±9; CAP-HS-CAN, 129±2, p<0.05). Thus, both circulating and tissue RAS in sensory-denervated rats are abnormally regulated in response to a high-salt intake, which may contribute to increased salt sensitivity and account for the effectiveness of candesartan in lowering BP in this model.

  1. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately to each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  2. A reactive transport model for mercury fate in contaminated soil--sensitivity analysis.

    Science.gov (United States)

    Leterme, Bertrand; Jacques, Diederik

    2015-11-01

    We present a sensitivity analysis of a reactive transport model of mercury (Hg) fate in contaminated soil systems. The one-dimensional model, presented in Leterme et al. (2014), couples water flow in variably saturated conditions with Hg physico-chemical reactions. The sensitivity of Hg leaching and volatilisation to parameter uncertainty is examined using the elementary effect method. A test case is built using a hypothetical 1-m depth sandy soil and a 50-year time series of daily precipitation and evapotranspiration. Hg anthropogenic contamination is simulated in the topsoil by separately considering three different sources: cinnabar, non-aqueous phase liquid and aqueous mercuric chloride. The model sensitivity to a set of 13 input parameters is assessed, using three different model outputs (volatilized Hg, leached Hg, Hg still present in the contaminated soil horizon). Results show that dissolved organic matter (DOM) concentration in soil solution and the binding constant to DOM thiol groups are critical parameters, as well as parameters related to Hg sorption to humic and fulvic acids in solid organic matter. Initial Hg concentration is also identified as a sensitive parameter. The sensitivity analysis also brings out non-monotonic model behaviour for certain parameters.
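
    The elementary effect (Morris) screening method used in the paper can be sketched as follows; the three-parameter response below is a made-up stand-in, not the Hg reactive transport model.

```python
import random
import statistics

def elementary_effects(model, n_params, n_traj=100, delta=0.5, seed=1):
    """Morris screening on the unit hypercube: mu* (mean |effect|) ranks
    overall influence, sigma flags non-linearity and interactions."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        # Random base point that leaves room for a +delta step in every direction.
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        y = model(x)
        order = list(range(n_params))
        rng.shuffle(order)
        for i in order:                       # one-at-a-time steps along a trajectory
            x[i] += delta
            y_new = model(x)
            effects[i].append((y_new - y) / delta)
            y = y_new
    mu_star = [statistics.fmean(abs(e) for e in es) for es in effects]
    sigma = [statistics.pstdev(es) for es in effects]
    return mu_star, sigma

# Toy "leaching" response: parameter 0 dominant, parameter 2 nearly inert.
toy = lambda x: 4.0 * x[0] + 1.0 * x[1] + 0.1 * x[2] + 0.5 * x[0] * x[1]
mu_star, sigma = elementary_effects(toy, 3)
```

    The interaction term gives the first two parameters a non-zero sigma, whereas the purely linear third parameter screens out with sigma near zero.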

  3. Optimal experiment design for model selection in biochemical networks.

    Science.gov (United States)

    Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W

    2014-02-20

    Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
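
    A toy one-dimensional version of the k-nearest-neighbour divergence estimate can be sketched as follows. The paper works with multivariate predictive densities; the estimator below is a common kNN KL construction applied to the pooled-sample mixture, not necessarily the authors' exact implementation.

```python
import math
import random

def kth_nn_dist(x, data, k):
    """Distance from x to its k-th nearest neighbour within `data` (1-D)."""
    return sorted(abs(x - d) for d in data)[k - 1]

def kl_knn(xs, ys, k=5):
    """kNN (Kozachenko-Leonenko style) estimate of KL(p || q) in 1-D."""
    n, m = len(xs), len(ys)
    total = 0.0
    for i, x in enumerate(xs):
        rho = kth_nn_dist(x, xs[:i] + xs[i + 1:], k)  # within-sample distance
        nu = kth_nn_dist(x, ys, k)                    # cross-sample distance
        total += math.log(nu / rho)
    return total / n + math.log(m / (n - 1))

def js_knn(xs, ys, k=5):
    """Jensen-Shannon divergence of two samples via KL to the pooled mixture."""
    pooled = xs + ys
    return 0.5 * kl_knn(xs, pooled, k) + 0.5 * kl_knn(ys, pooled, k)
```

    Two competing models whose predictive samples are well separated score a much larger divergence than two models predicting the same distribution, which is exactly the signal the design criterion exploits.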

  4. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
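
    Why tendon slack length dominates can be illustrated with a deliberately crude sketch (a rigid-tendon, force-length-only Hill curve with invented numbers, far simpler than the full MS model of the study): a few percent change in slack length shifts the fiber along the steep part of the force-length curve, whereas the same relative change in maximal isometric force only rescales the output.

```python
import math

def active_force(l_mt, l_opt, l_slack, f_max, width=0.45):
    """Toy Hill-type active force-length curve with a rigid tendon:
    fiber length = musculotendon length minus tendon slack length."""
    rel = (l_mt - l_slack) / l_opt
    return f_max * math.exp(-((rel - 1.0) ** 2) / width)

def mean_force_change(gait_lengths, l_opt, l_slack, f_max, d_slack=0.0, d_fmax=0.0):
    """Mean percent force change over sampled MT lengths when a parameter is
    perturbed by the given relative amount (an LSI-style index)."""
    total = 0.0
    for l_mt in gait_lengths:
        f0 = active_force(l_mt, l_opt, l_slack, f_max)
        f1 = active_force(l_mt, l_opt, l_slack * (1 + d_slack), f_max * (1 + d_fmax))
        total += abs(f1 - f0) / f0 * 100.0
    return total / len(gait_lengths)

gait = [0.295, 0.300, 0.305]   # MT lengths over a "gait cycle" (m, made up)
lsi_slack = mean_force_change(gait, l_opt=0.05, l_slack=0.25, f_max=1000.0, d_slack=0.03)
lsi_fmax = mean_force_change(gait, l_opt=0.05, l_slack=0.25, f_max=1000.0, d_fmax=0.03)
```

    With these made-up numbers, a 3% slack-length error changes the mean force by several percent, while a 3% maximal-force error changes it by exactly 3%.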

  5. Modeled sensitivity of Lake Michigan productivity and zooplankton to changing nutrient concentrations and quagga mussels

    Science.gov (United States)

    Pilcher, Darren J.; McKinley, Galen A.; Kralj, James; Bootsma, Harvey A.; Reavie, Euan D.

    2017-08-01

    The recent decline in Lake Michigan productivity is often attributed to filter feeding by invasive quagga mussels, but some studies also implicate reductions in lakewide nutrient concentrations. We use a 3-D coupled hydrodynamic-biogeochemical model to evaluate the effect of changing nutrient concentrations and quagga mussel filtering on phytoplankton production and phytoplankton and zooplankton biomass. Sensitivity experiments are used to assess the net effect of each change separately and in unison. Quagga mussels are found to have the greatest impact during periods of isothermal mixing, while nutrients have the greatest impact during thermal stratification. Quagga mussels also act to enhance spatial heterogeneity, particularly between nearshore-offshore regions. This effect produces a reversal in the gradient of nearshore-offshore productivity: from relatively greater nearshore productivity in the prequagga lake to relatively lesser nearshore productivity after quaggas. The combined impact of both processes drives substantial reductions in phytoplankton and zooplankton biomass, as well as significant modifications to the seasonality of surface water pCO2, particularly in nearshore regions where mussel grazing continues year-round. These results support growing concern that considerable losses of phytoplankton and zooplankton will yield concurrent losses at higher trophic levels. Comparisons to observed productivity suggest that both quagga mussel filtration and lower lakewide total phosphorus are necessary to accurately simulate recent changes in primary productivity in Lake Michigan.

  6. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices of small magnitude. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate attribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
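
    The quasi-Monte Carlo idea can be sketched with a Halton sequence, a simpler low-discrepancy construction than the Sobol' sequences and Fibonacci lattice rules studied in the paper; the integrand is a separable test function, not the air pollution model.

```python
def van_der_corput(n, base):
    """n-th term of the base-b van der Corput radical-inverse sequence."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton(n_points, dims, bases=(2, 3, 5, 7)):
    """Low-discrepancy points in the unit hypercube [0, 1)^dims."""
    return [[van_der_corput(i + 1, bases[d]) for d in range(dims)]
            for i in range(n_points)]

def integrate(points, f):
    """Equal-weight quadrature: the QMC analogue of the Monte Carlo mean."""
    return sum(f(x) for x in points) / len(points)

# Separable test integrand over [0,1]^4 with exact integral 1.0.
f = lambda x: 16.0 * x[0] * x[1] * x[2] * x[3]
estimate = integrate(halton(4096, 4), f)
```

    At the same sample size, the low-discrepancy points typically beat the 1/sqrt(N) error of plain pseudo-random sampling for smooth integrands such as this one.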

  7. Sensitivity analysis of specific activity model parameters for environmental transport of 3H and dose assessment

    International Nuclear Information System (INIS)

    Rout, S.; Mishra, D.G.; Ravi, P.M.; Tripathi, R.M.

    2016-01-01

    Tritium is one of the radionuclides likely to be released to the environment from Pressurized Heavy Water Reactors. Environmental models are extensively used to quantify the complex environmental transport processes of radionuclides and to assess the impact on the environment. Model parameters exerting a significant influence on model results are identified through a sensitivity analysis (SA). SA is the study of how the variation (uncertainty) in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input parameters. This study was designed to identify the sensitive parameters of the specific activity model (TRS 1616, IAEA) for environmental transfer of 3H following release to air and then to vegetation and animal products. The model includes parameters such as the air-to-soil transfer factor (CRs), the ratio of Tissue Free Water 3H to Organically Bound 3H (Rp), relative humidity (RH), the fractional water content (WCP) and the water equivalent factor (WEQp); any change in these parameters leads to a change in the 3H level in vegetation and animal products, and consequently in the ingestion dose. All these parameters are functions of climate and/or plant, changing with time, space and species. Estimating these parameters at every point in time is time consuming and requires sophisticated instrumentation. It is therefore necessary to identify the sensitive parameters and freeze the least sensitive ones at constant values, for more accurate estimation of the 3H dose in a short time for routine assessment
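
    The screening step can be sketched with normalised sensitivity coefficients, S = (dD/dp)·(p/D), computed by finite differences. The dose function below is a hypothetical stand-in using the parameter names from the abstract; it is NOT the TRS 1616 equations, and the base values are invented.

```python
def dose(p):
    """Hypothetical stand-in for a specific-activity 3H ingestion-dose model
    (illustrative only, NOT the TRS 1616 formulation): dose combines a tissue
    free-water 3H term and an organically bound 3H term."""
    tfwt = p["CRs"] * p["RH"]                 # free-water 3H level (arb. units)
    obt = p["Rp"] * p["WEQp"] * tfwt          # organically bound 3H level
    return p["WCP"] * tfwt + (1.0 - p["WCP"]) * obt

def normalised_sensitivity(base, key, rel_step=0.01):
    """S = (dD/dp) * (p/D): relative dose change per relative parameter
    change, estimated by a central finite difference."""
    up, dn = dict(base), dict(base)
    up[key] *= 1.0 + rel_step
    dn[key] *= 1.0 - rel_step
    return (dose(up) - dose(dn)) / (2.0 * rel_step * dose(base))

base = {"RH": 0.7, "WCP": 0.9, "Rp": 0.54, "WEQp": 0.6, "CRs": 0.5}
ranking = sorted(base, key=lambda k: abs(normalised_sensitivity(base, k)),
                 reverse=True)
```

    Parameters at the bottom of `ranking` are the candidates to freeze at constant values in routine assessments.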

  8. Epidermal adrenergic signaling contributes to inflammation and pain sensitization in a rat model of complex regional pain syndrome.

    Science.gov (United States)

    Li, Wenwu; Shi, Xiaoyou; Wang, Liping; Guo, Tianzhi; Wei, Tzuping; Cheng, Kejun; Rice, Kenner C; Kingery, Wade S; Clark, J David

    2013-08-01

    In many patients, the sympathetic nervous system supports pain and other features of complex regional pain syndrome (CRPS). Accumulating evidence suggests that interleukin (IL)-6 also plays a role in CRPS, and that catecholamines stimulate production of IL-6 in several tissues. We hypothesized that norepinephrine acting through specific adrenergic receptors expressed on keratinocytes stimulates the production of IL-6 and leads to nociceptive sensitization in a rat tibial fracture/cast model of CRPS. Our approach involved catecholamine depletion using 6-hydroxydopamine or, alternatively, guanethidine, to explore sympathetic contributions. Both agents substantially reduced nociceptive sensitization and selectively reduced the production of IL-6 in skin. Antagonism of IL-6 signaling using TB-2-081 also reduced sensitization in this model. Experiments using a rat keratinocyte cell line demonstrated relatively high levels of β2-adrenergic receptor (β2-AR) expression. Stimulation of this receptor greatly enhanced IL-6 expression when compared to the expression of IL-1β, tumor necrosis factor (TNF)-α, or nerve growth factor. Stimulation of the cells also promoted phosphorylation of the mitogen-activated protein kinases P38, extracellular signal-regulated kinase, and c-Jun amino-terminal kinase. Based on these in vitro results, we returned to animal testing and observed that the selective β2-AR antagonist butoxamine reduced nociceptive sensitization in the CRPS model, and that local injection of the selective β2-AR agonist terbutaline resulted in mechanical allodynia and the production of IL-6 in the cells of the skin. No increases in IL-1β, TNF-α, or nerve growth factor levels were seen, however. These data suggest that in CRPS, norepinephrine released from sympathetic nerve terminals stimulates β2-ARs expressed on epidermal keratinocytes, resulting in local IL-6 production, and ultimately, pain sensitization. Published by Elsevier B.V.

  9. Investigating the sensitivity of hurricane intensity and trajectory to sea surface temperatures using the regional model WRF

    Directory of Open Access Journals (Sweden)

    Cevahir Kilic

    2013-12-01

    The influence of sea surface temperature (SST) anomalies on hurricane characteristics is investigated in a set of sensitivity experiments employing the Weather Research and Forecasting (WRF) model. The idealised experiments are performed for the case of Hurricane Katrina in 2005. The first set of sensitivity experiments, with basin-wide changes of the SST magnitude, shows that the intensity varies along with the SST, i.e., an increase in SST leads to an intensification of Katrina. Additionally, the trajectory is shifted to the west (east) with increasing (decreasing) SSTs. The main reason is a strengthening of the background flow. The second set of experiments investigates the influence of Loop Current eddies, idealised by localised SST anomalies. The intensity of Hurricane Katrina is enhanced with increasing SSTs close to the core of the tropical cyclone. Negative nearby SST anomalies reduce the intensity. The trajectory only changes if positive SST anomalies are located west or north of the hurricane centre. In this case the hurricane is attracted by the SST anomaly, which provides an additional moisture source and increased vertical winds.

  10. Model and Computing Experiment for Research and Aerosols Usage Management

    Directory of Open Access Journals (Sweden)

    Daler K. Sharipov

    2012-09-01

    The article presents a mathematical model for the study and management of aerosols released into the atmosphere, together with a numerical algorithm implemented in hardware and software systems for conducting computing experiments.

  11. A user experience model for tangible interfaces for children

    NARCIS (Netherlands)

    Reidsma, Dennis; van Dijk, Elisabeth M.A.G.; van der Sluis, Frans; Volpe, G; Camurri, A.; Perloy, L.M.; Nijholt, Antinus

    2015-01-01

    Tangible user interfaces allow children to take advantage of their experience in the real world when interacting with digital information. In this paper we describe a model for tangible user interfaces specifically for children that focuses mainly on the user experience during interaction and on how

  12. Design of laser-generated shockwave experiments. An approach using analytic models

    International Nuclear Information System (INIS)

    Lee, Y.T.; Trainor, R.J.

    1980-01-01

    Two of the target-physics phenomena which must be understood before a clean experiment can be confidently performed are preheating due to suprathermal electrons and shock decay due to a shock-rarefaction interaction. Simple analytic models are described for these two processes and the predictions of these models are compared with those of the LASNEX fluid physics code. We have approached this work not with the view of surpassing or even approaching the reliability of the code calculations, but rather with the aim of providing simple models which may be used for quick parameter-sensitivity evaluations, while providing physical insight into the problems

  13. Teaching examples for the design of experiments: geographical sensitivity and the self-fulfilling prophecy.

    Science.gov (United States)

    Lendrem, Dennis W; Lendrem, B Clare; Rowland-Jones, Ruth; D'Agostino, Fabio; Linsley, Matt; Owen, Martin R; Isaacs, John D

    2016-01-01

    Many scientists believe that small experiments, guided by scientific intuition, are simpler and more efficient than design of experiments. This belief is strong and persists even in the face of data demonstrating that it is clearly wrong. In this paper, we present two powerful teaching examples illustrating the dangers of small experiments guided by scientific intuition. We describe two, simple, two-dimensional spaces. These two spaces give rise to, and at the same time appear to generate supporting data for, scientific intuitions that are deeply flawed or wholly incorrect. We find these spaces useful in unfreezing scientific thinking and challenging the misplaced confidence in scientific intuition. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Fire Response of Loaded Composite Structures - Experiments and Modeling

    OpenAIRE

    Burdette, Jason A.

    2001-01-01

    In this work, the thermo-mechanical response and failure of loaded, fire-exposed composite structures was studied. Unique experimental equipment and procedures were developed and experiments were performed to assess the effects of mechanical loading and fire exposure on the service life of composite beams. A series of analytical models was assembled to describe the fire growth and structural response processes for the system used in the experiments. This series of models consists of a fire...

  15. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait

    NARCIS (Netherlands)

    Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph

    2016-01-01

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle–tendon (MT) model parameters for each of

  16. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait

    NARCIS (Netherlands)

    Carbone, V.; Krogt, M.M. van der; Koopman, H.F.J.M.; Verdonschot, N.J.

    2016-01-01

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of

  17. Engineering teacher training models and experiences

    Science.gov (United States)

    González-Tirados, R. M.

    2009-04-01

    Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous Training means learning throughout one's life as an Engineering teacher. They are actions designed to update and improve teaching staff, and are systematically offered on the current issues of: Teaching Strategies, training for research, training for personal development, classroom innovations, etc. They are activities aimed at conceptual change, changing the way of teaching and bringing teaching staff up-to-date. At the same time, the Institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design together with some results obtained on: training needs, participation, how it is developing and to what extent students are profiting from it.

  18. Sensitivity to cocaine in adult mice is due to interplay between genetic makeup, early environment and later experience.

    Science.gov (United States)

    Di Segni, Matteo; Andolina, Diego; Coassin, Alessandra; Accoto, Alessandra; Luchetti, Alessandra; Pascucci, Tiziana; Luzi, Carla; Lizzi, Anna Rita; D'Amato, Francesca R; Ventura, Rossella

    2017-10-01

    Although early aversive postnatal events are known to increase the risk of developing psychiatric disorders later in life, they rarely determine the nature and outcome of the psychopathology on their own, indicating that interaction with genetic factors is crucial for the expression of psychopathologies in adulthood. Moreover, it has been suggested that early life experiences can have negative consequences or confer adaptive value in different individuals. Here we suggest that resilience or vulnerability to adult cocaine sensitivity depends on a "triple interaction" between genetic makeup x early environment x later experience. We have recently shown that Repeated Cross Fostering (RCF: pups were fostered by four adoptive mothers from postnatal day 1 to postnatal day 4 and then left with the last adoptive mother until weaning) affected the response to a negative experience in adulthood in opposite directions in two genotypes, leading DBA2/J, but not C57BL/6J, mice toward an "anhedonia-like" phenotype. Here we investigate whether exposure to a rewarding stimulus, instead of a negative one, in adulthood induces an opposite behavioral outcome. To test this hypothesis, we investigated the long-lasting effects of RCF on cocaine sensitivity in C57 and DBA female mice by evaluating conditioned place preference induced by different cocaine doses and the catecholamine prefrontal-accumbal response to cocaine using a "dual probe" in vivo microdialysis procedure. Moreover, cocaine-induced c-Fos activity was assessed in different brain regions involved in the processing of rewarding stimuli. Finally, cocaine-induced spine changes were evaluated in the prefrontal-accumbal system. RCF experience strongly affected the behavioral, neurochemical and morphological responses to cocaine in adulthood in opposite directions in the two genotypes, increasing and reducing the sensitivity to cocaine in C57 and DBA mice, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Mixed-time parallel evolution in multiple quantum NMR experiments: sensitivity and resolution enhancement in heteronuclear NMR

    International Nuclear Information System (INIS)

    Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad

    2007-01-01

    A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t1, is of a non-decaying constant-time nature for a duration that depends on the length of t2, and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1H-15N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of HN-HN NOEs in KcsA in SDS micelles at 50 °C was found to increase the experimental sensitivity by a factor of 1.7±0.3 with a concomitant resolution increase in the indirectly detected 1H dimension. The method is also demonstrated for a situation in which homonuclear 13C-13C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer

  20. Comprehensive study on parameter sensitivity for flow and nutrient modeling in the Hydrological Simulation Program Fortran model.

    Science.gov (United States)

    Luo, Chuan; Li, Zhaofu; Wu, Min; Jiang, Kaixia; Chen, Xiaomin; Li, Hengpeng

    2017-09-01

    Numerous parameters are used to construct the HSPF (Hydrological Simulation Program Fortran) model, which makes the model significantly difficult to calibrate. Parameter sensitivity analysis is an efficient method to identify important model parameters. Through this method, a model's calibration process can be simplified on the basis of an understanding of the model's structure. This study investigated the sensitivity of the flow and nutrient parameters of HSPF using the DSA (differential sensitivity analysis) method in the Xitiaoxi watershed, China. The results showed that flow was mostly affected by parameters related to groundwater and evapotranspiration, including DEEPFR (fraction of groundwater inflow to deep recharge), LZETP (lower-zone evapotranspiration parameter), and AGWRC (base groundwater recession), and most of the sensitive parameters had negative and nonlinear effects on flow. Additionally, nutrient components were commonly affected by parameters from land processes, including MON-SQOLIM (monthly values limiting storage of water quality in overland flow), MON-ACCUM (monthly values of accumulation), MON-IFLW-CONC (monthly concentration of water quality in interflow), and MON-GRND-CONC (monthly concentration of water quality in active groundwater). Among parameters from river systems, KATM20 (unit oxidation rate of total ammonia at 20 °C) had a negative and almost linear effect on ammonia concentration, and MALGR (maximal unit algal growth rate for phytoplankton) had a negative and nonlinear effect on ammonia and orthophosphate concentrations. After calibrating these sensitive parameters, our model performed well for simulating flow and nutrient outputs, with R2 and ENS (Nash-Sutcliffe efficiency) both greater than 0.75 for flow and greater than 0.5 for nutrient components. This study is expected to serve as a valuable complement to the documentation of the HSPF model to help users identify key parameters and provide a reference for performing
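
    The goodness-of-fit criteria quoted above can be computed as follows (generic definitions of ENS and R2, not tied to any HSPF tooling):

```python
def nash_sutcliffe(obs, sim):
    """E_NS = 1 - SSE/SST: 1 is a perfect fit, 0 means the model is no
    better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)
```

    Note the difference: ENS penalises bias and amplitude errors, while R2 only measures linear co-variation, so both are usually reported together.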

  1. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    Science.gov (United States)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more
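
    The coupling of uncertainty and sensitivity analysis on a single set of realisations can be sketched in a few lines (a toy upward-transport model with made-up parameters, not the GoldSim RWMS model): sample the uncertain inputs, propagate them, and reuse the same realisations for both the output spread and an input-output correlation ranking.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def run_realisations(n=5000, seed=0):
    """Toy surface-flux model dominated by biotic (burrowing) transport."""
    rng = random.Random(seed)
    samples = {"burrow": [], "uptake": [], "decay": []}
    flux = []
    for _ in range(n):
        b = rng.lognormvariate(0.0, 0.5)   # burrow-excavation rate (arb.)
        u = rng.lognormvariate(0.0, 0.5)   # plant uptake/turnover rate
        d = rng.uniform(0.9, 1.1)          # weakly varying decay factor
        samples["burrow"].append(b)
        samples["uptake"].append(u)
        samples["decay"].append(d)
        flux.append((2.0 * b + u) * d)     # hypothetical upward flux
    sens = {k: pearson(v, flux) for k, v in samples.items()}
    return flux, sens

flux, sens = run_realisations()
```

    The distribution of `flux` quantifies cumulative uncertainty, while `sens` ranks which inputs would be most worth constraining with further data collection.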

  2. CAN LACK OF EXPERIENCE DELAY THE END OF THE SENSITIVE PHASE FOR SONG LEARNING

    NARCIS (Netherlands)

    SLATER, PJB; JONES, A; TENCATE, C

    1993-01-01

    Some bird species will modify their songs in adulthood, whereas in others, once developed, song appears relatively fixed. However, even in some of the latter, social experience may lead birds to learn songs later than was previously thought possible. Do age-limited learners really exist or is

  3. Undergraduates' Experience of Preparedness for Engaging with Sensitive Research Topics Using Qualitative Research

    Science.gov (United States)

    Simpson, Kerri L.; Wilson-Smith, Kevin

    2017-01-01

    This research explored the experience of five undergraduates who engaged with qualitative research as part of their final dissertation project. Concerns have been raised over the emotional safety of researchers carrying out qualitative research, concerns which increase when researchers are inexperienced, making this a poignant issue for lecturers…

  4. Modelling and experiments on NTM stabilisation at ASDEX upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Urso, Laura

    2009-07-27

    In the next fusion device ITER the so-called neoclassical tearing modes (NTMs) are foreseen as being extremely detrimental to plasma confinement. This type of resistive instability is related to the presence in the plasma of magnetic islands. These are experimentally controlled with local electron cyclotron current drive (ECCD) and the island width decay during NTM stabilisation is modelled using the so-called Modified Rutherford equation. In this thesis, a modelling of the Modified Rutherford equation is carried out and simulations of the island width decay are compared with the experimentally observed ones in order to fit the two free machine-independent parameters present in the equation. A systematic study on a database of NTM stabilisation discharges from ASDEX Upgrade and JT-60U is done within the context of a multi-machine benchmark for extrapolating the ECCD power requirements for ITER. The experimental measurements in both devices are discussed by means of consistency checks and sensitivity analysis and used to evaluate the two fitting parameters present in the Modified Rutherford equation. The influence of the asymmetry of the magnetic island on stabilisation is for the first time included in the model and the effect of ECCD on the marginal island after which the mode naturally decays is quantified. The effect of radial misalignment and over-stabilisation during the experiment are found to be the key quantities affecting the NTM stabilisation. As a main result of this thesis, the extrapolation to ITER of the NTM stabilisation results from ASDEX Upgrade and JT-60U shows that 10MW of ECCD power are enough to stabilise large NTMs as long as the O-point of the island and the ECCD beam are perfectly aligned. In fact, the high ratio between the island size at saturation and the deposition width of the ECCD beam foreseen for ITER is found to imply a maximum allowable radial misalignment of 2-3 cm and little difference in terms of gained performance between

  5. Parameter sensitivity analysis for activated sludge models No. 1 and 3 combined with one-dimensional settling model.

    Science.gov (United States)

    Kim, J R; Ko, J H; Lee, J J; Kim, S H; Park, T J; Kim, C W; Woo, H J

    2006-01-01

    The aim of this study was to suggest a sensitivity analysis technique that can reliably predict effluent quality and minimize calibration efforts without being seriously affected by influent composition and parameter uncertainty in the activated sludge models No. 1 (ASM1) and No. 3 (ASM3) combined with a settling model. The parameter sensitivities for ASM1 and ASM3 were analyzed with three techniques: SVM-Slope, RVM-SlopeMA, and RVM-AreaCRF. The settling model parameters were also considered. The selected highly sensitive parameters were estimated with a genetic algorithm, and the simulation results were compared in terms of deltaEQ. For ASM1, the SVM-Slope technique proved to be an acceptable approach because it identified consistent sensitive parameter sets and yielded smaller deltaEQ under every tested condition. For ASM3, no technique identified consistently sensitive parameters under different conditions, which was regarded as reflecting the high sensitivity of the ASM3 parameters. It should be noted, however, that the SVM-Slope technique produced reliable deltaEQ under every influent condition. Moreover, it was the simplest and easiest of the tested methodologies to code and quantify. Therefore, it was concluded that the SVM-Slope technique could be a reasonable approach for both ASM1 and ASM3.
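    The slope-based measures named above are not specified in detail in this record; as an illustration of the general idea, a normalized local-slope sensitivity coefficient S_j = (theta_j / y) * dy/dtheta_j can be sketched as follows. The toy model and parameter values are hypothetical stand-ins, not the ASM1/ASM3 equations:

```python
import numpy as np

def normalized_slope_sensitivity(model, theta, eps=1e-6):
    """Rank parameters by the normalized local slope S_j = (theta_j / y) * dy/dtheta_j.

    `model` maps a parameter vector to a scalar output (e.g. an effluent-quality
    index); derivatives are taken by central finite differences.
    """
    theta = np.asarray(theta, dtype=float)
    y0 = model(theta)
    sens = np.empty(theta.size)
    for j in range(theta.size):
        h = eps * max(abs(theta[j]), 1.0)
        up, dn = theta.copy(), theta.copy()
        up[j] += h
        dn[j] -= h
        dy = (model(up) - model(dn)) / (2 * h)
        sens[j] = theta[j] / y0 * dy
    return sens

# Hypothetical toy "effluent quality" model: output depends strongly on theta[0].
toy = lambda t: 5.0 * t[0] + 0.1 * t[1] + t[0] * t[2]
s = normalized_slope_sensitivity(toy, [1.0, 1.0, 0.5])
ranking = np.argsort(-np.abs(s))  # most sensitive parameter first
```

    Parameters at the top of such a ranking are the candidates for calibration (here, by a genetic algorithm); the rest can be fixed at literature values.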

  6. Sensitivity of open-water ice growth and ice concentration evolution in a coupled atmosphere-ocean-sea ice model

    Science.gov (United States)

    Shi, Xiaoxu; Lohmann, Gerrit

    2017-09-01

    A coupled atmosphere-ocean-sea ice model is applied to investigate to what degree the area-thickness distribution of new ice formed in open water affects the ice and ocean properties. Two sensitivity experiments are performed which modify the horizontal-to-vertical aspect ratio of open-water ice growth. The resulting changes in the Arctic sea-ice concentration strongly affect the surface albedo, the ocean heat release to the atmosphere, and the sea-ice production. The changes are further amplified through a positive feedback mechanism among the Arctic sea ice, the Atlantic Meridional Overturning Circulation (AMOC), and the surface air temperature in the Arctic, as the Fram Strait sea ice import influences the freshwater budget in the North Atlantic Ocean. Anomalies in sea-ice transport lead to changes in sea surface properties of the North Atlantic and the strength of the AMOC. For the Southern Ocean, the most pronounced change is a warming along the Antarctic Circumpolar Current (ACC), owing to the interhemispheric bipolar seesaw linked to AMOC weakening. Another insight of this study lies in the improvement of our climate model. The ocean component FESOM is a newly developed ocean-sea ice model with an unstructured mesh and multi-resolution. We find that the subpolar sea-ice boundary in the Northern Hemisphere can be improved by tuning the process of open-water ice growth, which strongly influences the sea ice concentration in the marginal ice zone, the North Atlantic circulation, salinity and Arctic sea ice volume. Since the distribution of new ice on open water relies on many uncertain parameters and the knowledge of the detailed processes is currently too crude, it is a challenge to implement the processes realistically into models. Based on our sensitivity experiments, we conclude that there is pronounced uncertainty related to open-water sea-ice growth, which could significantly affect the climate system sensitivity.

  7. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by

  8. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d × 10^3) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable the development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
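    The Monte Carlo baseline that the emulator is compared against is variance-based (Sobol') sensitivity analysis. A minimal sketch of a Saltelli-style first-order estimator, applied to a hypothetical toy model rather than the 1D vascular model, looks like this:

```python
import numpy as np

def sobol_first_order(model, d, n=20000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices.

    Inputs are assumed independent U(0,1); `model` maps an (m, d) array of
    parameter samples to m scalar outputs.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i taken from B
        # Saltelli (2010) estimator: V_i ~ mean(yB * (f(AB_i) - yA))
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Toy additive model with known analytic indices S1 = 0.8, S2 = 0.2.
toy = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1]
S = sobol_first_order(toy, d=2)
```

    The 99.96% saving reported above comes from replacing the expensive `model` calls in such a loop with cheap evaluations of a trained Gaussian process emulator.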

  9. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  10. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for drying of a single granule and one for the drying of a population of granules [using population balance model (PBM)], which was extended by including the gas velocity as extra input...... performance impacts drying behavior, the latter is informative with respect to the variables that primarily need to be controlled during continuous operation. In addition, several GSA techniques are analyzed and compared with respect to the correct conclusion and computational load. (c) 2014 American...

  11. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  12. Modeling of Yb3+-sensitized Er3+-doped silica waveguide amplifiers

    DEFF Research Database (Denmark)

    Lester, Christian; Bjarklev, Anders Overgaard; Rasmussen, Thomas

    1995-01-01

    A model for Yb3+-sensitized Er3+-doped silica waveguide amplifiers is described and numerically investigated in the small-signal regime. The amplified spontaneous emission in the ytterbium-band and the quenching process between excited erbium ions are included in the model. For pump wavelengths...... between 860 and 995 nm, the amplified spontaneous emission in the ytterbium-band is found to reduce both the gain and the optimum length of the amplifier significantly. The achievable gain of the Yb3+-sensitized amplifier is found to be higher than in an Er3+-doped silica waveguide without Yb3+ (18 d...

  13. Design and Implementation of Integrated Surveillance and Modeling Systems for Climate-Sensitive Diseases

    Science.gov (United States)

    Wimberly, M. C.; Merkord, C. L.; Davis, J. K.; Liu, Y.; Henebry, G. M.; Hildreth, M. B.

    2016-12-01

    Climatic variations have a multitude of effects on human health, ranging from the direct impacts of extreme heat events to indirect effects on the vectors and hosts that transmit infectious diseases. Disease surveillance has traditionally focused on monitoring human cases, and in some instances tracking population sizes and infection rates of arthropod vectors and zoonotic hosts. For climate-sensitive diseases, there is a potential to strengthen surveillance and obtain early indicators of future outbreaks by monitoring environmental risk factors using broad-scale sensor networks that include earth-observing satellites as well as ground stations. We highlight the opportunities and challenges of this integration by presenting modeling results and discussing lessons learned from two projects focused on surveillance and forecasting of mosquito-borne diseases. The Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) project integrates malaria case surveillance with remotely-sensed environmental data for early detection of malaria epidemics in the Amhara region of Ethiopia and has been producing weekly forecast reports since 2015. The South Dakota Mosquito Information System (SDMIS) project similarly combines entomological surveillance with environmental monitoring to generate weekly maps for West Nile virus (WNV) in the north-central United States. We are currently implementing a new disease forecasting and risk reporting framework for the state of South Dakota during the 2016 WNV transmission season. Despite important differences in disease ecology and geographic setting, our experiences with these projects highlight several important lessons learned that can inform future efforts at disease early warning based on climatic predictors. These include the need to engage end users in system design from the outset, the critical role of automated workflows to facilitate the timely integration of multiple data streams

  14. ERO modeling and sensitivity analysis of locally enhanced beryllium erosion by magnetically connected antennas

    Science.gov (United States)

    Lasa, A.; Borodin, D.; Canik, J. M.; Klepper, C. C.; Groth, M.; Kirschner, A.; Airila, M. I.; Borodkina, I.; Ding, R.; Contributors, JET

    2018-01-01

    Experiments at JET showed locally enhanced, asymmetric beryllium (Be) erosion at outer wall limiters when magnetically connected ICRH antennas were in operation. A first modeling effort using the 3D erosion and scrape-off layer impurity transport modeling code ERO reproduced qualitatively the experimental outcome. However, local plasma parameters—in particular when 3D distributions are of interest—can be difficult to determine from available diagnostics and so erosion / impurity transport modeling input relies on output from other codes and simplified models, increasing uncertainties in the outcome. In the present contribution, we introduce and evaluate the impact of improved models and parameters with largest uncertainties of processes that impact impurity production and transport across the scrape-off layer, when simulated in ERO: (i) the magnetic geometry has been revised, for affecting the separatrix position (located 50-60 mm away from limiter surface) and thus the background plasma profiles; (ii) connection lengths between components, which lead to shadowing of ion fluxes, are also affected by the magnetic configuration; (iii) anomalous transport of ionized impurities, defined by the perpendicular diffusion coefficient, has been revisited; (iv) erosion yields that account for energy and angular distributions of background plasma ions under the present enhanced sheath potential and oblique magnetic field, have been introduced; (v) the effect of additional erosion sources, such as charge-exchange neutral fluxes, which are dominant in recessed areas like antennas, has been evaluated; (vi) chemically assisted release of Be in molecular form has been included. Sensitivity analysis highlights a qualitative effect (i.e. change in emission patterns) of magnetic shadowing, anomalous diffusion, and inclusion of neutral fluxes and molecular release of Be. The separatrix location, and energy and angular distribution of background plasma fluxes impact erosion

  15. General practitioners' experiences with provision of healthcare to patients with self-reported multiple chemical sensitivity

    DEFF Research Database (Denmark)

    Skovbjerg, Sine; Johansen, Jeanne Duus; Rasmussen, Alice

    2009-01-01

    OBJECTIVE: To describe general practitioners' (GPs') evaluation of and management strategies in relation to patients who seek medical advice because of multiple chemical sensitivity (MCS). DESIGN: A nationwide cross-sectional postal questionnaire survey. The survey included a sample of 1000 Danish...... GPs randomly drawn from the membership list of GPs in the Danish Medical Association. SETTING: Denmark. RESULTS: Completed questionnaires were obtained from 691 GPs (69%). Within the last 12 months 62.4% (n = 431) of the GPs had been consulted by at least one patient with MCS. Of these, 55.2% of the GPs evaluated the patients' complaints as chronic and 46.2% stated that they were rarely able to meet the patients' expectations for healthcare. The majority, 73.5%, had referred patients to other medical specialties. The cause of MCS was perceived as multi-factorial by 64.3% of the GPs, as somatic...

  16. Sensitivity analysis for the study of influential parameters in tyre models

    OpenAIRE

    Kiébré, Rimyaledgo; Anstett-Collin, Floriane; Basset, Michel

    2011-01-01

    This paper studies two tyre models, the Fiala model and the Pacejka model. Both models are nonlinear and depend on parameters which must be identified from measurement data. A major problem is to prepare and plan the experiments efficiently. It is necessary to determine the parameters which have the greatest influence on the model output and account for the output uncertainty, which must be reduced. Therefore, the methodology presented here will help to carry out a var...

  17. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu [Department of Mathematics and Computer Science, Saint Mary’s College, Notre Dame, Indiana 46556 (United States); Anderson, David F., E-mail: anderson@math.wisc.edu [Department of Mathematics, University of Wisconsin—Madison, Madison, Wisconsin 53706 (United States)

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
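    The hybrid pathwise methods themselves are not reproduced here; for orientation, the sketch below shows one of the simpler baselines they improve upon: a finite-difference sensitivity estimate with common random numbers on a birth-death CTMC, simulated with the Gillespie algorithm. The rates, horizon, and sample sizes are illustrative choices, not taken from the paper:

```python
import numpy as np

def birth_death_final(k, d, x0, t_end, rng):
    """Gillespie SSA for a birth-death chain: 0 -> X at rate k, X -> 0 at rate d*X."""
    t, x = 0.0, x0
    while True:
        a_birth, a_death = k, d * x
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)  # time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() < a_birth / a0 else -1

def crn_sensitivity(k, d, x0, t_end, h=0.1, n=400, seed=1):
    """Finite-difference estimate of d E[X(t_end)] / dk using common random numbers:
    the perturbed-up and perturbed-down paths share one random stream per replicate."""
    diffs = []
    for i in range(n):
        up = birth_death_final(k + h, d, x0, t_end, np.random.default_rng(seed + i))
        dn = birth_death_final(k - h, d, x0, t_end, np.random.default_rng(seed + i))
        diffs.append((up - dn) / (2 * h))
    return float(np.mean(diffs))

# For this chain the exact sensitivity is d E[X]/dk = (1 - exp(-d*t)) / d > 0.
est = crn_sensitivity(k=10.0, d=1.0, x0=0, t_end=5.0)
```

    Coupling the two streams reduces the estimator variance relative to independent runs, but the estimate remains biased in h; the unbiased hybrid pathwise methods of the record avoid exactly this trade-off.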

  18. Characterizing parameter sensitivity and uncertainty for a snow model across hydroclimatic regimes

    Science.gov (United States)

    He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.

    2011-01-01

    The National Weather Service (NWS) uses the SNOW17 model to forecast snow accumulation and ablation processes in snow-dominated watersheds nationwide. Successful application of the SNOW17 relies heavily on site-specific estimation of model parameters. The current study undertakes a comprehensive sensitivity and uncertainty analysis of SNOW17 model parameters using forcing and snow water equivalent (SWE) data from 12 sites with differing meteorological and geographic characteristics. The Generalized Sensitivity Analysis and the recently developed Differential Evolution Adaptive Metropolis (DREAM) algorithm are utilized to explore the parameter space and assess model parametric and predictive uncertainty. Results indicate that SNOW17 parameter sensitivity and uncertainty generally varies between sites. Of the six hydroclimatic characteristics studied, only air temperature shows strong correlation with the sensitivity and uncertainty ranges of two parameters, while precipitation is highly correlated with the uncertainty of one parameter. Posterior marginal distributions of two parameters are also shown to be site-dependent in terms of distribution type. The SNOW17 prediction ensembles generated by the DREAM-derived posterior parameter sets contain most of the observed SWE. The proposed uncertainty analysis provides posterior parameter information on parameter uncertainty and distribution types that can serve as a foundation for a data assimilation framework for hydrologic models.

  19. A simple and accurate model for Love wave based sensors: Dispersion equation and mass sensitivity

    Directory of Open Access Journals (Sweden)

    Jiansheng Liu

    2014-07-01

    Full Text Available The dispersion equation is an important tool for analyzing the propagation properties of acoustic waves in layered structures. For Love wave (LW) sensors, the dispersion equation with an isotropic-considered substrate is too rough to yield accurate solutions; the full dispersion equation with a piezoelectric-considered substrate is too complicated to yield simple and practical expressions for optimizing LW-based sensors. In this work, a dispersion equation is introduced for Love waves in a layered structure with an anisotropic-considered substrate and an isotropic guiding layer; an intuitive expression for mass sensitivity is also derived based on the dispersion equation. The new equations are in simple forms similar to the previously reported simplified model with an isotropic substrate. By introducing the Maxwell-Weichert model, these equations are also applicable to the LW device incorporating a viscoelastic guiding layer; the mass velocity sensitivity and the mass propagation loss sensitivity are obtained from the real part and the imaginary part of the complex mass sensitivity, respectively. With Love waves in an elastic SiO2 layer on an ST-90°X quartz structure, for example, comparisons are carried out between the velocities and normalized sensitivities calculated by using different dispersion equations and corresponding mass sensitivities. Numerical results of the method presented in this work are very close to those of the method with a piezoelectric-considered substrate. Another numerical calculation is carried out for the case of a LW sensor with a viscoelastic guiding layer. If the viscosity of the layer is not too large, the effect on the real part of the velocity and the mass velocity sensitivity is relatively small; the propagation loss and the mass loss sensitivity are proportional to the viscosity of the guiding layer.
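    The simplified isotropic-substrate model that this record builds on is commonly written as the classical Love-wave dispersion relation; a standard textbook form (not reproduced from the paper itself) is:

```latex
\tan(q h) = \frac{\mu_s\, b}{\mu_l\, q}, \qquad
q = k \sqrt{\frac{v^2}{v_l^2} - 1}, \qquad
b = k \sqrt{1 - \frac{v^2}{v_s^2}},
```

    where h is the guiding-layer thickness, k the wavenumber, v the phase velocity, v_l and mu_l the shear velocity and shear modulus of the guiding layer, and v_s and mu_s those of the substrate; Love modes exist only for v_l < v < v_s. The work summarised above replaces the isotropic substrate in this relation with an anisotropic one and differentiates the result to obtain the mass sensitivity.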

  20. Spectral evaluation of Earth geopotential models and an experiment ...

    Indian Academy of Sciences (India)

    and an experiment on its regional improvement for geoid modelling. B Erol. Department of Geomatics Engineering, Civil Engineering Faculty,. Istanbul Technical University, Maslak 34469, Istanbul, Turkey. e-mail: bihter@itu.edu.tr. As the number of Earth geopotential models (EGM) grows with the increasing number of data ...

  1. Historical and idealized climate model experiments: an EMIC intercomparison

    DEFF Research Database (Denmark)

    Eby, M.; Weaver, A. J.; Alexander, K.

    2012-01-01

    Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and c...

  2. Model experiments related to outdoor propagation over an earth berm

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1994-01-01

    A series of scale model experiments related to outdoor propagation over an earth berm is described. The measurements are performed with a triggered spark source. The results are compared with data from an existing calculation model based upon uniform diffraction theory. Comparisons are made...

  3. Teaching Structures with Models : Experiences from Chile and the Netherlands

    NARCIS (Netherlands)

    Morales Beltran, M.G.; Borgart, A.

    2012-01-01

    This paper states the importance of using scaled models for the teaching of structures in the curricula of Architecture and Structural Engineering studies. Based on 10 years’ experience working with models for different purposes, with a variety of materials and constructions methods, the authors

  4. Human strategic reasoning in dynamic games: Experiments, logics, cognitive models

    NARCIS (Netherlands)

    Ghosh, Sujata; Halder, Tamoghna; Sharma, Khyati; Verbrugge, Rineke

    2015-01-01

    © Springer-Verlag Berlin Heidelberg 2015. This article provides a three-way interaction between experiments, logic and cognitive modelling so as to bring out a shared perspective among these diverse areas, aiming towards better understanding and better modelling of human strategic reasoning in

  5. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    Science.gov (United States)

    Schultz, Martin T; Lance, Richard F

    2015-01-01

    The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration
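    The five-step survey model described above can be sketched under simple Poisson-capture and idealised-PCR assumptions. All parameter values below are hypothetical illustrations, not the CAWS protocol values:

```python
import math

def edna_survey_sensitivity(conc, volume, extract_eff, aliquot_frac,
                            n_samples, n_pcr):
    """Probability that at least one sample/PCR combination detects the marker.

    Capture is modelled as Poisson with mean conc*volume (marker copies per
    filtered sample); each copy survives extraction with prob extract_eff and
    lands in a given PCR aliquot with prob aliquot_frac, so one PCR reaction
    sees a Poisson number of copies with mean conc*volume*extract_eff*aliquot_frac
    and (idealised) detects whenever that count is nonzero.
    """
    lam_pcr = conc * volume * extract_eff * aliquot_frac
    p_pcr = 1.0 - math.exp(-lam_pcr)            # one PCR replicate fires
    p_sample = 1.0 - (1.0 - p_pcr) ** n_pcr     # any replicate of one sample
    return 1.0 - (1.0 - p_sample) ** n_samples  # any sample in the survey

# Hypothetical low-concentration scenario: 0.5 markers/L, 2 L samples.
base = edna_survey_sensitivity(0.5, 2.0, 0.5, 0.1, n_samples=5, n_pcr=8)
```

    The model reproduces the qualitative findings of the record: at low marker concentrations the false negative rate is high, and sensitivity rises fastest with sample volume because volume acts before the extraction and aliquoting losses.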

  6. Sensitivity analysis in oxidation ditch modelling: the effect of variations in stoichiometric, kinetic and operating parameters on the performance indices

    NARCIS (Netherlands)

    Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.

    2001-01-01

    This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis

  7. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  8. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
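    A minimal sketch of a DELSA-style first-order measure at a single parameter set, assuming the form S_j = (dy/dtheta_j)^2 s_j^2 / sum_k (dy/dtheta_k)^2 s_k^2 with prior parameter variances s_j^2 and finite-difference gradients, applied to a hypothetical toy model rather than the hydrologic models of the paper:

```python
import numpy as np

def delsa_first_order(model, theta, prior_var, eps=1e-6):
    """Local first-order sensitivity fractions at one parameter set (DELSA-style).

    S_j = (dy/dtheta_j)^2 * s_j^2 / sum_k (dy/dtheta_k)^2 * s_k^2,
    with gradients from central finite differences; evaluating this at many
    sampled parameter sets gives the distribution of sensitivity across the
    parameter space.
    """
    theta = np.asarray(theta, dtype=float)
    grad = np.empty(theta.size)
    for j in range(theta.size):
        h = eps * max(abs(theta[j]), 1.0)
        up, dn = theta.copy(), theta.copy()
        up[j] += h
        dn[j] -= h
        grad[j] = (model(up) - model(dn)) / (2 * h)
    contrib = grad**2 * np.asarray(prior_var, dtype=float)
    return contrib / contrib.sum()

# Toy model: the importance of theta[1] depends on where theta[0] sits,
# which is exactly the across-space variation in importance DELSA exposes.
toy = lambda t: t[0] ** 2 + t[0] * t[1]
S_low = delsa_first_order(toy, [0.1, 1.0], prior_var=[1.0, 1.0])
S_high = delsa_first_order(toy, [5.0, 1.0], prior_var=[1.0, 1.0])
```

    Each evaluation costs only 2d model runs, which is why the distributed-local approach scales so much better than the Sobol' method while still revealing how importance shifts between regions of good and poor model fit.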

  9. Sensitivity analyses of a global flood model in different geoclimatic regions

    Science.gov (United States)

    Moylan, C.; Neal, J. C.; Freer, J. E.; Pianosi, F.; Wagener, T.; Sampson, C. C.; Smith, A.

    2017-12-01

    Flood models producing global hazard maps now exist, although the modelled hazard extent varies significantly between them. Beyond explicit structural differences, the reasons for this variation are unknown. Understanding the behaviour of these global flood models is necessary to determine how they can be further developed. Preliminary sensitivity analysis was performed using the Morris method on the Bristol global flood model, which has 37 parameters required to translate the remotely sensed data into input for the underlying hydrodynamic model. This number of parameters implies an excess of complexity for flood modelling and should ideally be reduced. The analysis showed an order-of-magnitude difference in parameter sensitivities when comparing total flooded extent. It also showed that the influence of the most important parameters is largely interactive rather than direct, and some parameters expected to be important proved not to be. Despite these findings, conclusions about the model are limited because the geoclimatic features of the analysed location were fixed. Hence more locations with varied geoclimatic characteristics must be chosen, so that the consistencies and deviations of parameter sensitivities across these features become quantifiable. Locations are selected using a novel sampling technique, which aggregates the input data of a domain into representative metrics of the geoclimatic features hypothesised to correlate with one or more parameters. Combinations of these metrics are sampled across a range of geoclimatic areas, and the sensitivities found are correlated with the sampled metrics. From this work, we identify the main influences on flood risk prediction at the global scale for the model structure used; the methodology is transferable to other global flood models.
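    The Morris elementary-effects screening used here can be sketched as follows; the simple one-at-a-time variant and the toy model below are illustrative, not the Bristol model's actual 37-parameter setup:

```python
import numpy as np

def morris_mu_star(model, d, r=50, delta=0.25, seed=0):
    """Morris screening: mean absolute elementary effect (mu*) per parameter.

    For r randomly placed base points in [0,1]^d, each parameter is perturbed
    by +/-delta and the elementary effect EE_j = (f(x + step*e_j) - f(x)) / step
    is recorded; mu*_j = mean |EE_j| ranks overall parameter influence,
    including influence exerted through interactions.
    """
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, d))
    for t in range(r):
        # Base point kept away from the edges so perturbations stay in [0,1].
        x = rng.uniform(delta, 1.0 - delta, size=d)
        fx = model(x)
        for j in range(d):
            step = delta if rng.random() < 0.5 else -delta
            xp = x.copy()
            xp[j] += step
            ee[t, j] = (model(xp) - fx) / step
    return np.abs(ee).mean(axis=0)

# Toy model standing in for "total flooded extent": x0 dominates, x2 acts
# only through an interaction with x0, and x1 is nearly inert.
toy = lambda x: 10.0 * x[0] + 0.1 * x[1] + 5.0 * x[0] * x[2]
mu_star = morris_mu_star(toy, d=3)
```

    The spread of the elementary effects (their standard deviation, not computed here) is what separates directly influential parameters from those, like x2 above, whose effect is interactive, the distinction the record highlights.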

  10. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
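The core DELSA idea, a derivative-based first-order index evaluated at many points across the parameter space, can be sketched as below. The two-parameter reservoir-style function and the prior variances are illustrative assumptions, not the paper's models:

```python
import numpy as np

def delsa_first_order(f, samples, prior_var, h=1e-6):
    """DELSA-style local first-order indices at many parameter sets.

    At each sample theta: S1_j = (df/dtheta_j)^2 * prior_var_j / V_L,
    where V_L is the linearised total output variance at theta. The
    result is a *distribution* of sensitivities across parameter space,
    not a single global number.
    """
    samples = np.asarray(samples, dtype=float)
    n, p = samples.shape
    S1 = np.zeros((n, p))
    for i, theta in enumerate(samples):
        grad = np.zeros(p)
        y0 = f(theta)
        for j in range(p):
            tp = theta.copy()
            tp[j] += h
            grad[j] = (f(tp) - y0) / h       # forward finite difference
        contrib = grad**2 * np.asarray(prior_var)
        V_L = contrib.sum()
        S1[i] = contrib / V_L if V_L > 0 else 0.0
    return S1

# Toy nonlinear reservoir-style function of two parameters.
def toy(theta):
    k, s = theta
    return s * np.exp(-k)

rng = np.random.default_rng(1)
samples = rng.uniform([0.1, 0.1], [2.0, 1.0], size=(100, 2))
S1 = delsa_first_order(toy, samples, prior_var=[0.25, 0.05])
```

Because each evaluation needs only p + 1 model calls, the whole distribution costs far less than a variance-based Sobol' analysis of comparable resolution, which is the trade-off the abstract highlights.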

  11. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, MCMC sampling entails a large number of model calls and can easily become computationally unwieldy if the high-fidelity hydrogeologic simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling of a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Splines (BMARS) algorithm, so that MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank parameter importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further reducing computational effort. The posterior probability distribution of the input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool for reducing parameter uncertainties of a groundwater system.
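A minimal sketch of the surrogate-based Bayesian workflow: fit a cheap surrogate on a small training set, then run Metropolis MCMC against the surrogate instead of the expensive model. A quadratic least-squares fit stands in for BMARS, and a toy two-parameter function stands in for MODFLOW; all names and values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Expensive "model": stand-in for a MODFLOW head prediction (assumed form).
def expensive_model(theta):
    k, r = theta                         # e.g. log-conductivity, recharge
    return 3.0 * k - 1.5 * r + 0.5 * k * r

# 1) Build a cheap surrogate from a small training set (quadratic least
#    squares here; the paper uses BMARS, which this sketch does not reproduce).
def features(T):
    k, r = T[:, 0], T[:, 1]
    return np.column_stack([np.ones(len(T)), k, r, k * r, k**2, r**2])

train = rng.uniform(-1, 1, size=(50, 2))
y_train = np.array([expensive_model(t) for t in train])
coef, *_ = np.linalg.lstsq(features(train), y_train, rcond=None)
surrogate = lambda theta: (features(theta[None, :]) @ coef)[0]

# 2) Metropolis sampling against the surrogate instead of the full model.
theta_true = np.array([0.4, -0.2])
obs = expensive_model(theta_true) + rng.normal(0, 0.05)   # one synthetic datum
sigma = 0.05

def log_post(theta):
    if np.any(np.abs(theta) > 1):        # uniform prior on [-1, 1]^2
        return -np.inf
    resid = obs - surrogate(theta)
    return -0.5 * (resid / sigma) ** 2

chain = np.zeros((5000, 2))
theta, lp = np.zeros(2), log_post(np.zeros(2))
for i in range(len(chain)):
    prop = theta + rng.normal(0, 0.1, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior_mean = chain[2000:].mean(axis=0)     # discard burn-in
```

Every one of the 5000 posterior evaluations hits the surrogate, so the expensive model is called only 50 times for training plus once for the synthetic observation, which is the computational saving the abstract describes.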

  12. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area under changes in public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that most influence the model results, a parametric sensitivity analysis was performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence these are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the model results, and underline the need to pay particular attention to their estimation. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
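One-Factor-At-A-Time screening with elasticity-style sensitivity indices (relative output change over relative input change, comparable to the 0.22-1.28 indices reported) can be sketched as below. The gross-margin function and parameter ranges are hypothetical illustrations, not the Ait Ben Yacoub model:

```python
import numpy as np

def oat_screening(f, base, low, high):
    """One-Factor-At-A-Time screening around a baseline scenario.

    Index per parameter: relative change in output divided by relative
    change in the parameter (an elasticity-style measure).
    """
    base = np.asarray(base, dtype=float)
    y0 = f(base)
    index = np.zeros(len(base))
    for j in range(len(base)):
        x_lo, x_hi = base.copy(), base.copy()
        x_lo[j], x_hi[j] = low[j], high[j]
        dy = (f(x_hi) - f(x_lo)) / y0          # relative output change
        dx = (high[j] - low[j]) / base[j]      # relative input change
        index[j] = abs(dy / dx)
    return index

# Hypothetical gross-margin function of a yield-response coefficient,
# irrigation water supply and precipitation (names illustrative only).
def gross_margin(x):
    ky, water, precip = x
    return 1000 * ky * np.sqrt(water + precip)

index = oat_screening(gross_margin,
                      base=[1.0, 400.0, 100.0],
                      low=[0.8, 300.0, 50.0],
                      high=[1.2, 500.0, 150.0])
```

Because each factor is varied alone, this design is cheap (two extra runs per factor) but, unlike the Morris method, it cannot detect interactions.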

  13. Measurement Error in Designed Experiments for Second Order Models

    OpenAIRE

    McMahan, Angela Renee

    1997-01-01

    Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest ...
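A quick simulation illustrates the Berkson error model discussed above: the realised factor level is the target level plus an independent error, and fitting a second-order model against the target levels leaves the slope and quadratic coefficients consistent while the intercept absorbs beta2 * sigma_u^2. The coefficients and error sizes below are illustrative, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Berkson measurement error: the factor is *set* to a target level x,
# but the level actually realised is x + u, with u independent of x.
beta = np.array([2.0, 1.5, 0.8])      # true model y = b0 + b1*x + b2*x^2
sigma_u = 0.3                         # ME standard deviation (illustrative)
x_target = np.tile(np.array([-1.0, 0.0, 1.0]), 2000)   # designed levels
x_actual = x_target + rng.normal(0, sigma_u, size=x_target.size)
y = (beta[0] + beta[1] * x_actual + beta[2] * x_actual**2
     + rng.normal(0, 0.1, size=x_target.size))

# Fit the second-order model against the *target* levels, as the
# experimenter would, since only the targets are recorded.
X = np.column_stack([np.ones_like(x_target), x_target, x_target**2])
bhat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Under Berkson error the linear and quadratic coefficients stay
# consistent, but the intercept is biased by beta2 * sigma_u**2.
intercept_bias = bhat[0] - beta[0]
```

The quadratic term is what makes Berkson error matter here: for a purely first-order model the same simulation would leave all coefficients unbiased and only inflate the residual variance.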

  14. Sensitivity Studies on the Influence of Aerosols on Cloud and Precipitation Development Using WRF Mesoscale Model Simulations

    Science.gov (United States)

    Thompson, G.; Eidhammer, T.; Rasmussen, R.

    2011-12-01

Using the WRF model in simulations of shallow and deep precipitating cloud systems, we investigated the sensitivity to aerosols acting as cloud condensation and ice nuclei. A global climatological dataset of sulfates, sea salts, and dust was used as input for a control experiment. Sensitivity experiments with significantly more polluted conditions were conducted to analyze the resulting impacts on cloud and precipitation formation. Simulations were performed using the WRF model with explicit treatment of aerosols added to the Thompson et al. (2008) bulk microphysics scheme. The modified scheme achieves droplet formation using pre-tabulated CCN activation tables provided by a parcel model. Ice nucleation is parameterized as a function of dust aerosols as well as homogeneous freezing of deliquesced aerosols. The basic processes of aerosol activation and removal by wet scavenging are considered, but aerosol characteristic size and hygroscopicity do not change due to evaporating droplets; in other words, aerosol processing was ignored. Unique aspects of this study include the use of one- to four-kilometer grid spacings and the direct parameterization of ice nucleation from aerosols rather than from typical temperature and/or supersaturation relationships alone. Initial results from simulations of a deep winter cloud system and its interaction with significant orography show contrasting sensitivities in regions of warm rain versus mixed liquid and ice conditions. The classical view of higher precipitation amounts in relatively clean maritime clouds with fewer but larger droplets is confirmed for regions dominated by the warm-rain process. However, due to complex interactions with the ice phase and snow riming, the simulations revealed the reverse situation in high-terrain areas dominated by snow reaching the surface. Results for other cloud systems will be summarized at the conference.

  15. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air-sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (Russell ocean model and Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models, such as ventilation, mixing, eddy stirring and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55% (10%) in the Southern Ocean and overestimate it by about 17% (underestimate by 2%) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12-34%, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator, and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing, vertical advection and mesoscale eddy transport. At depth, GISSER, which has a significant warm bias, remineralizes nutrients and carbon faster, thereby producing more nutrients and carbon at depth, which

  16. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    Science.gov (United States)

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.

  17. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
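The derivative propagation that GRESS and ADGEN obtain by transforming model source code ("computer calculus") can be illustrated with a minimal forward-mode automatic-differentiation sketch. This is not those systems, just the same idea in miniature, and the toy model is an illustrative assumption:

```python
import math

class Dual:
    """Minimal forward-mode AD value carrying f and df/dp together --
    the same derivative information GRESS/ADGEN produce by augmenting
    the model's source code."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__
    def exp(self):
        e = math.exp(self.val)
        return Dual(e, e * self.der)   # chain rule through exp

# Toy "model": y = k * exp(-2k), e.g. a response at fixed time t = 2.
def model(k):
    return k * ((-2.0) * k).exp()

k = Dual(0.5, 1.0)          # seed dk/dk = 1
y = model(k)
# Analytically dy/dk = exp(-2k) * (1 - 2k), which vanishes at k = 0.5,
# so y.der is 0.0 while y.val is 0.5 * exp(-1).
```

Like the analytic approach in the paper, this yields exact derivatives in a single model run, rather than the noisy estimates and repeated runs of finite differencing.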

  18. Carbon dynamics modelization and biological community sensitivity to temperature in an oligotrophic freshwater Antarctic lake

    DEFF Research Database (Denmark)

    Antonio Villaescusa, Juan; Jorgensen, Sven Erik; Rochera, Carlos

    2016-01-01

…, and therefore its abundance in lake water, greatly increased when the temperature rise was higher. However, the highly variable meteorology across years in such an extreme environment means that our model may fit well for some years but fail to describe the system in years with contrasting meteorological… conditions. Despite this assumption, the model reveals that maritime Antarctic lakes are very sensitive to temperature changes. This response can be monitored using eco-exergy, which allows a description of the system's complexity. Due to this temperature sensitivity, the warming occurring in this area would… interannual variability in the area of Byers Peninsula. With the aim of increasing knowledge of this ecosystem and its sensitivity to climate change as a model ecosystem, as well as to calibrate the extent of the interannual variability, a carbon flow model was developed partly describing its microbial…

  19. Sensitive Analysis for the Efficiency of a Parabolic Trough Solar Collector Based on Orthogonal Experiment

    Directory of Open Access Journals (Sweden)

    Xiaoyan Liu

    2015-01-01

Many studies have examined the factors governing the thermal efficiency of a parabolic trough solar collector, that is, its optical-thermal efficiency. However, the available studies are limited to one or two factors. The aim of this paper is to investigate the effect of multiple factors on the system's efficiency in a cold climate region. Taking climatic performance into account, the average outlet temperature of an LS-2 collector was successfully simulated by coupling the SolTrace software with CFD software. The effects of different factors on the instantaneous efficiency were determined by an orthogonal experiment and single-factor experiments, clearly establishing the degree of influence of each factor on the collector's instantaneous efficiency. The results show that, ranked by average maximal deviation, the order of influence is: inlet temperature, solar radiation intensity, diameter, flow rate, condensation area, pipe length, and ambient temperature. These encouraging results provide a reference for the exploitation and utilization of parabolic trough solar collectors in cold climate regions.
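The orthogonal-experiment range analysis used to rank factor influence can be sketched with a standard L8(2^7) orthogonal array: run the design, average the response at each level of each factor, and rank factors by the range of those level means. The efficiency function and factor levels below are illustrative stand-ins for the coupled SolTrace/CFD simulation, not values from the paper:

```python
import numpy as np

# Taguchi L8(2^7) orthogonal array (levels coded 0/1); three factors are
# assigned to the first three columns, the remaining columns stay unused.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

# Physical levels for inlet temperature [K], irradiance [W/m^2] and
# flow rate [kg/s] (values illustrative only).
levels = {0: (300.0, 350.0), 1: (600.0, 1000.0), 2: (0.5, 2.0)}

# Hypothetical stand-in for the simulated collector efficiency.
def efficiency(T_in, G, m_dot):
    return 0.75 - 2e-4 * (T_in - 300) - 20.0 / G + 0.01 * m_dot

runs = np.array([
    efficiency(levels[0][row[0]], levels[1][row[1]], levels[2][row[2]])
    for row in L8
])

# Range analysis: difference between mean responses at the two levels of
# each factor; a larger range means a more influential factor.
ranges = {}
for col, name in zip(range(3), ["inlet temperature", "irradiance", "flow rate"]):
    m0 = runs[L8[:, col] == 0].mean()
    m1 = runs[L8[:, col] == 1].mean()
    ranges[name] = abs(m1 - m0)
```

Because the array is balanced, the other factors cancel out of each level mean, so eight runs suffice to separate the three main effects; a full factorial over the same seven columns would need 128 runs.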