WorldWideScience

Sample records for model sensitivity experiments

  1. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    Full Text Available This paper describes a set of sensitivity experiments with several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography derived directly from the US Navy data, and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling model performance, even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (the asymmetric part of the geopotential at 500 mb), where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between the amplitude of the stationary waves and the smoothness of the surface fields.
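
    The Gibbs problem and its spectral remedy can be illustrated with a one-dimensional toy example (not the paper's actual filter, whose details the abstract does not give): a truncated Fourier series of a step function overshoots near the jump, and damping the retained coefficients with a Lanczos sigma factor suppresses the oscillations.

```python
import numpy as np

def partial_sum(x, n_terms, sigma_filter=False):
    """Truncated Fourier series of a square wave on [-pi, pi].

    With sigma_filter=True, each coefficient is damped by the Lanczos
    sigma factor sin(pi*n/N)/(pi*n/N), a classic remedy for Gibbs
    oscillations in spectral representations.
    """
    s = np.zeros_like(x)
    N = 2 * n_terms + 1  # highest retained wavenumber plus one
    for k in range(1, n_terms + 1):
        n = 2 * k - 1  # square wave has odd harmonics only
        coeff = 4.0 / (np.pi * n)
        if sigma_filter:
            coeff *= np.sinc(n / N)  # np.sinc(x) = sin(pi x)/(pi x)
        s += coeff * np.sin(n * x)
    return s

x = np.linspace(-np.pi, np.pi, 2001)
raw = partial_sum(x, 20)
smoothed = partial_sum(x, 20, sigma_filter=True)
print(f"max raw      = {raw.max():.3f}")       # ~1.18: Gibbs overshoot
print(f"max filtered = {smoothed.max():.3f}")  # much closer to the true 1.0
```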

  2. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10{sup {minus}2} to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), non-dispersive infrared (NDIR), and FTIR methods, which are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling relies on well-defined and validated mechanisms for the CO/H{sub 2}/oxidant systems.
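
    The gradient-sensitivity analyses mentioned above can be sketched with a toy mechanism (a hypothetical A -> B -> C chain, not one of the program's actual hydrocarbon mechanisms): integrate the rate equations, then estimate normalized sensitivities of a product concentration to each rate constant by central finite differences.

```python
import numpy as np

def rk4(f, y0, t):
    """Classic fixed-step RK4 integrator, keeping the sketch dependency-free."""
    y = np.array(y0, float)
    out = [y.copy()]
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y)
        k2 = f(t[i] + h / 2, y + h / 2 * k1)
        k3 = f(t[i] + h / 2, y + h / 2 * k2)
        k4 = f(t[i] + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

def chain(k):
    """First-order chain A -> B -> C with rate constants k[0], k[1]."""
    def f(t, y):
        a, b, c = y
        return np.array([-k[0] * a, k[0] * a - k[1] * b, k[1] * b])
    return f

t = np.linspace(0.0, 5.0, 501)
k = np.array([1.0, 0.5])
base = rk4(chain(k), [1.0, 0.0, 0.0], t)

# Normalized local sensitivity of final [C] to each rate constant,
# S_j = (k_j / C) * dC/dk_j, estimated by central finite differences:
sens = []
for j in range(2):
    dk = 1e-4 * k[j]
    kp, km = k.copy(), k.copy()
    kp[j] += dk
    km[j] -= dk
    cp = rk4(chain(kp), [1.0, 0.0, 0.0], t)[-1, 2]
    cm = rk4(chain(km), [1.0, 0.0, 0.0], t)[-1, 2]
    sens.append(k[j] * (cp - cm) / (2 * dk) / base[-1, 2])
print(sens)  # both positive: speeding either step raises the final [C]
```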

  3. 12th Rencontres du Vietnam : High Sensitivity Experiments Beyond the Standard Model

    CERN Document Server

    2016-01-01

    The goal of this workshop is to gather researchers, theoreticians, experimentalists and young scientists searching for physics beyond the Standard Model of particle physics using high sensitivity experiments. The Standard Model has been very successful in describing the particle physics world; the Brout-Englert-Higgs boson discovery is its latest major confirmation. Complementary to the high energy frontier explored at colliders, real opportunities for discovery exist at the precision frontier, testing fundamental symmetries and tracking small deviations from the SM.

  4. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests become increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of these source materials, and fingerprint properties were selected using different statistical procedures (the Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or a correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the initially known mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of
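
    A minimal sketch of the multivariate mixing model at the core of such experiments (synthetic numbers, not the study's data): given source fingerprints and a laboratory mixture, recover the mixing proportions by non-negative least squares with a sum-to-one constraint.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical source "fingerprints": rows = tracer properties, cols = sources.
A = np.array([[120.0, 80.0, 30.0],
              [ 10.0, 25.0, 60.0],
              [  5.0, 15.0,  2.0],
              [200.0, 90.0, 40.0]])
true_p = np.array([0.5, 0.3, 0.2])
mix = A @ true_p  # laboratory mixture, assuming conservative tracers

# Enforce sum(p) == 1 with a heavily weighted extra equation, then solve
# the non-negative least squares problem:
w = 1e3
A_aug = np.vstack([A, w * np.ones(3)])
b_aug = np.append(mix, w * 1.0)
p_est, _ = nnls(A_aug, b_aug)
print(p_est)  # recovers ~[0.5, 0.3, 0.2]
```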

  5. Model experiments on the sensitization of polyethylene cross-linking by oligobutadienes

    International Nuclear Information System (INIS)

    Brede, O.; Beckert, D.; Hoesselbarth, B.; Specht, W.; Tannert, F.; Wunsch, K.

    1988-01-01

    In the presence of ≥ 1 % of 1,2-oligobutadiene, the efficiency of the radiation-induced cross-linking of polyethylene was found to be increased in comparison to the pure matrix. Model experiments with solutions of the sensitizer in long-chain n-alkanes showed that, after addition of alkyl radicals onto the oligobutadiene (reaction with the vinyl groups), the sensitizer forms its own network to which the alkyl groups are grafted. Whereas this grafting reaction proceeds with a G value of about 5, the vinyl consumption occurs at about three times that rate, indicating a short (intra- and intermolecular) vinyl reaction chain. Pulse radiolysis measurements in solutions of the 1,2-oligobutadiene in n-hexadecane and in molten PE blends resulted in the observation of radical transients of the cross-linking reaction. (author)

  6. Sensitivity of the polypropylene to the strain rate: experiments and modeling

    International Nuclear Information System (INIS)

    Abdul-Latif, A.; Aboura, Z.; Mosleh, L.

    2002-01-01

    Full text. The main goals of this work are, first, to evaluate experimentally the strain-rate-dependent deformation of polypropylene under tensile load and, second, to propose a model capable of appropriately describing the mechanical behavior of this material, especially its sensitivity to the strain rate. Several tensile tests are performed at different quasi-static strain rates in the range of 10{sup -5} s{sup -1} to 10{sup -1} s{sup -1}. In addition, some relaxation tests are conducted, introducing strain-rate jumps during testing. Within the framework of elastoviscoplasticity, a phenomenological model is developed for describing the non-linear mechanical behavior of the material under uniaxial loading paths. Under the small-strain assumption, the sensitivity of polypropylene to the strain rate, being of particular interest in this work, is accordingly taken into account. Since this model is based on internal state variables, we assume that the material's sensitivity to the strain rate is governed by the kinematic hardening variable, notably its modulus, and by the accumulated viscoplastic strain. As far as the elastic behavior is concerned, it is noticed that such behavior is only slightly influenced by the employed strain-rate range. For this reason, the elastic behavior is determined classically, i.e. without coupling to the strain-rate-dependent deformation. The inelastic behavior of the material is, by contrast, thoroughly dictated by the applied strain rate. Hence, the model parameters are calibrated using several experimental databases for different strain rates (10{sup -5} s{sup -1} to 10{sup -1} s{sup -1}). Among these experimental results, some experiments related to relaxation and to strain-rate jumps during testing (increasing or decreasing) are also used in order to refine the identification of the model parameters. To validate the calibrated model parameters, simulation tests are performed.
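
    The strain-rate sensitivity described above can be sketched with a generic one-dimensional overstress (Perzyna-type) law; this is an illustrative stand-in, not the authors' internal-state-variable model, and all parameter values are invented.

```python
def uniaxial_evp(strain_rate, eps_target=0.05, n_steps=5000,
                 E=1500.0, sigma_y=10.0, K=30.0, m=0.2):
    """Constant-strain-rate tension with a 1D Perzyna-type overstress law
    (hypothetical parameters, not the paper's calibration):
        sigma = E * (eps - eps_vp)
        d(eps_vp)/dt = ((sigma - sigma_y)/K)**(1/m)   when sigma > sigma_y
    Returns the stress reached at eps_target.
    """
    dt = eps_target / strain_rate / n_steps  # time step for this rate
    eps = eps_vp = 0.0
    for _ in range(n_steps):
        eps += strain_rate * dt
        over = E * (eps - eps_vp) - sigma_y  # overstress above yield
        if over > 0.0:
            eps_vp += (over / K) ** (1.0 / m) * dt  # viscoplastic flow
    return E * (eps - eps_vp)

slow = uniaxial_evp(1e-4)  # quasi-static end of the tested range
fast = uniaxial_evp(1e-2)
print(slow, fast)  # higher strain rate -> higher flow stress
```

    The steady-state flow stress of this law is sigma_y + K * rate**m, so the rate dependence is built in explicitly; the paper instead carries it through the kinematic hardening modulus and accumulated viscoplastic strain.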

  7. Sensitivity experiments with a one-dimensional coupled plume - iceflow model

    Science.gov (United States)

    Beckmann, Johanna; Perette, Mahé; Alexander, David; Calov, Reinhard; Ganopolski, Andrey

    2016-04-01

    Over the last few decades the Greenland Ice Sheet mass balance has become increasingly negative, caused by enhanced surface melting and speedup of the marine-terminating outlet glaciers at the ice sheet margins. Glacier speedup has been related, among other factors, to enhanced submarine melting, which in turn is caused by warming of the surrounding ocean and, less obviously, by increased subglacial discharge. While ice-ocean processes potentially play an important role in recent and future mass balance changes of the Greenland Ice Sheet, they remain poorly understood physically. In this work we performed numerical experiments with a one-dimensional plume model coupled to a one-dimensional iceflow model. First we investigated the sensitivity of the submarine melt rate to changes in ocean properties (temperature and salinity), to the amount of subglacial discharge, and to the geometry of the glacier tongue itself. A second set of experiments investigates the response of the coupled model, i.e. the dynamical response of the outlet glacier to altered submarine melt, which results in a new glacier geometry and updated melt rates.

  8. Sensitivity analysis for CORSOR models simulating fission product release in LOFT-LP-FP-2 severe accident experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club; Pourgol-Mohammad, Mohammad [Sahand Univ. of Technology, Tabriz (Iran, Islamic Republic of). Dept. of Mechanical Engineering; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)

    2017-03-15

    This paper deals with the simulation, sensitivity and uncertainty analysis of the LP-FP-2 experiment at the LOFT test facility. The test facility simulates the major components and system response of a pressurized water reactor during a LOCA. The MELCOR code is used for predicting the fission product release from the core fuel elements in the LOFT LP-FP-2 experiment. Moreover, sensitivity and uncertainty analysis is performed for the different CORSOR models simulating the release of fission products in severe accident calculations for nuclear power plants. The calculated values for the fission product release under the different modeling options are compared to the experimental data available from the experiment. In conclusion, the performance of the 8 CORSOR modeling options available in the code structure is assessed.

  9. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
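
    The DOE-plus-regression idea can be sketched as follows: run a simulation (here a hypothetical deterministic stand-in) at the points of a full 2^3 factorial design, then fit a first-order polynomial metamodel whose coefficients estimate the main effects.

```python
import numpy as np
from itertools import product

def simulate(x):
    """Stand-in for one run of a simulation model (hypothetical response)."""
    x1, x2, x3 = x
    return 50.0 + 8.0 * x1 - 3.0 * x2 + 0.1 * x3 + 0.5 * x1 * x2

# Full 2^3 factorial design in coded units (-1, +1), one run per point:
design = np.array(list(product([-1.0, 1.0], repeat=3)))
y = np.array([simulate(x) for x in design])

# First-order polynomial metamodel  y ~ b0 + b1*x1 + b2*x2 + b3*x3:
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # main effects recovered; the x1*x2 interaction lands in the residual
```

    Because the factorial design is orthogonal, the omitted interaction does not bias the estimated main effects, which is exactly why such designs are favored for screening.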

  10. Sensitivity experiments of a regional climate model to the different convective schemes over Central Africa

    Science.gov (United States)

    Armand J, K. M.

    2017-12-01

    In this study, version 4 of the Regional Climate Model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC) and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind and aerosol optical depth. Emphasis in the model results is placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, a region spanning the tropical rain forest and characterised by a bimodal rain regime. We found that regardless of the period or the simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, model results are similar in the representation of regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that the use of BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.

  11. Sensitivity studies and a simple ozone perturbation experiment with a truncated two-dimensional model of the stratosphere

    Science.gov (United States)

    Stordal, Frode; Garcia, Rolando R.

    1987-01-01

    The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
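
    The truncation idea, representing a tracer's latitudinal structure by its projection onto low-order Legendre polynomials in mu = sin(latitude), can be sketched as follows (hypothetical field, not the model's actual tracer distributions):

```python
import numpy as np

# Gauss-Legendre quadrature on [-1, 1] makes the projections exact for
# polynomial fields:
nodes, weights = np.polynomial.legendre.leggauss(32)

def legendre_coeff(f_vals, n):
    """a_n = (2n+1)/2 * int_{-1}^{1} f(mu) P_n(mu) dmu, via orthogonality."""
    Pn = np.polynomial.legendre.Legendre.basis(n)(nodes)
    return (2 * n + 1) / 2.0 * np.sum(weights * f_vals * Pn)

field = 1.0 - 0.6 * nodes**2  # hypothetical mixing ratio = 0.8*P0 - 0.4*P2
a0 = legendre_coeff(field, 0)
a2 = legendre_coeff(field, 2)
print(a0, a2)  # -> 0.8, -0.4: the truncated model carries only these numbers
```

    The truncated model then evolves only a handful of such coefficients per tracer and level, which is what makes it so much cheaper than a full two-dimensional model.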

  12. Sensitivity of the Humboldt current system to global warming: a downscaling experiment of the IPSL-CM4 model

    Energy Technology Data Exchange (ETDEWEB)

    Echevin, Vincent [LOCEAN, Paris (France); Goubanova, Katerina; Dewitte, Boris [LEGOS, Toulouse (France); IMARPE, IGP, LEGOS, Lima (Peru); Belmadani, Ali [LOCEAN, Paris (France); LEGOS, Toulouse (France); University of Hawaii at Manoa, IPRC, International Pacific Research Center, SOEST, Honolulu, Hawaii (United States)

    2012-02-15

    The impact of climate warming on the seasonal variability of the Humboldt Current system ocean dynamics is investigated. The IPSL-CM4 large-scale ocean circulation resulting from two contrasting climate scenarios, the so-called preindustrial and quadrupled CO{sub 2} scenarios, is downscaled using an eddy-resolving regional ocean circulation model. The intense surface heating by the atmosphere in the quadrupled CO{sub 2} scenario leads to a strong increase of the surface density stratification, a thinner coastal jet, an enhanced Peru-Chile undercurrent, and an intensification of nearshore turbulence. Upwelling rates respond quasi-linearly to the change in wind stress associated with anthropogenic forcing, and show a moderate decrease in summer off Peru and a strong increase off Chile. Results from sensitivity experiments show that a 50% wind stress increase does not compensate for the surface warming resulting from heat flux forcing and that the associated mesoscale turbulence increase is a robust feature. (orig.)

  13. The role of soil moisture in land surface-atmosphere coupling: climate model sensitivity experiments over India

    Science.gov (United States)

    Williams, Charles; Turner, Andrew

    2015-04-01

    It is generally acknowledged that anthropogenic land use changes, such as a shift from forested land into irrigated agriculture, may have an impact on regional climate and, in particular, rainfall patterns in both time and space. India provides an excellent example of a country in which widespread land use change has occurred during the last century, as the country tries to meet its growing demand for food. Of primary concern for agriculture is the Indian summer monsoon (ISM), which displays considerable seasonal and subseasonal variability. Although it is evident that changing rainfall variability will have a direct impact on land surface processes (such as soil moisture variability), the reverse impact is less well understood. However, the role of soil moisture in the coupling between the land surface and atmosphere needs to be properly explored before any potential impact of changing soil moisture variability on ISM rainfall can be understood. This paper attempts to address this issue, by conducting a number of sensitivity experiments using a state-of-the-art climate model from the UK Meteorological Office Hadley Centre: HadGEM2. Several experiments are undertaken, with the only difference between them being the extent to which soil moisture is coupled to the atmosphere. Firstly, the land surface is fully coupled to the atmosphere, globally (as in standard model configurations); secondly, the land surface is entirely uncoupled from the atmosphere, again globally, with soil moisture values being prescribed on a daily basis; thirdly, the land surface is uncoupled from the atmosphere over India but fully coupled elsewhere; and lastly, vice versa (i.e. the land surface is coupled to the atmosphere over India but uncoupled elsewhere). Early results from this study suggest certain 'hotspot' regions where the impact of soil moisture coupling/uncoupling may be important, and many of these regions coincide with previous studies. Focusing on the third experiment, i

  14. Context Sensitive Modeling of Cancer Drug Sensitivity.

    Directory of Open Access Journals (Sweden)

    Bo-Juen Chen

    Full Text Available Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource towards developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer type is hampered by small sample size. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should (and should not) be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.

  15. Lessening Sensitivity: Student Experiences of Teaching and Learning Sensitive Issues

    Science.gov (United States)

    Lowe, Pam

    2015-01-01

    Despite growing interest in learning and teaching as emotional activities, there is still very little research on experiences of sensitive issues. Using qualitative data from students from a range of social science disciplines, this study investigates students' experiences. The paper highlights how, although they found it difficult and distressing…

  16. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  17. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

    The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After the expansion functions are calculated, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
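
    The hierarchical correlated function expansion described above is in the family of HDMR (high dimensional model representation) methods; a first-order cut-HDMR sketch (toy two-variable function, not one of the ACP models) shows how the expansion functions are built from calls to the original model and assembled into a surrogate:

```python
import numpy as np

def f(x):
    """Original model (hypothetical); the additive part dominates."""
    return np.sin(x[0]) + x[1] ** 2 + 0.1 * x[0] * x[1]

x0 = np.array([0.5, 0.5])  # the "cut point" around which the expansion is built
f0 = f(x0)

def feom(x):
    """First-order cut-HDMR surrogate:
    f(x) ~ f0 + sum_i [ f(x_i, x0 elsewhere) - f0 ].
    Exact for additive models; the error here is the small coupling term."""
    total = f0
    for i in range(len(x)):
        xi = x0.copy()
        xi[i] = x[i]  # vary one input, hold the others at the cut point
        total += f(xi) - f0
    return total

x = np.array([1.2, -0.3])
print(f(x), feom(x))  # close: only the 0.1*x1*x2 coupling is missed
```

    The residual of the first-order surrogate is exactly the second-order coupling term 0.1*(x1 - 0.5)*(x2 - 0.5); adding second-order expansion functions would remove it, which is the "finite number of terms" hierarchy the abstract refers to.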

  18. ATLAS MDT neutron sensitivity measurement and modeling

    International Nuclear Information System (INIS)

    Ahlen, S.; Hu, G.; Osborne, D.; Schulz, A.; Shank, J.; Xu, Q.; Zhou, B.

    2003-01-01

    The sensitivity of the ATLAS precision muon detector element, the Monitored Drift Tube (MDT), to fast neutrons has been measured using a 5.5 MeV Van de Graaff accelerator. The major mechanism of neutron-induced signals in the drift tubes is elastic collision between the neutrons and the gas nuclei. The recoil nuclei lose kinetic energy in the gas and produce the signals. By measuring the ATLAS drift tube neutron-induced signal rate and the total neutron flux, the MDT neutron signal sensitivities were determined for different drift gas mixtures and for different neutron beam energies. We also developed a sophisticated simulation model to calculate the neutron-induced signal rate and signal spectrum for ATLAS MDT operating configurations. The calculations agree with the measurements very well. This model can be used to calculate the neutron sensitivities for different gaseous detectors and for neutron energies above those available to this experiment.
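
    The elastic-collision mechanism described above sets a kinematic cap on the energy a recoiling gas nucleus can absorb: for a neutron (mass ~1 u) scattering from a nucleus of mass number A, the maximum transferred fraction is 4A/(1+A)^2. A quick sketch for constituents of typical drift gases (the element list is illustrative; the abstract does not specify the mixtures):

```python
def max_recoil_fraction(A):
    """Maximum fraction of a neutron's kinetic energy transferred to a
    nucleus of mass number A in one elastic, non-relativistic collision:
    4*A / (1 + A)**2, taking the neutron mass as 1 u."""
    return 4.0 * A / (1.0 + A) ** 2

E_n = 5.5  # MeV, the Van de Graaff beam energy quoted in the abstract
for name, A in [("H", 1), ("C", 12), ("O", 16), ("Ar", 40)]:
    print(f"{name:2s}: up to {max_recoil_fraction(A) * E_n:.2f} MeV recoil")
```

    Heavier nuclei such as argon take away only ~10% of the neutron energy per collision, which is why the recoil spectrum, and hence the signal sensitivity, depends on the gas mixture.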

  19. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F.)

  20. Victimization Experiences and the Stabilization of Victim Sensitivity

    Directory of Open Access Journals (Sweden)

    Mario Gollwitzer

    2015-04-01

    Full Text Available People reliably differ in the extent to which they are sensitive to being victimized by others. Importantly, victim sensitivity predicts how people behave in social dilemma situations: victim-sensitive individuals are less likely to trust others and more likely to behave uncooperatively, especially in socially uncertain situations. This pattern can be explained with the Sensitivity to Mean Intentions (SeMI) model, according to which victim sensitivity entails a specific and asymmetric sensitivity to contextual cues that are associated with untrustworthiness. Recent research is largely in line with the model's predictions, but some issues have remained conceptually unresolved so far. For instance, it is unclear why and how victim sensitivity becomes a stable trait and which developmental and cognitive processes are involved in such stabilization. In the present article, we will discuss the psychological processes that contribute to a stabilization of victim sensitivity within persons, both across the life span (ontogenetic stabilization) and across social situations (actual-genetic stabilization). Our theoretical framework starts from the assumption that experiences of being exploited threaten a basic need, the need to trust. This need is so fundamental that experiences that threaten it receive a considerable amount of attention and trigger strong affective reactions. Associative learning processes can then explain (a) how certain contextual cues (e.g., facial expressions) become conditioned stimuli that elicit equally strong responses, (b) why these contextual untrustworthiness cues receive much more attention than, for instance, trustworthiness cues, and (c) how these cues shape spontaneous social expectations (regarding other people's intentions). Finally, avoidance learning can explain why these cognitive processes gradually stabilize and become a trait: the trait which is referred to as victim sensitivity.

  1. Neutrino Oscillation Parameter Sensitivity in Future Long-Baseline Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bass, Matthew [Colorado State Univ., Fort Collins, CO (United States)

    2014-01-01

    The study of neutrino interactions and propagation has produced evidence for physics beyond the Standard Model and promises to continue to shed light on rare phenomena. Since the discovery of neutrino oscillations in the late 1990s there have been rapid advances in establishing the three-flavor paradigm of neutrino oscillations. The 2012 discovery of a large value for the last unmeasured mixing angle has opened the way for future experiments to search for charge-parity symmetry violation in the lepton sector. This thesis presents an analysis of the future sensitivity to neutrino oscillations in the three-flavor paradigm for the T2K, NOvA, LBNE, and T2HK experiments. The theory of the three-flavor paradigm is explained, and the methods that use these theoretical predictions to design long-baseline neutrino experiments are described. The sensitivity to the oscillation parameters for each experiment is presented, with a particular focus on the search for CP violation and the measurement of the neutrino mass hierarchy. The variations of these sensitivities with statistical considerations and experimental design optimizations are explored. The effects of systematic uncertainties in the neutrino flux, interaction, and detection predictions are also considered by incorporating more advanced simulation inputs from the LBNE experiment.
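
    The oscillation physics behind such sensitivity studies can be sketched with the two-flavor vacuum formula; the experiments in the thesis require full three-flavor calculations with matter effects, and the numbers below are only illustrative.

```python
import numpy as np

def p_oscillation(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavor appearance probability in vacuum:
    P = sin^2(2*theta) * sin^2(1.267 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Baseline of the first oscillation maximum for the atmospheric splitting:
dm2 = 2.5e-3  # eV^2
E = 0.6       # GeV, a T2K-like beam energy
L_max = np.pi / 2 / (1.267 * dm2 / E)
print(L_max)  # ~298 km, close to T2K's 295 km baseline
```

    Long-baseline designs pick L/E near this maximum precisely because the probability, and hence the statistical sensitivity to the mixing parameters, peaks there.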

  2. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are a key component of GCMs, as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and of accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while
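
    The variance decomposition described above can be sketched with the standard Sobol pick-freeze estimator on a toy model (a linear stand-in, not ISBA-TRIP, chosen so the first-order indices are known exactly):

```python
import numpy as np

def model(X):
    """Toy model standing in for a hydrological simulator; being linear,
    its first-order Sobol indices are exactly 16/21, 4/21 and 1/21."""
    return 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]

rng = np.random.default_rng(0)
N, d = 200_000, 3
A, B = rng.random((N, d)), rng.random((N, d))  # two independent sample blocks
yA, yB = model(A), model(B)
var = yA.var()

S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # "freeze" factor i from A, resample the others from B
    S.append(np.mean(yA * (model(ABi) - yB)) / var)  # Saltelli-type estimator
print(np.round(S, 3))  # ~[0.762, 0.190, 0.048]: variance share of each input
```

    The decomposition orders the inputs by their contribution to output variance, which is exactly how the study ranks RRM parameters before designing the assimilation experiments.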

  3. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  4. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F.

  5. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    storage systems, where the actual values of the data are not relevant for the behavior of the system. For many systems the values are important. For instance, the control flow of the system can depend on the input values. We call this type of system data sensitive, as the execution is sensitive...... to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...... efficient model-checking and model-based testing. In the second we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of larger data-sensitive systems. In the third we develop an approach for modeling and model-based...

  6. Sensitivity analysis using DECOMP and METOXA subroutines of the MAAP code in modelling core concrete interaction phenomena and post test calculations for ACE-MCCI experiment L-5

    International Nuclear Information System (INIS)

    Passalacqua, R.A.

    1991-01-01

    A parametric analysis approach was chosen in order to study core-concrete interaction phenomena. The analysis was performed using a stand-alone version of the MAAP-DECOMP model (DOE version) and covered only those parameters known to have the largest effect on thermohydraulics and fission product aerosol release. Even though the main purpose of the effort was model validation, it eventually led to a better understanding of the core-concrete interaction physics and to a more correct interpretation of the ACE-MCCI L5 experimental data. Unusually low heat transfer fluxes from the debris pool to the cavity (the volume surrounding the corium) were modeled in order to obtain a good benchmark against the experimental data; consequently, higher debris pool temperatures were predicted. In the case of water flooding, as a consequence of the critical heat flux through the upper crust and the increase of the crust thickness, the resulting high debris pool temperatures cause an increase in the concrete ablation rate in the short term. The DECOMP model predicts a quick increase of the crust thickness and, as a result, quenching of the molten mass. However, especially for fast transients, crust bridge formation can occur: the upward-directed heat flux is minimized and the concrete erosion rate remains considerable in the long term as well. The model validation is based, in these calculations, on post-test predictions using the MCCI L5 test data; these data are derived from results of the 'Molten Core Concrete Interaction' (MCCI) experiments, which in turn are part of the larger Advanced Containment Experiment (ACE) program. Other calculations were also performed for the newly proposed MACE (Melt Debris Attack and Coolability) experiments simulating water flooding of the cavity. Those calculations are preliminarily compared with the recent MACE scoping test results. (author) 4 tabs., 59 figs., 5 refs

  7. Sensitivities to neutrino electromagnetic properties at the TEXONO experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kosmas, T.S., E-mail: hkosmas@uoi.gr [Division of Theoretical Physics, University of Ioannina, GR 45110 Ioannina (Greece); Miranda, O.G., E-mail: omr@fis.cinvestav.mx [Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN, Apdo. Postal 14-740 07000 Mexico, DF (Mexico); Papoulias, D.K., E-mail: dimpap@cc.uoi.gr [Division of Theoretical Physics, University of Ioannina, GR 45110 Ioannina (Greece); AHEP Group, Instituto de Física Corpuscular – C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, C/Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain); Tórtola, M., E-mail: mariam@ific.uv.es [AHEP Group, Instituto de Física Corpuscular – C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, C/Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain); Valle, J.W.F. [AHEP Group, Instituto de Física Corpuscular – C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, C/Catedratico José Beltrán, 2 E-46980 Paterna (València) (Spain)

    2015-11-12

    The possibility of measuring neutral-current coherent elastic neutrino–nucleus scattering (CENNS) at the TEXONO experiment has opened high expectations towards probing exotic neutrino properties. Focusing on low threshold Germanium-based targets with kg-scale mass, we find a remarkable efficiency not only for detecting CENNS events due to the weak interaction, but also for probing novel electromagnetic neutrino interactions. Specifically, we demonstrate that such experiments are complementary in performing precision Standard Model tests as well as in shedding light on sub-leading effects due to neutrino magnetic moment and neutrino charge radius. This work employs realistic nuclear structure calculations based on the quasi-particle random phase approximation (QRPA) and takes into consideration the crucial quenching effect corrections. Such a treatment, in conjunction with a simple statistical analysis, shows that the attainable sensitivities are improved by one order of magnitude as compared to previous studies.

  8. Triggers for a high sensitivity charm experiment

    International Nuclear Information System (INIS)

    Christian, D.C.

    1994-07-01

    Any future charm experiment clearly should implement an E_T trigger and a μ trigger. In order to reach the 10^8 reconstructed charm level for hadronic final states, a high quality vertex trigger will almost certainly also be necessary. The best hope for the development of an offline-quality vertex trigger lies in further development of the ideas of data-driven processing pioneered by the Nevis/U. Mass. group

  9. Biosphere assessment for high-level radioactive waste disposal: modelling experiences and discussion on key parameters by sensitivity analysis in JNC

    International Nuclear Information System (INIS)

    Kato, Tomoko; Makino, Hitoshi; Uchida, Masahiro; Suzuki, Yuji

    2004-01-01

    In the safety assessment of the deep geological disposal system for high-level radioactive waste (HLW), a biosphere assessment is often necessary to estimate future radiological impacts on human beings (e.g. radiation dose). In order to estimate the dose, the surface environment (biosphere) into which future releases of radionuclides might occur, and the associated future human behaviour, need to be considered. However, for a deep repository, such releases might not occur for many thousands of years after disposal. Over such timescales, it is impossible to predict with any certainty how the biosphere and human behaviour will evolve. To avoid endless speculation aimed at reducing such uncertainty, the 'Reference Biospheres' concept has been developed for use in the safety assessment of HLW disposal. As the aim of JNC's safety assessment of a hypothetical HLW disposal system was to demonstrate the technical feasibility and reliability of the Japanese disposal concept for a range of geological and surface environments, several biosphere models were developed using the 'Reference Biospheres' concept and the BIOMASS Methodology. These models have been used to derive factors that convert the radionuclide flux from the geosphere to the biosphere into a dose (flux-to-dose conversion factors). Moreover, a sensitivity analysis of the parameters in the biosphere models was performed to evaluate and understand their relative importance. It was concluded that transport parameters in the surface environments, the annual amount of food consumption, distribution coefficients on soils and sediments, transfer coefficients of radionuclides to animal products, and concentration ratios for marine organisms have a larger influence on the flux-to-dose conversion factors than any other parameters. (author)

  10. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculation and correlate those calculated values with the sensitivities produced from TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results from a critical experiment are only known as well as the geometric and material properties. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
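    The direct perturbation procedure in this record, shifting each input by its one-sigma uncertainty and combining the resulting changes in k_eff, can be sketched as follows. The response function and the numbers are placeholders for an actual criticality calculation:

    ```python
    import numpy as np

    def k_eff(radius, density):
        # Hypothetical smooth response standing in for a transport calculation;
        # a real analysis would rerun the criticality code at each perturbation.
        return 1.0 - np.exp(-0.08 * radius * density)

    # Nominal values and 1-sigma uncertainties (illustrative numbers only).
    nominal = {"radius": 6.0, "density": 18.7}
    sigma = {"radius": 0.02, "density": 0.05}

    k0 = k_eff(**nominal)
    sens = {}
    total_var = 0.0
    for p, s in sigma.items():
        up, down = dict(nominal), dict(nominal)
        up[p] += s
        down[p] -= s
        dk = (k_eff(**up) - k_eff(**down)) / 2.0   # central difference per 1 sigma
        sens[p] = (dk / k0) / (s / nominal[p])     # relative coefficient (dk/k)/(dp/p)
        total_var += dk ** 2                       # independent terms add in quadrature

    total_uncertainty = total_var ** 0.5
    ```

    The relative coefficients `sens` are the quantities one would compare against TSUNAMI-3D sensitivity profiles; `total_uncertainty` is the overall experimental uncertainty on k_eff under the independence assumption.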

  11. Simulation of High-Latitude Hydrological Processes in the Torne-Kalix Basin: PILPS Phase 2(e). 3; Equivalent Model Representation and Sensitivity Experiments

    Science.gov (United States)

    Bowling, Laura C.; Lettenmaier, Dennis P.; Nijssen, Bart; Polcher, Jan; Koster, Randal D.; Lohmann, Dag; Houser, Paul R. (Technical Monitor)

    2002-01-01

    The Project for Intercomparison of Land Surface Parameterization Schemes (PILPS) Phase 2(e) showed that in cold regions the annual runoff production in Land Surface Schemes (LSSs) is closely related to the maximum snow accumulation, which in turn is controlled in large part by winter sublimation. To help further explain the relationship between snow cover, turbulent exchanges and runoff production, a simple equivalent model (SEM) was devised to reproduce the seasonal and annual fluxes simulated by 13 LSSs that participated in PILPS Phase 2(e). The design of the SEM relates the annual partitioning of precipitation and energy in the LSSs to three primary parameters: snow albedo, effective aerodynamic resistance and evaporation efficiency. Isolation of each of the parameters showed that the annual runoff production was most sensitive to the aerodynamic resistance. The SEM was somewhat successful in reproducing the observed LSS response to a decrease in shortwave radiation and changes in wind speed forcings. SEM parameters derived from the reduced shortwave forcings suggested that increased winter stability suppressed turbulent heat fluxes over snow. Because winter sensible heat fluxes were largely negative, reductions in winter shortwave radiation imply an increase in annual average sensible heat.

  12. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
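    A probabilistic check of confidence in an MDP's recommended policy, in the spirit of the approach above, can be sketched on a toy two-state treat-or-wait model. The states, rewards, and Beta prior below are illustrative assumptions, not the case study from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def optimal_policy(p_success, gamma=0.95):
        """Solve a tiny 2-state, 2-action MDP by value iteration and
        return the optimal action in state 0 (0 = treat, 1 = wait)."""
        # P[a][s][s'] transition probabilities; state 1 is absorbing.
        P = np.array([[[p_success, 1 - p_success], [0.0, 1.0]],   # treat
                      [[0.6, 0.4],                 [0.0, 1.0]]])  # wait
        # R[a][s]: treatment carries an immediate quality-of-life cost.
        R = np.array([[0.7, 0.0],
                      [1.0, 0.0]])
        V = np.zeros(2)
        for _ in range(500):
            Q = R + gamma * P @ V
            V = Q.max(axis=0)
        return int(Q[:, 0].argmax())

    # Base case, then probabilistic sensitivity analysis: sample the uncertain
    # parameter from its (assumed) Beta distribution and count how often the
    # base-case optimal action survives -- the confidence in the policy.
    base_action = optimal_policy(0.85)
    samples = rng.beta(85, 15, size=1000)   # mean 0.85, modest uncertainty
    agree = np.mean([optimal_policy(p) == base_action for p in samples])
    ```

    Sweeping the stakeholders' willingness-to-accept threshold against `agree` computed under varying priors is what the paper's policy acceptability curve summarizes.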

  13. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU (best estimate plus uncertainty) approach. The sensitivity analysis is based on the computation of Sobol' indices making use of a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
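    The meta-model route to Sobol' indices, fitting a cheap surrogate to a handful of expensive code runs and then sampling the surrogate heavily, can be sketched as follows. The "expensive code" here is a made-up quadratic response, not TRACE:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def expensive_code(x):
        # Stand-in for a thermal-hydraulic code run: an output such as peak
        # clad temperature vs. two normalized inputs (purely illustrative).
        return 900 + 150 * x[:, 0] + 60 * x[:, 1] ** 2 + 20 * x[:, 0] * x[:, 1]

    # Step 1: a small design of "code runs" to train the meta-model.
    X = rng.uniform(-1, 1, size=(60, 2))
    y = expensive_code(X)

    # Step 2: fit a quadratic polynomial meta-model by least squares.
    def features(x):
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])
    coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    meta = lambda x: features(x) @ coef

    # Step 3: first-order Sobol' indices from cheap meta-model evaluations
    # (Saltelli pick-freeze estimator).
    n = 50000
    A = rng.uniform(-1, 1, size=(n, 2))
    B = rng.uniform(-1, 1, size=(n, 2))
    yA, yB = meta(A), meta(B)
    V = yA.var()
    S = []
    for i in range(2):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S.append(np.mean(yB * (meta(ABi) - yA)) / V)
    ```

    The 60 training runs are the only "expensive" evaluations; the 150 000 surrogate calls that feed the Sobol' estimator cost essentially nothing, which is the point of the meta-model.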

  14. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Five evapotranspiration (Et) models (the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models) were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...
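    A relative sensitivity coefficient of the kind such studies compute, the fractional change in Et per fractional error in an input, can be illustrated with a simplified Blaney-Criddle formula; the coefficients follow the common FAO-style form and are an assumption here, not taken from the paper:

    ```python
    def blaney_criddle(T, p):
        # Simplified Blaney-Criddle reference Et (mm/day) from mean air
        # temperature T (deg C) and mean daily percentage p of annual
        # daytime hours (illustrative form of the equation).
        return p * (0.46 * T + 8.13)

    def relative_sensitivity(f, x0, p0, dx=1e-6):
        # Dimensionless coefficient S = (dEt/dx) * (x / Et) via a
        # central finite difference on the input of interest.
        et = f(x0, p0)
        det = (f(x0 + dx, p0) - f(x0 - dx, p0)) / (2 * dx)
        return det * x0 / et

    # Sensitivity of Et to temperature at a warm-climate operating point:
    S_T = relative_sensitivity(blaney_criddle, 27.0, 0.27)
    ```

    Here S_T is about 0.6, meaning a 10% error in measured temperature propagates to roughly a 6% error in estimated Et; the same coefficient computed for each input ranks the parameters by how damaging their measurement errors are.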

  15. Quick, sensitive serial NMR experiments with Radon transform.

    Science.gov (United States)

    Dass, Rupashree; Kasprzak, Paweł; Kazimierczuk, Krzysztof

    2017-09-01

    The Radon transform is a potentially powerful tool for processing the data from serial spectroscopic experiments. It makes it possible to decode the rate at which frequencies of spectral peaks shift under the effect of changing conditions, such as temperature, pH, or solvent. In this paper we show how it also improves speed and sensitivity, especially in multidimensional experiments. This is particularly important in the case of low-sensitivity techniques, such as NMR spectroscopy. As an example, we demonstrate how Radon transform processing allows serial measurements of 15N-HSQC spectra of unlabelled peptides that would otherwise be infeasible. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Superconducting gravity gradiometer for sensitive gravity measurements. II. Experiment

    International Nuclear Information System (INIS)

    Chan, H.A.; Moody, M.V.; Paik, H.J.

    1987-01-01

    A sensitive superconducting gravity gradiometer has been constructed and tested. Coupling to gravity signals is obtained by having two superconducting proof masses modulate magnetic fields produced by persistent currents. The induced electrical currents are differenced by a passive superconducting circuit coupled to a superconducting quantum interference device. The experimental behavior of this device has been shown to follow the theoretical model closely in both signal transfer and noise characteristics. While its intrinsic noise level is shown to be 0.07 E Hz^{-1/2} (1 E = 10^{-9} s^{-2}), the actual performance of the gravity gradiometer on a passive platform has been limited to 0.3-0.7 E Hz^{-1/2} due to its coupling to the environmental noise. The detailed structure of this excess noise is understood in terms of an analytical error model of the instrument. The calibration of the gradiometer has been obtained by two independent methods: by applying a linear acceleration and a gravity signal in two different operational modes of the instrument. This device has been successfully operated as a detector in a new null experiment for the gravitational inverse-square law. In this paper we report the design, fabrication, and detailed test results of the superconducting gravity gradiometer. We also present additional theoretical analyses which predict the specific dynamic behavior of the gradiometer and of the test

  17. The mobilisation model and parameter sensitivity

    International Nuclear Information System (INIS)

    Blok, B.M.

    1993-12-01

    In the PRObabilistic Safety Assessment (PROSA) of radioactive waste in a salt repository, one of the nuclide release scenarios is the subrosion scenario. A new subrosion model, SUBRECN, has been developed. In this model the combined effect of depth-dependent subrosion, glass dissolution, and salt rise has been taken into account. The subrosion model SUBRECN and its implementation in the German computer program EMOS4 are presented. A new computer program, PANTER, is derived from EMOS4. PANTER models releases of radionuclides via subrosion from a disposal site in a salt pillar into the biosphere. For the uncertainty and sensitivity analyses of the new subrosion model, Latin Hypercube Sampling has been used to determine the values for the uncertain parameters. The influence of parameter uncertainty on the dose calculations has been investigated with the following sensitivity techniques: Spearman Rank Correlation Coefficients, Partial Rank Correlation Coefficients, Standardised Rank Regression Coefficients, and the Smirnov Test. (orig./HP)
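    The LHS-plus-rank-correlation machinery described above can be sketched in a few lines; the two-parameter dose model is purely illustrative and stands in for the PANTER release chain:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def rank(a):
        # Ordinal ranks 0..n-1 (ties not expected with continuous samples).
        return np.argsort(np.argsort(a))

    def spearman(x, y):
        # Spearman rank correlation = Pearson correlation of the ranks.
        return np.corrcoef(rank(x), rank(y))[0, 1]

    def latin_hypercube(n, bounds):
        """One stratified sample per equal-probability slice, per parameter."""
        d = len(bounds)
        u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
             + rng.uniform(size=(n, d))) / n
        lo, hi = np.array(bounds).T
        return lo + u * (hi - lo)

    def dose_model(x):
        # Toy stand-in for the dose chain: subrosion rate increases the
        # calculated dose, sorption (kd) decreases it (illustrative only).
        subrosion, kd = x[:, 0], x[:, 1]
        return subrosion / (1.0 + 50.0 * kd)

    X = latin_hypercube(200, [(1e-6, 1e-4), (0.01, 1.0)])
    dose = dose_model(X)
    rho = [spearman(X[:, i], dose) for i in range(2)]
    ```

    The sign and magnitude of each `rho` entry give the monotone influence of the corresponding parameter on the dose, which is exactly how Spearman coefficients are read in such assessments.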

  18. Vantage sensitivity: individual differences in response to positive experiences.

    Science.gov (United States)

    Pluess, Michael; Belsky, Jay

    2013-07-01

    The notion that some people are more vulnerable to adversity as a function of inherent risk characteristics is widely embraced in most fields of psychology. This is reflected in the popularity of the diathesis-stress framework, which has received a vast amount of empirical support over the years. Much less effort has been directed toward the investigation of endogenous factors associated with variability in response to positive influences. One reason for the failure to investigate individual differences in response to positive experiences as a function of endogenous factors may be the absence of adequate theoretical frameworks. According to the differential-susceptibility hypothesis, individuals generally vary in their developmental plasticity regardless of whether they are exposed to negative or positive influences--a notion derived from evolutionary reasoning. On the basis of this now well-supported proposition, we advance herein the new concept of vantage sensitivity, reflecting variation in response to exclusively positive experiences as a function of individual endogenous characteristics. After distinguishing vantage sensitivity from theoretically related concepts of differential-susceptibility and resilience, we review some recent empirical evidence for vantage sensitivity featuring behavioral, physiological, and genetic factors as moderators of a wide range of positive experiences ranging from family environment and psychotherapy to educational intervention. Thereafter, we discuss genetic and environmental factors contributing to individual differences in vantage sensitivity, potential mechanisms underlying vantage sensitivity, and practical implications. 2013 APA, all rights reserved

  19. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-a-day forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions, and try to understand the important features that may contribute to the success of the forecast.

  20. Projected sensitivity of the SuperCDMS SNOLAB experiment

    Energy Technology Data Exchange (ETDEWEB)

    Agnese, R.; Anderson, A. J.; Aramaki, T.; Arnquist, I.; Baker, W.; Barker, D.; Basu Thakur, R.; Bauer, D. A.; Borgland, A.; Bowles, M. A.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Calkins, R.; Cartaro, C.; Cerdeño, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Gerbier, G.; Ghaith, M.; Godfrey, G. L.; Golwala, S. R.; Hall, J.; Harris, H. R.; Hofer, T.; Holmgren, D.; Hong, Z.; Hoppe, E.; Hsu, L.; Huber, M. E.; Iyer, V.; Jardin, D.; Jastram, A.; Kelsey, M. H.; Kennedy, A.; Kubik, A.; Kurinsky, N. A.; Leder, A.; Loer, B.; Lopez Asamar, E.; Lukens, P.; Mahapatra, R.; Mandic, V.; Mast, N.; Mirabolfathi, N.; Moffatt, R. A.; Morales Mendoza, J. D.; Orrell, J. L.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Poudel, S.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Roberts, A.; Robinson, A. E.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Serfass, B.; Speller, D.; Stein, M.; Street, J.; Tanaka, H. A.; Toback, D.; Underwood, R.; Villano, A. N.; von Krosigk, B.; Welliver, B.; Wilson, J. S.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, X.; Zhao, X.

    2017-04-07

    SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass (< 10 GeV/c$^2$) particles that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections of ~1 x 10$^{-43}$ cm$^2$ for a dark matter particle mass of 1 GeV/c$^2$, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low energy recoils will be needed to optimize running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced $^{3}$H and naturally occurring $^{32}$Si will be present in the detectors at some level. Even if these backgrounds are ten times higher than expected, the science reach of the HV detectors would be over three orders of magnitude beyond current results for a dark matter mass of 1 GeV/c$^2$. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for dark matter particle masses above 5 GeV/c$^2$. The mix of detector types (HV and iZIP) and targets (germanium and silicon) planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach and understand the backgrounds that the experiment will encounter. Upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the "neutrino floor", where coherent scatters of solar neutrinos become a limiting background.

  1. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport models, i.e., the isospin-dependent Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π−/π+ ratio, and the isospin-sensitive transverse and elliptic flows given by the two transport models with their "best settings" all show clear differences. The discrepancy in the n/p ratio of free nucleons between the two models originates mainly from the different symmetry potentials used, while the discrepancies in the charged π−/π+ ratio and the isospin-sensitive flows originate mainly from different isospin-dependent nucleon–nucleon cross sections. These findings call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the in-medium isospin-dependent nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations.

  2. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

    Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k_eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k_eff discrepancies between libraries were decomposed to specify the sources of difference in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the applicability of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)

  3. The sensitivity of the ESA DELTA model

    Science.gov (United States)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  4. An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2004-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys classic and modern designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs assume a

  5. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model as compared to a conventional time series model indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6%, and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and also because the confidence level decreases as the projection is made farther into the future. (author)

  6. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60% or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) over the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first order estimate at present, and points towards specific processes in need of focused future work.

  7. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.

  8. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer Hougaard

    2012-01-01

    Classification models are becoming increasingly popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting the underlying neural encoding of an experiment defining multiple brain states. In this relation there is a great desire for the researcher to generate brain maps that highlight brain locations of importance to the classifier's decisions. Based on sensitivity analysis, we develop further procedures for model visualization that indicate in which direction the individual locations influence the classification. We illustrate the visualization procedure on real data from a simple functional magnetic resonance imaging experiment.

  9. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  10. Projected sensitivity of the SuperCDMS SNOLAB experiment

    Energy Technology Data Exchange (ETDEWEB)

    Agnese, R.; Anderson, A. J.; Aramaki, T.; Arnquist, I.; Baker, W.; Barker, D.; Basu Thakur, R.; Bauer, D. A.; Borgland, A.; Bowles, M. A.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Calkins, R.; Cartaro, C.; Cerdeño, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Gerbier, G.; Ghaith, M.; Godfrey, G. L.; Golwala, S. R.; Hall, J.; Harris, H. R.; Hofer, T.; Holmgren, D.; Hong, Z.; Hoppe, E.; Hsu, L.; Huber, M. E.; Iyer, V.; Jardin, D.; Jastram, A.; Kelsey, M. H.; Kennedy, A.; Kubik, A.; Kurinsky, N. A.; Leder, A.; Loer, B.; Lopez Asamar, E.; Lukens, P.; Mahapatra, R.; Mandic, V.; Mast, N.; Mirabolfathi, N.; Moffatt, R. A.; Morales Mendoza, J. D.; Orrell, J. L.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Poudel, S.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Roberts, A.; Robinson, A. E.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Serfass, B.; Speller, D.; Stein, M.; Street, J.; Tanaka, H. A.; Toback, D.; Underwood, R.; Villano, A. N.; von Krosigk, B.; Welliver, B.; Wilson, J. S.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, X.; Zhao, X.

    2017-04-01

    SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass particles (with masses ≤ 10 GeV/c^2) that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections ~1×10^-43 cm^2 for a dark matter particle mass of 1 GeV/c^2, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low-energy recoils will be needed to optimize running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced H-3 and naturally occurring Si-32 will be present in the detectors at some level. Even if these backgrounds are 10 times higher than expected, the science reach of the HV detectors would be over 3 orders of magnitude beyond current results for a dark matter mass of 1 GeV/c^2. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for dark matter particles with masses ≳5 GeV/c^2. The mix of detector types (HV and iZIP), and targets (germanium and silicon), planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach, and understand the backgrounds that the experiment will encounter. Upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the “neutrino floor,” where coherent scatters of solar neutrinos become a limiting background.

  11. Vantage Sensitivity: Environmental Sensitivity to Positive Experiences as a Function of Genetic Differences.

    Science.gov (United States)

    Pluess, Michael

    2017-02-01

    A large number of gene-environment interaction studies provide evidence that some people are more likely to be negatively affected by adverse experiences as a function of specific genetic variants. However, such "risk" variants are surprisingly frequent in the population. Evolutionary analysis suggests that genetic variants associated with increased risk for maladaptive development under adverse environmental conditions are maintained in the population because they are also associated with advantages in response to different contextual conditions. These advantages may include (a) coexisting genetic resilience pertaining to other adverse influences, (b) a general genetic susceptibility to both low and high environmental quality, and (c) a coexisting propensity to benefit disproportionately from positive and supportive exposures, as reflected in the recent framework of vantage sensitivity. After introducing the basic properties of vantage sensitivity and highlighting conceptual similarities and differences with diathesis-stress and differential susceptibility patterns of gene-environment interaction, selected and recent empirical evidence for the notion of vantage sensitivity as a function of genetic differences is reviewed. The unique contribution that the new perspective of vantage sensitivity may make to our understanding of social inequality will be discussed after suggesting neurocognitive and molecular mechanisms hypothesized to underlie the propensity to benefit disproportionately from benevolent experiences. © 2015 Wiley Periodicals, Inc.

  12. Sensitivity of system stability to model structure

    Science.gov (United States)

    Hosack, G.R.; Li, H.W.; Rossignol, P.A.

    2009-01-01

    A community is stable, and resilient, if the levels of all community variables can return to the original steady state following a perturbation. The stability properties of a community depend on its structure, which is the network of direct effects (interactions) among the variables within the community. These direct effects form feedback cycles (loops) that determine community stability. Although feedback cycles have an intuitive interpretation, identifying how they form the feedback properties of a particular community can be intractable. Furthermore, determining the role that any specific direct effect plays in the stability of a system is even more daunting. Such information, however, would identify important direct effects for targeted experimental and management manipulation even in complex communities for which quantitative information is lacking. We therefore provide a method that determines the sensitivity of community stability to model structure, and identifies the relative role of particular direct effects, indirect effects, and feedback cycles in determining stability. Structural sensitivities summarize the degree to which each direct effect contributes to stabilizing feedback or destabilizing feedback or both. Structural sensitivities prove useful in identifying ecologically important feedback cycles within the community structure and for detecting direct effects that have strong, or weak, influences on community stability. The approach may guide the development of management intervention and research design. We demonstrate its value with two theoretical models and two empirical examples of different levels of complexity. © 2009 Elsevier B.V. All rights reserved.
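
The question this record poses, how sensitive stability is to each direct effect, can be sketched with first-order eigenvalue perturbation on a community (Jacobian) matrix. The three-variable matrix below is an invented example, not one of the models analyzed in the paper:

```python
import numpy as np

# Hypothetical community matrix: a[i, j] is the direct effect of variable j
# on variable i; negative diagonal entries are self-damping.
A = np.array([[-1.0,  0.5,  0.0],
              [-0.5, -0.2,  0.3],
              [ 0.0, -0.3, -0.4]])

eigvals, right = np.linalg.eig(A)
lead = np.argmax(eigvals.real)
stable = bool(np.all(eigvals.real < 0))  # stable iff all real parts negative

# First-order perturbation theory: with A x = lam x and A^T y = lam y,
# d(lam)/d(a[i, j]) = y_i * x_j / (y^T x).  Large |S[i, j]| flags the direct
# effects whose change moves the leading eigenvalue (and stability) most.
lv, lvecs = np.linalg.eig(A.T)
k = np.argmin(np.abs(lv - eigvals[lead]))  # match the same eigenvalue of A^T
x, y = right[:, lead], lvecs[:, k]
S = np.outer(y, x) / (y @ x)
print(stable, np.abs(S).round(2))
```

A finite-difference check (perturb one entry, re-solve the eigenproblem) confirms the predicted shift of the leading eigenvalue.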

  13. Stress Sensitivity, Aberrant Salience, and Threat Anticipation in Early Psychosis: An Experience Sampling Study.

    Science.gov (United States)

    Reininghaus, Ulrich; Kempton, Matthew J; Valmaggia, Lucia; Craig, Tom K J; Garety, Philippa; Onyejiaka, Adanna; Gayer-Anderson, Charlotte; So, Suzanne H; Hubbard, Kathryn; Beards, Stephanie; Dazzan, Paola; Pariante, Carmine; Mondelli, Valeria; Fisher, Helen L; Mills, John G; Viechtbauer, Wolfgang; McGuire, Philip; van Os, Jim; Murray, Robin M; Wykes, Til; Myin-Germeys, Inez; Morgan, Craig

    2016-05-01

    While contemporary models of psychosis have proposed a number of putative psychological mechanisms, how these impact on individuals to increase intensity of psychotic experiences in real life, outside the research laboratory, remains unclear. We aimed to investigate whether elevated stress sensitivity, experiences of aberrant novelty and salience, and enhanced anticipation of threat contribute to the development of psychotic experiences in daily life. We used the experience sampling method (ESM) to assess stress, negative affect, aberrant salience, threat anticipation, and psychotic experiences in 51 individuals with first-episode psychosis (FEP), 46 individuals with an at-risk mental state (ARMS) for psychosis, and 53 controls with no personal or family history of psychosis. Linear mixed models were used to account for the multilevel structure of ESM data. In all 3 groups, elevated stress sensitivity, aberrant salience, and enhanced threat anticipation were associated with an increased intensity of psychotic experiences. However, elevated sensitivity to minor stressful events (χ2 = 6.3, P = 0.044), activities (χ2 = 6.7, P = 0.036), and areas (χ2 = 9.4, P = 0.009) and enhanced threat anticipation (χ2 = 9.3, P = 0.009) were associated with more intense psychotic experiences in FEP individuals than controls. Sensitivity to outsider status (χ2 = 5.7, P = 0.058) and aberrantly salient experiences (χ2 = 12.3, P = 0.002) were more strongly associated with psychotic experiences in ARMS individuals than controls. Our findings suggest that stress sensitivity, aberrant salience, and threat anticipation are important psychological processes in the development of psychotic experiences in daily life in the early stages of the disorder. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.

  14. Stress Sensitivity, Aberrant Salience, and Threat Anticipation in Early Psychosis: An Experience Sampling Study

    Science.gov (United States)

    Reininghaus, Ulrich; Kempton, Matthew J.; Valmaggia, Lucia; Craig, Tom K. J.; Garety, Philippa; Onyejiaka, Adanna; Gayer-Anderson, Charlotte; So, Suzanne H.; Hubbard, Kathryn; Beards, Stephanie; Dazzan, Paola; Pariante, Carmine; Mondelli, Valeria; Fisher, Helen L.; Mills, John G.; Viechtbauer, Wolfgang; McGuire, Philip; van Os, Jim; Murray, Robin M.; Wykes, Til; Myin-Germeys, Inez; Morgan, Craig

    2016-01-01

    While contemporary models of psychosis have proposed a number of putative psychological mechanisms, how these impact on individuals to increase intensity of psychotic experiences in real life, outside the research laboratory, remains unclear. We aimed to investigate whether elevated stress sensitivity, experiences of aberrant novelty and salience, and enhanced anticipation of threat contribute to the development of psychotic experiences in daily life. We used the experience sampling method (ESM) to assess stress, negative affect, aberrant salience, threat anticipation, and psychotic experiences in 51 individuals with first-episode psychosis (FEP), 46 individuals with an at-risk mental state (ARMS) for psychosis, and 53 controls with no personal or family history of psychosis. Linear mixed models were used to account for the multilevel structure of ESM data. In all 3 groups, elevated stress sensitivity, aberrant salience, and enhanced threat anticipation were associated with an increased intensity of psychotic experiences. However, elevated sensitivity to minor stressful events (χ2 = 6.3, P = 0.044), activities (χ2 = 6.7, P = 0.036), and areas (χ2 = 9.4, P = 0.009) and enhanced threat anticipation (χ2 = 9.3, P = 0.009) were associated with more intense psychotic experiences in FEP individuals than controls. Sensitivity to outsider status (χ2 = 5.7, P = 0.058) and aberrantly salient experiences (χ2 = 12.3, P = 0.002) were more strongly associated with psychotic experiences in ARMS individuals than controls. Our findings suggest that stress sensitivity, aberrant salience, and threat anticipation are important psychological processes in the development of psychotic experiences in daily life in the early stages of the disorder. PMID:26834027

  15. MARKETING MODELS APPLICATION EXPERIENCE

    Directory of Open Access Journals (Sweden)

    A. Yu. Rymanov

    2011-01-01

    Marketing models are used for the assessment of such marketing elements as sales volume, market share, market attractiveness, advertising costs, product promotion and selling, profit, and profitability. A classification of models of buying-decision processes is presented. SWOT- and GAP-based models are best for selling assessments. Lately, there is a tendency to move from assessment on the basis of financial indices to assessment on the basis of non-financial ones. From the marketing viewpoint, the most important are models of long-term company activities and consumer attraction, as well as operative models of market attractiveness.

  16. Modelling Urban Experiences

    DEFF Research Database (Denmark)

    Jantzen, Christian; Vetner, Mikael

    2008-01-01

    How can urban designers develop an emotionally satisfying environment not only for today's users but also for coming generations? Which devices can they use to elicit interesting and relevant urban experiences? This paper attempts to answer these questions by analyzing the design of Zuidas, a new...

  17. Sensitivity of the ATLAS experiment to discover the decay H → ττ → ll+4ν of the Standard Model Higgs Boson produced in vector boson fusion

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, Martin

    2011-05-17

    A study of the expected sensitivity of the ATLAS experiment to discover the Standard Model Higgs boson produced via vector boson fusion (VBF) and its decay to H → ττ → ll+4ν is presented. The study is based on simulated proton-proton collisions at a centre-of-mass energy of 14 TeV. For the first time the discovery potential is evaluated in the presence of additional proton-proton interactions (pile-up) to the process of interest in a complete and consistent way. Special emphasis is placed on the development of background estimation techniques to extract the main background processes, Z → ττ and t anti-t production, using data. The t anti-t background is estimated using a control sample selected with the VBF analysis cuts and the inverted b-jet veto. The dominant background process, Z → ττ, is estimated using Z → μμ events. Replacing the muons of the Z → μμ event with simulated τ-leptons, Z → ττ events are modelled to high precision. For the replacement of the Z boson decay products a dedicated method based on tracks and calorimeter cells is developed. Without pile-up, a discovery potential of 3σ to 3.4σ is found in the mass range from 115 GeV upward. In the presence of pile-up the signal sensitivity decreases to 1.7σ to 1.9σ, mainly caused by the worse resolution of the reconstructed missing transverse energy.

  18. Sensitivity of the ATLAS experiment to discover the decay H→ ττ →ll+4ν of the Standard Model Higgs Boson produced in vector boson fusion

    International Nuclear Information System (INIS)

    Schmitz, Martin

    2011-01-01

    A study of the expected sensitivity of the ATLAS experiment to discover the Standard Model Higgs boson produced via vector boson fusion (VBF) and its decay to H → ττ → ll+4ν is presented. The study is based on simulated proton-proton collisions at a centre-of-mass energy of 14 TeV. For the first time the discovery potential is evaluated in the presence of additional proton-proton interactions (pile-up) to the process of interest in a complete and consistent way. Special emphasis is placed on the development of background estimation techniques to extract the main background processes, Z → ττ and t anti-t production, using data. The t anti-t background is estimated using a control sample selected with the VBF analysis cuts and the inverted b-jet veto. The dominant background process, Z → ττ, is estimated using Z → μμ events. Replacing the muons of the Z → μμ event with simulated τ-leptons, Z → ττ events are modelled to high precision. For the replacement of the Z boson decay products a dedicated method based on tracks and calorimeter cells is developed. Without pile-up, a discovery potential of 3σ to 3.4σ is found in the mass range from 115 GeV upward. In the presence of pile-up the signal sensitivity decreases to 1.7σ to 1.9σ, mainly caused by the worse resolution of the reconstructed missing transverse energy.

  19. Experiment of solidifying photo sensitive polymer by using UV LED

    Science.gov (United States)

    Kang, Byoung Hun; Shin, Sung Yeol

    2008-11-01

    The development of Nano/Micro manufacturing technologies is growing rapidly, and investments in these areas are increasing in the same manner. The applications of Nano/Micro technologies are spreading to semiconductor production technology, biotechnology, environmental engineering, chemical engineering and aerospace. In particular, stereolithography (SLA) is one of the most popular applications; it manufactures 3D-shaped microstructures using a UV laser and photosensitive polymer. To produce the high-accuracy, high-precision microstructure shapes required by diverse industrial fields, information on the interaction between the photo resin and the light source is necessary for further research. The topic of this paper is an experiment on solidifying photosensitive polymer using a UV LED; the purpose of this study is to find out how the reaction of the resin depends on wavelength, light power, and exposure time.

  20. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  1. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
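
Both records describe a variance-based global sensitivity analysis. A minimal sketch of that idea, first-order Sobol indices via the Jansen estimator, is shown below on an invented linear toy model rather than the cellular Potts model; parameter interactions of the kind the paper discusses would show up as total-order indices exceeding these first-order ones.

```python
import numpy as np

def sobol_first_order(model, dim, n=100_000, seed=0):
    """First-order Sobol indices S_i = Var(E[f|x_i]) / Var(f), Jansen estimator."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))  # two independent input sample matrices
    B = rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i taken from B
        # Jansen: var - 0.5 * E[(f(B) - f(ABi))^2] estimates Var(E[f|x_i]),
        # because f(B) and f(ABi) share only the i-th input column.
        S[i] = (var - 0.5 * np.mean((fB - model(ABi)) ** 2)) / var
    return S

# Toy model with a known answer: S = [16, 4, 1] / 21 for uniform inputs.
toy = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
print(sobol_first_order(toy, 3))  # ≈ [0.76, 0.19, 0.05]
```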

  2. HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment

    Science.gov (United States)

    Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham

    2013-01-01

    Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.

  3. Precipitates/Salts Model Sensitivity Calculation

    Energy Technology Data Exchange (ETDEWEB)

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  4. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

    Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian updating scheme adjusts an ensemble of sensitivity measures. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of the LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and their updating when a new piece of evidence arrives, by a Bayesian scheme based on the Bradley–Terry model, for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).
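
Two of the ensemble's measures, the Hellinger distance and the Kullback–Leibler divergence, can be illustrated outside the MELCOR context by scoring an input according to how much freezing it changes the output distribution. Everything below (the toy model, sample sizes, binning) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

def discrete_pdfs(a, b, bins=60):
    """Histogram two samples on a shared grid; return probability vectors."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    return p / p.sum(), q / q.sum()

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def kl(p, q, eps=1e-12):
    p = (p + eps) / (p + eps).sum()  # smooth empty bins before taking logs
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(42)
x1, x2 = rng.normal(size=100_000), rng.normal(size=100_000)
y_all = 3.0 * x1 + 0.3 * x2  # hypothetical code output, both inputs varying
y_no1 = 0.0 * x1 + 0.3 * x2  # influential input frozen at its mean
y_no2 = 3.0 * x1 + 0.0 * x2  # weak input frozen at its mean

p1, q1 = discrete_pdfs(y_all, y_no1)
p2, q2 = discrete_pdfs(y_all, y_no2)
# Freezing x1 distorts the output pdf far more than freezing x2, so both
# measures rank x1 as the more relevant input.
print(hellinger(p1, q1), hellinger(p2, q2))
print(kl(p1, q1), kl(p2, q2))
```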

  5. Sensitivity analyses of the Peach Bottom turbine trip 2 experiment

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2003-01-01

    In the light of the sustained development in computer technology, the possibilities for code calculations in predicting more realistic transient scenarios in nuclear power plants have been enlarged substantially. It has therefore become feasible to perform 'best-estimate' simulations by incorporating three-dimensional modeling of the reactor core into system codes. This method is particularly suited to complex transients that involve strong feedback effects between thermal-hydraulics and kinetics, as well as to transients involving local asymmetric effects. The Peach Bottom turbine trip test is characterized by a prompt core power excursion followed by a self-limiting power behavior. To emphasize and understand the feedback mechanisms involved during this transient, a series of sensitivity analyses were carried out. This should allow the characterization of discrepancies between measured and calculated trends and an assessment of the impact of the thermal-hydraulic and kinetic responses of the models used. On the whole, the data comparison revealed a close dependency of the power excursion on the core feedback mechanisms. Thus, for a better best-estimate simulation of the transient, both the thermal-hydraulic and the kinetic models should be made more accurate. (author)

  6. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of the hazard is 1.39 x, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.
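
    The expectation over competing priors can be illustrated with a small Monte Carlo sketch. The annual event rate below is a hypothetical stand-in (the record does not give its recurrence model), so the numbers only reproduce the qualitative ordering reported above:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

rate, years = 5.5e-6, 10_000   # assumed annual volcanic event rate and horizon

def hazard(p):
    # P(at least one disruption in `years`) averaged over prior samples of p,
    # treating disruptive events as Poisson with rate `rate * p`
    return np.mean(1.0 - np.exp(-rate * years * np.asarray(p)))

# Priors chosen for mathematical convenience
results = {(r, s): hazard(beta(r, s).rvs(200_000, random_state=rng))
           for r, s in [(2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), (1, 1)]}

# "Three-expert" prior: p equally likely at 1e-3, 1e-2 and 1e-1
h_expert = hazard(rng.choice([1e-3, 1e-2, 1e-1], size=200_000))

assert results[(8, 2)] == max(results.values())  # Beta(8, 2) -> highest hazard
assert h_expert < min(results.values())          # expert prior -> lower hazard
```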

  7. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust the biological responses are with respect to changes of biological parameters, and into which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
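
    The contrast between the two families of methods can be made concrete on a toy Michaelis–Menten response; the parameter names, ranges and the crude binning-based first-order index below are illustrative choices, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pathway output: Michaelis-Menten rate v = Vmax * S / (Km + S)
def response(Vmax, Km, S=1.0):
    return Vmax * S / (Km + S)

nominal = {"Vmax": 2.0, "Km": 0.5}

def local_sensitivity(name, h=1e-6):
    # local SA: normalized finite-difference derivative d(ln v)/d(ln theta)
    base = response(**nominal)
    bumped = {**nominal, name: nominal[name] * (1 + h)}
    return (response(**bumped) - base) / (base * h)

def global_first_order(name, n=50_000, bins=50):
    # global SA: crude first-order index Var(E[v | theta]) / Var(v)
    # under large (0.2x to 5x nominal) parameter variations
    samples = {k: rng.uniform(0.2 * v, 5.0 * v, n) for k, v in nominal.items()}
    v = response(**samples)
    order = np.argsort(samples[name])
    cond_means = np.array([c.mean() for c in np.array_split(v[order], bins)])
    return cond_means.var() / v.var()

assert abs(local_sensitivity("Vmax") - 1.0) < 1e-3  # multiplicative parameter
assert local_sensitivity("Km") < 0                  # Km inhibits the response
s_vmax, s_km = global_first_order("Vmax"), global_first_order("Km")
```

    The local indices describe behavior near one operating point, while the global indices apportion output variance under wide parameter variation; the two can rank parameters differently, which is why the choice of approach matters.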

  8. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    The problem of derivation and calculation of sensitivity functions for all parameters of the mass balance reduced model of the COST benchmark activated sludge plant is formulated and solved. The sensitivity functions, equations and augmented sensitivity state space models are derived for the cases of ASM1 and UCT ...

  9. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity due to variations of the initial condition and of the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  10. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
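
    For ideal gases the two mixing rules coincide, so a sketch of their difference needs a real-gas equation of state. A minimal comparison for the same 1:1 He/SF6 mixture, assuming van der Waals pure-component models (constants quoted approximately) and arbitrarily chosen conditions, not the CTH setup of the record:

```python
from scipy.optimize import brentq

R = 0.083145  # L·bar/(mol·K)

# Approximate van der Waals constants (a: L^2·bar/mol^2, b: L/mol) -- illustrative
SPECIES = {"He": (0.0346, 0.0238), "SF6": (7.857, 0.0879)}

def p_vdw(n, v, T, a, b):
    return n * R * T / (v - n * b) - a * n**2 / v**2

def dalton(moles, V, T):
    # Dalton: each gas fills the full volume V; the partial pressures add
    return sum(p_vdw(n, V, T, *SPECIES[s]) for s, n in moles.items())

def amagat(moles, V, T):
    # Amagat: each gas is at the mixture pressure; partial volumes add up to V
    def excess_volume(p):
        vol = sum(brentq(lambda v: p_vdw(n, v, T, *SPECIES[s]) - p,
                         n * SPECIES[s][1] * 1.001, 1e4)
                  for s, n in moles.items())
        return vol - V
    return brentq(excess_volume, 1.0, 200.0)

mix = {"He": 0.5, "SF6": 0.5}       # 1:1 molar mixture, 0.5 mol of each
p_d = dalton(mix, V=1.0, T=300.0)   # ~24 bar
p_a = amagat(mix, V=1.0, T=300.0)   # ~22 bar
assert p_d > p_a > 0  # the rules disagree once real-gas effects matter
```

    The gap between the two predictions comes entirely from the non-ideal terms (here dominated by SF6), which is consistent with the record's finding that the mixing-model choice matters more as shock conditions become more extreme.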

  11. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need of adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of blocked design experiment.

  12. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need of adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of blocked design experiment.

  13. The 'Model Omitron' proposed experiment

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-05-01

    The Model Omitron is a compact tokamak experiment designed by the Fusion Engineering Unit of ENEA and the CITIF Consortium. Building Model Omitron would allow full testing of Omitron engineering, and partial testing of Omitron physics, at about 1/20 of the cost that has been estimated for the larger parent machine. In particular, due to the unusually large ohmic power densities (up to 100 times the nominal value in the Frascati FTU experiment), in Model Omitron the radial energy flux reaches values comparable to or higher than those envisaged for the larger ignition experiments Omitron, Ignitor and ITER. Consequently, conditions are expected to occur at the plasma border, in the scrape-off layer of Model Omitron, that are representative of the quoted larger experiments. Moreover, since all this will occur under ohmic heating alone, one will hopefully be able to derive an energy transport model for the ohmic heating regime that is valid over a range of plasma parameters (in particular, of the temperature parameter) wider than was possible before. Finally, in the Model Omitron experiment, by reducing the plasma current and/or the toroidal field down to, say, 1/3 or 1/4 of the nominal values, additional topics can be tackled, such as: large safety-factor configurations (of interest for improving confinement), large aspect-ratio configurations (of interest for the investigation of advanced concepts in tokamaks), high beta with RF heating (also of interest for the investigation of advanced concepts in tokamaks), and long-pulse discharges (of interest for demonstrating stationary conditions in the current profile).

  14. Complete Sensitivity/Uncertainty Analysis of LR-0 Reactor Experiments with MSRE FLiBe Salt and Perform Comparison with Molten Salt Cooled and Molten Salt Fueled Reactor Models

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nicholas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Powers, Jeffrey J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mueller, Don [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    In September 2016, reactor physics measurements were conducted at Research Centre Rez (RC Rez) using the FLiBe (2 7LiF + BeF2) salt from the Molten Salt Reactor Experiment (MSRE) in the LR-0 low power nuclear reactor. These experiments were intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems using FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL), in collaboration with RC Rez, performed sensitivity/uncertainty (S/U) analyses of these experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objectives of these analyses were (1) to identify potential sources of bias in fluoride salt-cooled and salt-fueled reactor simulations resulting from cross section uncertainties, and (2) to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a final report on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. In the future, these S/U analyses could be used to inform the design of additional FLiBe-based experiments using the salt from MSRE. The key finding of this work is that, for both solid and liquid fueled fluoride salt reactors, radiative capture in 7Li is the most significant contributor to potential bias in neutronics calculations within the FLiBe salt.

  15. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  16. Metals Are Important Contact Sensitizers: An Experience from Lithuania

    Directory of Open Access Journals (Sweden)

    Kotryna Linauskienė

    2017-01-01

    Background. Metals are very frequent sensitizers causing contact allergy and allergic contact dermatitis worldwide; up-to-date data based on patch test results have proved useful for the identification of the problem. Objectives. In this retrospective study, the prevalence of contact allergy to metals (nickel, chromium, palladium, gold, cobalt, and titanium) in Lithuania is analysed. Patients/Methods. Clinical and patch test data of 546 patients patch tested in 2014–2016 in Vilnius University Hospital Santariskiu Klinikos were analysed and compared with previously published data. Results. Almost a third of the tested patients (29.56%) were sensitized to nickel. Younger women were more often sensitized to nickel than older ones (36% versus 22.8%, p=0.0011). Women were significantly more often sensitized to nickel than men (33% versus 6.1%, p<0.0001). Younger patients were more often sensitized to cobalt (11.6% versus 5.7%, p=0.0183). Sensitization to cobalt was related to sensitization to nickel (p<0.0001). Face dermatitis and oral discomfort were related to gold allergy (28% versus 6.9% dermatitis of other parts, p<0.0001). Older patients were patch test positive to gold(I) sodium thiosulfate statistically significantly more often than younger ones (44.44% versus 21.21%, p=0.0281). Conclusions. Nickel, gold, cobalt, and chromium are the leading metal sensitizers in Lithuania. Cobalt sensitization is often accompanied by sensitization to nickel. Sensitivity rates to palladium and nickel indicate possible cross-reactivity. No sensitization to titanium was found.

  17. Development and Sensitivity Analysis of a Fully Kinetic Model of Sequential Reductive Dechlorination in Groundwater

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup

    2011-01-01

    experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most...

  18. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set, and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and on ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
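
    The distinction between an input's full (dependence-included) and independent contributions can be sketched in a linear-Gaussian toy case, where conditional expectations reduce to linear regressions. This is only an illustration of the idea, not the paper's orthogonalisation/ANOVA construction:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Two correlated inputs: x2 shares a common component with x1 (corr = 0.8)
z1, z2 = rng.normal(size=(2, n))
x1 = z1
x2 = 0.8 * z1 + 0.6 * z2

y = x1 + x2          # simple additive model
var_y = y.var()

# Full first-order contribution of x2, including what it shares with x1:
# S2_full = Var(E[y | x2]) / Var(y) = cov(y, x2)^2 / (Var(x2) * Var(y))
S2_full = np.cov(y, x2)[0, 1] ** 2 / (x2.var() * var_y)

# Independent contribution: orthogonalize x2 against x1 first
beta = np.cov(x2, x1)[0, 1] / x1.var()
u2 = x2 - beta * x1  # the part of x2 independent of x1
S2_ind = np.cov(y, u2)[0, 1] ** 2 / (u2.var() * var_y)

# Analytically S2_full = 0.9 and S2_ind = 0.1 in this example
assert S2_full > S2_ind
```

    The large gap between the two indices (0.9 versus 0.1) is exactly the mutual, correlation-borne contribution that the proposed indices are designed to separate out.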

  19. The Sensitivity of State Differential Game Vessel Traffic Model

    Directory of Open Access Journals (Sweden)

    Lisowski Józef

    2016-04-01

    The paper presents the application of the theory of deterministic sensitivity of control systems to the sensitivity analysis of game control systems of moving objects, such as ships, airplanes and cars. The sensitivity of a parametric model of the game ship control process in collision situations is presented. First-order and k-th order sensitivity functions of the parametric model of the process control are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained from computer simulation in Matlab/Simulink software, are presented. Finally, proposals are given regarding the use of sensitivity analysis in the practical synthesis of a computer-aided system supporting the navigator in potential collision situations.

  20. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
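
    The deterministic (Taylor series) and statistical (simulation) routes can be contrasted on a toy performance measure, a product of two uncertain factors with hypothetical moments:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical performance measure: flux q = K * i (conductivity x gradient)
mu_K, sd_K = 1e-5, 2e-6
mu_i, sd_i = 1e-2, 2e-3

# Deterministic approach: first-order Taylor expansion about the means,
# Var(q) ~= (dq/dK)^2 Var(K) + (dq/di)^2 Var(i), derivatives at the mean
var_taylor = (mu_i * sd_K) ** 2 + (mu_K * sd_i) ** 2

# Statistical approach: plain Monte Carlo simulation
n = 200_000
q = rng.normal(mu_K, sd_K, n) * rng.normal(mu_i, sd_i, n)
var_mc = q.var()

# The two agree up to the neglected higher-order term sd_K^2 * sd_i^2 (~2% here)
assert abs(var_mc - var_taylor) / var_taylor < 0.05
```

    For mildly nonlinear models the two approaches nearly coincide, as here; their results diverge as nonlinearity and input variances grow, which is one of the trade-offs the record alludes to.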

  1. Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling

    National Research Council Canada - National Science Library

    Sakhanenko, Nikita A; Luger, George F

    2008-01-01

    We propose a context-sensitive probabilistic modeling system (COSMOS) that reasons about a complex, dynamic environment through a series of applications of smaller, knowledge-focused models representing contextually relevant information...

  2. Oral sensitization to food proteins: A Brown Norway rat model

    NARCIS (Netherlands)

    Knippels, L.M.J.; Penninks, A.H.; Spanhaak, S.; Houben, G.F.

    1998-01-01

    Background: Although several in vivo antigenicity assays using parenteral immunization are operational, no adequate enteral sensitization models are available to study food allergy and allergenicity of food proteins. Objective: This paper describes the development of an enteral model for food

  3. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
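
    The re-initialization rule (reference state reset to the analysis; perturbed state reset to the analysis plus its current anomaly) can be sketched with a scalar toy model in place of the shallow-water model. The drift term standing in for model error, and all the numbers, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def step(x, f, n, drift=0.0):
    # toy nonlinear "climate": relaxation toward sqrt(f); drift = model error
    for _ in range(n):
        x = x + 0.1 * (f - x**2) + drift
    return x

total, sub, drift = 200, 5, 0.02
x0, f_ref, f_pert = 1.0, 0.10, 0.12

# True climate sensitivity (perfect model, no drift)
sens_true = step(x0, f_pert, total) - step(x0, f_ref, total)

# Conventional continuous approach: model error accumulates in both states
sens_cont = step(x0, f_pert, total, drift) - step(x0, f_ref, total, drift)

# Piecewise approach: re-initialize every `sub` steps from analysis data
ref, pert, x_true = x0, x0, x0
for _ in range(total // sub):
    ref = step(ref, f_ref, sub, drift)
    pert = step(pert, f_pert, sub, drift)
    x_true = step(x_true, f_ref, sub)            # truth evolves without drift
    analysis = x_true + rng.normal(0.0, 0.001)   # analysis = truth + small error
    pert = analysis + (pert - ref)               # perturbed keeps its anomaly
    ref = analysis                               # reference reset to analysis
sens_piece = pert - ref

# Re-initialization keeps both states near the truth, improving the sensitivity
assert abs(sens_piece - sens_true) < abs(sens_cont - sens_true)
```

    Because the same analysis value is added to both states, the anomaly (and hence the sensitivity) is preserved across re-initializations while the drift of the background state is repeatedly removed.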

  4. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. A new direct method for calculating the first-order sensitivity coefficients, using sparse matrix technology, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
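
    The direct method that the package implements in FORTRAN can be sketched in a few lines for a single first-order reaction: differentiating the model equation with respect to the rate constant yields the coupled auxiliary sensitivity equation, and both are integrated together (here with SciPy's BDF integrator, a Gear-type method):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model equation y' = -k*y; sensitivity s = dy/dk obeys s' = -y - k*s
k, y0, t_end = 0.5, 1.0, 4.0

def rhs(t, z):
    y, s = z
    return [-k * y, -y - k * s]

sol = solve_ivp(rhs, (0.0, t_end), [y0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-10)
y_end, s_end = sol.y[:, -1]

# Analytic check: y = y0*exp(-k*t) and dy/dk = -t*y0*exp(-k*t)
assert abs(y_end - y0 * np.exp(-k * t_end)) < 1e-5
assert abs(s_end + t_end * y0 * np.exp(-k * t_end)) < 1e-5
```

    For a full mechanism the right-hand side and its Jacobian are generated from the reaction list, and the (sparse) Jacobian is shared between the model and sensitivity systems, which is what makes the direct method economical.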

  5. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU time, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
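
    The joint GLM/GAM metamodelling is beyond a short snippet, but the variance-based Sobol' indices that it ultimately estimates can be sketched with the standard Ishigami test function and the pick-freeze (Saltelli) estimator:

```python
import numpy as np

rng = np.random.default_rng(5)

def ishigami(X):
    # standard test function with known first-order Sobol' indices
    x1, x2, x3 = X.T
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = np.concatenate([yA, yB]).var()

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]   # "pick-freeze": swap in column i from the B sample
    S1.append(np.mean(yB * (ishigami(ABi) - yA)) / var_y)

# Analytical first-order indices are approximately 0.314, 0.442 and 0.0
```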

  6. Coping with drought: the experience of water sensitive urban design ...

    African Journals Online (AJOL)

    This study investigated the extent of Water Sensitive Urban Design (WSUD) activities in the George Municipality in the Western Cape Province, South Africa, and its impact on water consumption. The WSUD approach aims to influence design and planning from the moment rainwater is captured in dams, to when it is treated, ...

  7. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present ... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.

  8. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  9. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
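
    The bootstrap-based convergence criteria can be sketched on a toy model, with a crude squared-correlation proxy standing in for the EET/RSA/VBSA indices of the study; the model and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def model(X):
    # toy model whose true importance ordering is x1 > x2 > x3
    return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 2]

def indices(X, y):
    # crude sensitivity proxy: squared correlation of each input with the output
    return np.array([np.corrcoef(X[:, i], y)[0, 1] ** 2
                     for i in range(X.shape[1])])

def bootstrap_convergence(n, n_boot=300):
    X = rng.uniform(0.0, 1.0, (n, 3))
    y = model(X)
    ref_rank = np.argsort(-indices(X, y))
    agree, top = 0, []
    for _ in range(n_boot):
        k = rng.integers(0, n, n)                 # bootstrap resample
        s = indices(X[k], y[k])
        agree += np.array_equal(np.argsort(-s), ref_rank)
        top.append(s[0])
    # fraction of resamples reproducing the ranking, spread of the top index
    return agree / n_boot, np.std(top)

agree_small, sd_small = bootstrap_convergence(15)
agree_large, sd_large = bootstrap_convergence(5000)
assert sd_large < sd_small  # index estimates tighten as the sample size grows
```

    Ranking typically stabilizes (bootstrap agreement near 1) well before the index values themselves converge, which mirrors the study's finding that value convergence is not a prerequisite for reliable ranking or screening.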

  10. Regional climate model sensitivity to domain size

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)

    2009-05-15

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 x 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)

  11. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  12. Regional climate model sensitivity to domain size

    Science.gov (United States)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
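
The filtering step of this "big brother" protocol can be sketched with a spectral low-pass filter. The hard FFT cutoff and the synthetic field below are illustrative assumptions (operational studies filter real model output, typically with tapered responses):

```python
import numpy as np

rng = np.random.default_rng(1)

def lowpass(field, keep):
    """Emulate coarse-resolution driving data: zero out all spectral
    components beyond total wavenumber `keep` (a hard-cutoff stand-in
    for the Big-Brother filtering step)."""
    F = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    K = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    F[K > keep] = 0.0
    return np.fft.ifft2(F).real

# High-resolution "big brother" field: a large-scale wave plus
# small-scale noise standing in for the fine-scale features.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
bb = np.sin(2 * x)[:, None] + 0.1 * rng.standard_normal((n, n))
fbb = lowpass(bb, keep=4)  # filtered big brother: large scales only

print(bb.std(), fbb.std())  # variance drops once small scales are removed
```

The large-scale pattern survives the filter essentially unchanged while the small-scale variance is removed, which is exactly the property the FBB driving data must have.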

  13. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 x CO{sub 2} sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
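
The ZDM referred to above reduces, at equilibrium, to a one-line calculation. The numbers below are canonical illustrative values, not taken from this paper:

```python
# Sketch of the zero-dimensional model (ZDM): at equilibrium the
# top-of-the-atmosphere budget N = F - lambda_ * dT = 0 gives
# dT = F / lambda_. A stable, finite response requires lambda_ > 0.
F_2xCO2 = 3.7    # W m^-2, canonical 2 x CO2 radiative forcing
lambda_ = 1.2    # W m^-2 K^-1, global-mean radiative response coefficient

dT = F_2xCO2 / lambda_
print(round(dT, 2))  # -> 3.08 K equilibrium sensitivity
```

The two-zone models in the paper complicate this picture by letting tropical and extratropical response coefficients, forcings and the dynamical heat transport enter separately.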

  14. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems
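
What GRESS automates — propagating derivatives of code outputs with respect to code inputs through an existing program — can be illustrated with forward-mode automatic differentiation via dual numbers. GRESS itself performs source-to-source transformation of FORTRAN; the Python sketch below only mirrors the underlying idea on a toy response function:

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    each value carries its derivative along through the arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # Toy response, quadratic in the input datum k (illustrative only)
    return 3 * k * k + 2 * k + 1

x = Dual(2.0, 1.0)   # seed the derivative d/dk = 1
y = model(x)
print(y.val, y.der)  # value 17.0, normalized first derivative dy/dk = 14.0
```

Running the unmodified model code on `Dual` inputs yields exactly the kind of first-derivative sensitivities that GRESS generates by rewriting the FORTRAN source.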

  15. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  16. Sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)

    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  17. Experimental issues in high-sensitivity charm experiments

    International Nuclear Information System (INIS)

    Appel, J.A.

    1994-07-01

    Progress in the exploration of charm physics at fixed target experiments has been prodigious over the last 15 years. The issue before the CHARM2000 Workshop is whether and how this progress can be continued beyond the next fixed target run. An equivalent of 10⁸ fully reconstructed charm decays has been selected as a worthy goal. Underlying all this is the list of physics questions which can be answered by pursuing charm in this way. This paper reviews the experimental issues associated with making this next step. It draws heavily on the experience gathered over the period of rapid progress and, at the end, poses the questions of what is needed and what choices may need to be made

  18. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site

  19. Silicon position sensitive detectors for the Helios (NA 34) experiment

    Energy Technology Data Exchange (ETDEWEB)

    Engels, E Jr; Mani, S; Manns, T; Plants, D; Shepard, P F; Thompson, J A; Tosh, R; Chand, T; Shivpuri, R; Baker, W

    1987-01-15

    The design, construction and testing of X-Y tracking modules for a silicon microstrip vertex detector for use in Fermilab experiment E706 is discussed. A successful adaptation of various technologies, essential for instrumenting this class of detectors at a university laboratory, is described. Emphasis is placed on considerable cost reduction, design flexibility and more rapid turnover with a view toward large detectors for the future.

  20. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    Full Text Available Model evaluation is often performed at only a few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently of ground truth measurements, these analyses are suitable for testing the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for all combinations of these factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on input factors such as topography or ground type. The analysis shows that model evaluation performed at single locations may not be representative of the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity to the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time has an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several

  1. A context-sensitive trust model for online social networking

    CSIR Research Space (South Africa)

    Danny, MN

    2016-11-01

    Full Text Available of privacy attacks. In the quest to address this problem, this paper proposes a context-sensitive trust model. The proposed trust model was designed using fuzzy logic theory and implemented using MATLAB. Contrary to existing trust models, the context...

  2. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
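
The PRCC-on-LHS recipe named in this abstract can be sketched as follows. The three-parameter response and the crude stratified sampler are illustrative stand-ins, not the SWMM model or a production LHS routine:

```python
import numpy as np

rng = np.random.default_rng(2)

def rank(a):
    # Rank-transform each column (ties are ignored; fine for continuous data)
    return a.argsort(axis=0).argsort(axis=0).astype(float)

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y:
    rank-transform, remove the linear effect of the other inputs from
    both the input and the output, then correlate the residuals."""
    Xr, yr = rank(X), rank(y[:, None])[:, 0]
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        ry = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

# Crude Latin hypercube sample: one stratified uniform draw per column,
# shuffled independently per column
n, k = 400, 3
strata = np.tile(np.arange(n, dtype=float), (k, 1)).T
lhs = (rng.permuted(strata, axis=0) + rng.uniform(0, 1, (n, k))) / n
# Toy monotone, noisy response standing in for an SWMM output
y = 5 * lhs[:, 0] + lhs[:, 1] + 0.05 * lhs[:, 2] + 0.5 * rng.standard_normal(n)
p = prcc(lhs, y)
print(np.round(p, 2))  # strong, moderate, and near-zero influence
```

Because PRCC works on ranks, it recovers monotone influence even when the raw input-output relationship is nonlinear, which is the reason the abstract pairs it with mutual information for the non-monotonic cases.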

  3. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    Science.gov (United States)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  4. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the model ice cover, to assess its sensitivity to several model parameters, and to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05-0.1 m, although at individual grid points the thickness decreases by up to 0.5 m. Decreasing the albedo of bare sea ice and of snow by 0.05 relative to the base variant reduces the seasonally varying average ice thickness by 0.2-0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of these changes shows that over a large part of the Arctic Ocean the ice thickness is reduced by up to 1 m, although there is also an area of some increase, generally up to 0.2 m (Beaufort Sea). A 0.05 decrease of the snow albedo alone reduces the average ice thickness by approximately 0.2 m, with only slight seasonal dependence. A further experiment estimates the influence of ocean-ice thermal interaction on the ice cover by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m relative to the base variant; the average ice thickness is then reduced by 0.2-0.35 m, with small seasonal variation. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters

  5. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
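
The two-step procedure described here — Monte Carlo propagation of parameter uncertainty followed by a correlation-based sensitivity measure — can be sketched generically. The multiplicative three-factor transfer below and its lognormal uncertainties are invented for illustration and are not the PATHWAY equations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative multiplicative transfer sketch (NOT the PATHWAY model):
# milk concentration ~ deposition x interception x feed-to-milk transfer,
# with lognormal parameter uncertainty propagated by Monte Carlo.
n = 10_000
deposition = rng.lognormal(mean=0.0, sigma=0.3, size=n)
interception = rng.lognormal(mean=-1.0, sigma=0.4, size=n)
transfer = rng.lognormal(mean=-2.0, sigma=0.5, size=n)

milk = deposition * interception * transfer

# Uncertainty of the prediction: a 95% Monte Carlo interval ...
lo, hi = np.percentile(milk, [2.5, 97.5])
# ... and sensitivity: correlation of each parameter with the output
sens = [np.corrcoef(np.log(p), np.log(milk))[0, 1]
        for p in (deposition, interception, transfer)]
print(round(lo, 3), round(hi, 3), np.round(sens, 2))
```

The same random draws serve both purposes, which is exactly the economy the abstract notes: the values produced for the uncertainty analysis are reused in the correlation analysis for sensitivity.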

  6. Experiments beyond the standard model

    International Nuclear Information System (INIS)

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references

  7. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine

  8. Sensitivity Evaluation of the Daily Thermal Predictions of the AGR-1 Experiment in the Advanced Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Grant Hawkes; James Sterbentz; John Maki

    2011-05-01

    A temperature sensitivity evaluation has been performed for an individual capsule of the AGR-1 fuel experiment. A series of cases was compared to a base case by varying different input parameters to the ABAQUS finite element thermal model. Each input parameter was varied by ±10% to show the temperature sensitivity to that parameter. The most sensitive parameters are the outer control gap distance, the heat rate in the fuel compacts, and the neon gas fraction. The thermal conductivities of the fuel compacts and the graphite holder fell in the middle of the sensitivity ranking. The smallest effects were for the emissivities of the stainless steel, graphite, and thru tubes. Sensitivity calculations were also performed as a function of fluence. These calculations showed a general temperature rise with increasing fluence, a result of the thermal conductivity of the fuel compacts and graphite holder decreasing with fluence.

  9. Enhancing detection sensitivity of SST-1 Thomson scattering experiment

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhari, Vishnu; Patel, Kiran; Thomas, Jinto; Kumar, Ajai, E-mail: ajai@ipr.res.in

    2016-10-15

    The Thomson Scattering System (TSS) is the main diagnostic for extracting the electron temperature and density of the steady state superconducting (SST-1) tokamak plasma. A silicon avalanche photodiode with low-noise, fast signal conditioning electronics (SCE) is used to detect the incoming Thomson-scattered laser photons. A stringent requirement of the measurement is the detection of a high-speed, low-level light signal (about 100 Thomson-scattered photons within a 50 ns pulse width at the active area of the detector) in the presence of wide-band electromagnetic interference (EMI) noise. The electronics and instruments of the various sub-systems in the laboratory contribute radiated and conducted noise to the experiment in a complex manner, which can degrade the resultant signal-to-noise ratio (SNR <1). Conventionally, a repeated-trial method with a flexible grounding scheme is used to improve the system signal-to-noise ratio, which is time consuming and inefficient. In the present work a simple, robust, cost-effective instrumentation system for measurement and monitoring is used, with an improved grounding scheme and shielding method that minimizes noise and isolates internally generated sub-system noise and external interference, leading to an improved SNR.

  10. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The correlation is termed "partial" because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each output to each input, adjusted for all the other inputs. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
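
The SRRC half of the pair can be sketched compactly (PRCC differs only in correlating residuals rather than reading off regression coefficients). The toy response below is illustrative, not an IMM output:

```python
import numpy as np

rng = np.random.default_rng(4)

def srrc(X, y):
    """Standardized rank regression coefficients: rank-transform inputs
    and output, standardize the ranks, and read each input's sensitivity
    (adjusted for all other inputs) off a single least-squares fit."""
    def z(a):
        r = a.argsort(axis=0).argsort(axis=0).astype(float)
        return (r - r.mean(axis=0)) / r.std(axis=0)
    coef, *_ = np.linalg.lstsq(z(X), z(y[:, None])[:, 0], rcond=None)
    return coef

# Toy nonlinear, noisy response with a dominant, a modest and a null input
n = 500
X = rng.uniform(0, 1, (n, 3))
y = np.exp(2 * X[:, 0]) + X[:, 1] + 0.3 * rng.standard_normal(n)
s = srrc(X, y)
print(np.round(s, 2))  # dominant, modest, and near-zero coefficients
```

The rank transform is what lets the linear regression cope with the exponential nonlinearity, the property the abstract invokes for applying SRRC to a nonlinear model like the IMM.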

  11. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies

  12. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  13. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2018-01-01

    to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...

  14. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: experiments and computer simulations

    International Nuclear Information System (INIS)

    McKeever, S.W.S.; Oklahoma State Univ., Stillwater, OK; Boetter-Jensen, L.; Agersnap Larsen, N.; Mejdahl, V.; Poolton, N.R.J.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes believed to be occurring. The computer model used includes both shallow and deep ('hard-to-bleach') traps, OSL ('easy-to-bleach') traps, and radiative and non-radiative recombination centres. The model has previously been used successfully to account for sensitivity changes in quartz due to thermal annealing. The simulations are able to reproduce qualitatively the main features of the experimental results including sensitivity changes as a function of re-use, and their dependence upon bleaching time and laboratory dose. The sensitivity changes are believed to be the result of a combination of shallow trap and deep trap effects. (author)

  15. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: Experiments and computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Bøtter-Jensen, L.; Agersnap Larsen, N.

    1996-01-01

    believed to be occurring. The computer model used includes both shallow and deep ('hard-to-bleach') traps, OSL ('easy-to-bleach') traps, and radiative and non-radiative recombination centres. The model has previously been used successfully to account for sensitivity changes in quartz due to thermal......As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes...... annealing. The simulations are able to reproduce qualitatively the main features of the experimental results including sensitivity changes as a function of reuse, and their dependence upon bleaching time and laboratory dose. The sensitivity changes are believed to be the result of a combination of shallow...

  16. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  17. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig

  18. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Aug 7, 2009 ... Sensitivity study of reduced models of the activated sludge process, for the purposes of parameter estimation and process optimisation: Benchmark process with ASM1 and UCT reduced biological models. S du Plessis and R Tzoneva*. Department of Electrical Engineering, Cape Peninsula University of ...

  19. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  20. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
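The regression-metamodel idea surveyed in the tutorial can be sketched as follows; the `simulate` function and its coefficients are hypothetical stand-ins for a real simulation model.

```python
import numpy as np

# Fit a first-order regression metamodel y ~ b0 + b1*x1 + b2*x2 to
# simulation runs at design points; the fitted coefficients then serve
# as what-if sensitivity estimates. The "simulation" is a stand-in.

rng = np.random.default_rng(0)

def simulate(x1, x2):
    return 4.0 * x1 - 1.5 * x2 + 0.1 * rng.normal()

# 2^2 factorial design in coded units
X = np.array([[a, b] for a in (-1.0, 1.0) for b in (-1.0, 1.0)])
y = np.array([simulate(a, b) for a, b in X])

# Least-squares fit of the intercept and the two main effects
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # roughly [0, 4, -1.5]: the effects rank input importance
```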

  1. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis to evaluate the sensitivity of the WAVEWATCH III model to the selected parameters, to determine how many of these parameters should be considered for further discussion, and to justify the significance priority of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. The procedure adopting optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs based on field observations at two buoys.

  2. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
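A variance-based first-order sensitivity index of the kind used in such analyses can be estimated directly from model samples, as in this sketch; the two-parameter test function is invented for illustration and is unrelated to the sea ice model.

```python
import numpy as np

# First-order variance-based index: S_i = Var( E[y | x_i] ) / Var(y),
# estimated by binning samples of x_i. The model is a stand-in with one
# dominant (linear) input and one weaker (non-linear) input.

rng = np.random.default_rng(1)

def model(x):
    return 5.0 * x[:, 0] + np.sin(2 * np.pi * x[:, 1])

n = 100_000
x = rng.uniform(size=(n, 2))
y = model(x)

def first_order_index(xi, y, bins=50):
    # conditional means of y within equal-probability bins of xi
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

print([round(first_order_index(x[:, i], y), 2) for i in range(2)])
```

For this function the analytic values are about 0.81 and 0.19, since the linear term contributes 25/12 of the total variance and the sine term 1/2.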

  3. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method is also proposed for the uncertainty and sensitivity analysis of a deterministic HIV model.
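The effect the abstract describes shows up already in first-order (delta-method) uncertainty propagation, Var(y) ≈ gᵀΣg, where g is the model gradient at the input means and Σ the input covariance; the linear model and covariance values below are illustrative, not taken from the paper.

```python
import numpy as np

# First-order variance propagation: Var(y) ~ g^T Sigma g. Comparing an
# independent covariance with a correlated one shows how much the
# independence assumption changes the propagated uncertainty.

g = np.array([2.0, 3.0])           # gradient of a linear model y = 2*x1 + 3*x2

cov_indep = np.array([[1.0, 0.0], [0.0, 1.0]])
cov_corr  = np.array([[1.0, 0.5], [0.5, 1.0]])

var_indep = g @ cov_indep @ g      # 4 + 9 = 13
var_corr  = g @ cov_corr  @ g      # 13 + 2 * 2 * 3 * 0.5 = 19
print(var_indep, var_corr)
```

Dropping the off-diagonal terms is exactly the independence assumption whose validity the paper's analytic method lets one assess.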

  4. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    1994-01-01

    The work done on this project focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton family number violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μ ν_e] ∼ 10^-13, will be over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μ ν_e to test the predictions of the V-A theory of weak interactions. In this experiment the uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the present reported value. The detectors are operational, and data taking has begun

  5. Modeling the Experience of Emotion

    OpenAIRE

    Broekens, Joost

    2009-01-01

    Affective computing has proven to be a viable field of research comprised of a large number of multidisciplinary researchers resulting in work that is widely published. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory and models of emergent...

  6. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  7. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
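The machine-learning step can be sketched with a plain logistic regression over binary assay-style features; everything below (the feature generation, coefficients, and labels) is synthetic stand-in data, not the ICCVAM dataset or the published models.

```python
import numpy as np

# Logistic regression on binary "assay call" features to predict a
# sensitizer/non-sensitizer label. Data and weights are synthetic.

rng = np.random.default_rng(11)
n = 200
X = rng.integers(0, 2, size=(n, 3)).astype(float)   # stand-ins for DPRA, h-CLAT, read-across calls
logit = -2.0 + 1.5 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Plain gradient descent on the logistic loss
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * (p - y).mean()

acc = np.mean((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == (y == 1))
print(round(acc, 2))
```

The fitted weights play the same role as the variable-group weights in the paper: they quantify how much each assay contributes to the hazard call.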

  8. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Science.gov (United States)

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  9. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Directory of Open Access Journals (Sweden)

    Sarah A Gerson

    Full Text Available In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  10. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    Science.gov (United States)

    Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  11. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were consistently robust to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject compared to between-subject variability was low, and there was substantial strength of agreement between repeated induction-sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small area' (1st quartile [<25%]) and 'large area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.
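The agreement statistic used in such analyses, the intraclass correlation coefficient, can be computed from a subjects-by-sessions table; this one-way random-effects ICC(1,1) sketch uses synthetic stand-in data, not the study's hyperalgesia areas.

```python
import numpy as np

# One-way random-effects ICC(1,1) from a subjects x sessions table:
# high when stable between-subject differences dominate session noise.
# The data are synthetic stand-ins for measured hyperalgesia areas.

rng = np.random.default_rng(5)
n_subj, n_sess = 30, 2
subject_effect = rng.normal(0, 10, size=(n_subj, 1))   # stable phenotype
data = 50 + subject_effect + rng.normal(0, 4, size=(n_subj, n_sess))

grand = data.mean()
ms_between = n_sess * ((data.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (n_sess - 1))
icc = (ms_between - ms_within) / (ms_between + (n_sess - 1) * ms_within)
print(round(icc, 2))   # approaches 1 as between-subject variance dominates
```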

  12. Climate and climate change sensitivity to model configuration in the Canadian RCM over North America

    Energy Technology Data Exchange (ETDEWEB)

    De Elia, Ramon [Ouranos Consortium on Regional Climate and Adaptation to Climate Change, Montreal (Canada); Centre ESCER, Univ. du Quebec a Montreal (Canada); Cote, Helene [Ouranos Consortium on Regional Climate and Adaptation to Climate Change, Montreal (Canada)

    2010-06-15

    Climate simulations performed with Regional Climate Models (RCMs) have been found to show sensitivity to parameter settings. The origin, consequences and interpretations of this sensitivity are varied, but it is generally accepted that sensitivity studies are very important for a better understanding and a more cautious manipulation of RCM results. In this work we present sensitivity experiments performed on the simulated climate produced by the Canadian Regional Climate Model (CRCM). In addition to climate sensitivity to parameter variation, we analyse the impact of the sensitivity on the climate change signal simulated by the CRCM. These studies are performed on 30-year long simulated present and future seasonal climates, and we have analysed the effect of seven kinds of configuration modifications: CRCM initial conditions, lateral boundary condition (LBC) nesting update interval, driving Global Climate Model (GCM), driving GCM member, large-scale spectral nudging, CRCM version, and domain size. Results show that large changes in both the driving model and the CRCM physics seem to be the main sources of sensitivity for the simulated climate and the climate change. Their effects dominate those of configuration issues, such as the use or not of large-scale nudging, domain size, or LBC update interval. Results suggest that in most cases, differences between simulated climates for different CRCM configurations are not transferred to the estimated climate change signal: in general, these tend to cancel each other out. (orig.)

  13. Active drumming experience increases infants' sensitivity to audiovisual synchrony during observed drumming actions

    NARCIS (Netherlands)

    Gerson, S.A.; Schiavio, A.A.R.; Timmers, R.; Hunnius, S.

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this

  14. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
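The "sloppy" eigenvalue structure described above can be reproduced with a minimal sum-of-exponentials example: the eigenvalues of JᵀJ, where J is the Jacobian of model outputs with respect to parameters, span orders of magnitude when parameters are nearly redundant. The model and rate constants below are illustrative, not drawn from the paper's model collection.

```python
import numpy as np

# Sensitivity spectrum of y(t) = exp(-k1*t) + exp(-k2*t) with nearly
# degenerate rates: one stiff and one sloppy parameter combination.

t = np.linspace(0.1, 5.0, 50)
k1, k2 = 1.0, 1.2

# Analytic Jacobian of the model outputs w.r.t. (k1, k2)
J = np.column_stack([-t * np.exp(-k1 * t), -t * np.exp(-k2 * t)])
eig = np.linalg.eigvalsh(J.T @ J)

# The eigenvalue ratio spans orders of magnitude even for 2 parameters
print(eig, eig.max() / eig.min())
```

Only the stiff combination (roughly k1 + k2) is well constrained by data; the sloppy combination (roughly k1 - k2) is nearly invisible, which is why collective fits constrain predictions far better than individual parameters.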

  15. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  16. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  17. Sensitivity of wildlife habitat models to uncertainties in GIS data

    Science.gov (United States)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  18. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.

    1992-01-01

    The work done on this project was focussed mainly on LAMPF experiment E969, known as the MEGA experiment, a high sensitivity search for the lepton family number violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μ ν_e] ∼ 10^-13, is over two orders of magnitude better than previously reported values. The work done on MEGA during this period was divided between that done at Valparaiso University and that done at LAMPF. In addition, some contributions were made to a proposal to the LAMPF PAC to perform a precision measurement of the Michel ρ parameter, described below

  19. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.

    1993-01-01

    The work done on this project was focused on two LAMPF experiments. The first is the MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μ ν_e] ∼ 10^-13, is over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μ ν_e to test the V-A theory of weak interactions. The uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the present reported value

  20. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    Directory of Open Access Journals (Sweden)

    L. A. Lee

    2011-12-01

    Full Text Available Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process
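The Latin hypercube space-filling design used to choose the emulator's training runs can be sketched generically; the implementation below is a common textbook version, not necessarily the one used in the study.

```python
import numpy as np

# Latin hypercube design: each of the d parameter axes is cut into n
# equal-probability strata; every stratum is sampled exactly once
# (jittered within its stratum, then randomly permuted per column).

def latin_hypercube(n, d, rng):
    u = (rng.uniform(size=(n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(42)
pts = latin_hypercube(8, 3, rng)

# Property check: each column hits every 1/8-wide stratum exactly once
print(np.sort((pts * 8).astype(int), axis=0))
```

The one-sample-per-stratum property is what gives the design its space-filling behaviour at the small sample sizes an expensive model permits.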

  1. Is Convection Sensitive to Model Vertical Resolution and Why?

    Science.gov (United States)

    Xie, S.; Lin, W.; Zhang, G. J.

    2017-12-01

    Model sensitivity to horizontal resolution has been studied extensively, whereas model sensitivity to vertical resolution is much less explored. In this study, we use the US Department of Energy (DOE)'s Accelerated Climate Modeling for Energy (ACME) atmosphere model to examine the sensitivity of clouds and precipitation to an increase in the vertical resolution of the model. We attempt to understand what causes the change in behavior (if any) of convective processes represented by the unified shallow and turbulent scheme named CLUBB (Cloud Layers Unified by Binormals) and the Zhang-McFarlane deep convection scheme in ACME. A short-term hindcast approach is used to isolate parameterization issues from the large-scale circulation. The analysis emphasizes how the change of vertical resolution could affect precipitation partitioning between convective- and grid-scale as well as the vertical profiles of convection-related quantities such as temperature, humidity, clouds, convective heating and drying, and entrainment and detrainment. The goal is to provide physical insight into potential issues with model convective processes associated with the increase of model vertical resolution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  2. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
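
    A point-estimate-plus-response-surface workflow of the kind described can be sketched as follows; the debris miss-distance distribution, strike radius and abort-time dependence below are invented toy numbers, not the study's debris catalog or trajectory model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def strike_probability(delay, n=20000):
        """Toy Monte Carlo point estimate: sample debris miss distances and count
        impacts within a hypothetical crew-module radius. A longer delay between
        abort and breakup is assumed to increase the typical miss distance."""
        miss = rng.normal(loc=50.0 + 10.0 * delay, scale=40.0, size=n)
        return np.mean(miss < 30.0)   # fraction of sampled debris that strikes

    delays = np.linspace(0.0, 5.0, 11)                    # seconds, illustrative
    p = np.array([strike_probability(t) for t in delays])
    coeffs = np.polyfit(delays, p, deg=2)                 # quadratic response surface
    surrogate = np.poly1d(coeffs)
    ```

    The fitted polynomial plays the role of the response surface model: it replaces the expensive Monte Carlo estimate inside the overall ascent abort risk analysis.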

  3. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long run times and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
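
    The variance-based step that RSMSobol accelerates rests on plain Monte Carlo estimation of Sobol indices. As a self-contained illustration (not the Xin'anjiang model or the RSM surrogate), here is the Saltelli first-order estimator applied to the standard Ishigami test function, whose indices are known analytically (approximately 0.31, 0.44, 0):

    ```python
    import numpy as np

    def ishigami(X, a=7.0, b=0.1):
        """Standard test function for sensitivity analysis with known indices."""
        return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
                + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

    def sobol_first_order(f, d, n, rng):
        """Monte Carlo (Saltelli) estimator of first-order Sobol indices
        for inputs uniform on [-pi, pi]^d."""
        A = rng.uniform(-np.pi, np.pi, (n, d))
        B = rng.uniform(-np.pi, np.pi, (n, d))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        s1 = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]   # AB_i: matrix A with column i taken from B
            s1[i] = np.mean(fB * (f(ABi) - fA)) / var
        return s1

    s1 = sobol_first_order(ishigami, d=3, n=20000, rng=np.random.default_rng(0))
    ```

    A meta-model such as the RSM is dropped in for `f` precisely because this estimator needs tens of thousands of evaluations, which a distributed hydrological model cannot afford directly.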

  4. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  5. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable, not missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
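
    Why estimates are sensitive to the assumed missingness mechanism can be seen in a small simulation, a toy illustration rather than the paper's perturbation-manifold machinery: the complete-case mean is unbiased when values are missing completely at random, but biased when missingness depends on the unobserved value itself (the cutoffs and probabilities below are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(loc=10.0, scale=2.0, size=100_000)   # true mean = 10

    # MCAR: each value is missing with a fixed probability, independent of y
    mcar_mask = rng.random(y.size) < 0.3
    # MNAR: larger values are more likely to be missing
    mnar_mask = rng.random(y.size) < np.clip((y - 8.0) / 8.0, 0.05, 0.95)

    mean_mcar = y[~mcar_mask].mean()   # complete-case mean under MCAR
    mean_mnar = y[~mnar_mask].mean()   # complete-case mean under MNAR (biased low)
    ```

    A sensitivity analysis in this spirit perturbs the assumed missingness model and asks how much the resulting estimates move.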

  6. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

    The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people, and the global economy has become more complex than ever before. Mark Buchanan [12] argued that agent-based computer models may help prevent another financial crisis, and such models have been particularly influential in contributing insights. For these reasons, culture-sensitive agents in financial markets have become important. The aim of this article is therefore to establish a culture-sensitive agent and forecast the process of change in herding behavior in the financial market. We base our study on Kirman's Ant Model [4,5] and Hofstede's national culture dimensions [11] to establish our culture-sensitive agent-based model. Kirman's Ant Model is well known and describes financial market herding behavior arising from investors' expectations about the future. Hofstede's study of cultural consequences surveyed IBM staff in 72 different countries to understand cultural differences. As a result, this paper focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, to create a culture-sensitive agent and predict the process of change in herding behavior in the financial market. To conclude, this study will be of importance in explaining herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how people's herding beliefs across different cultures relate to their financial market strategies.
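
    Kirman's model itself is compact enough to state in code. Below is a minimal simulation sketch of the recruitment dynamics (parameter values are illustrative, and the culture-sensitive extension discussed in the article is not included): with a small spontaneous-switching rate, the population herds toward one opinion or the other rather than staying mixed.

    ```python
    import numpy as np

    def kirman(n_agents=100, steps=20_000, eps=0.002, delta=0.01, seed=0):
        """Kirman's recruitment model: k of n_agents hold opinion A. Each step one
        agent may switch spontaneously (rate eps) or be converted by a randomly
        met agent of the other opinion (probability 1 - delta)."""
        rng = np.random.default_rng(seed)
        k = n_agents // 2
        path = np.empty(steps, dtype=int)
        for t in range(steps):
            p_up = (1 - k / n_agents) * (eps + (1 - delta) * k / (n_agents - 1))
            p_dn = (k / n_agents) * (eps + (1 - delta) * (n_agents - k) / (n_agents - 1))
            u = rng.random()
            if u < p_up:
                k += 1
            elif u < p_up + p_dn:
                k -= 1
            path[t] = k
        return path

    path = kirman()
    ```

    When eps is small relative to 1/n_agents, the stationary distribution of k is bimodal, which is the herding regime the article builds on.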

  7. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.

  8. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    2008-02-01

    Full Text Available Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  9. Stress Sensitivity and Psychotic Experiences in 39 Low- and Middle-Income Countries.

    Science.gov (United States)

    DeVylder, Jordan E; Koyanagi, Ai; Unick, Jay; Oh, Hans; Nam, Boyoung; Stickley, Andrew

    2016-11-01

    Stress has a central role in most theories of psychosis etiology, but the relation between stress and psychosis has rarely been examined in large population-level data sets, particularly in low- and middle-income countries. We used data from 39 countries in the World Health Survey (n = 176 934) to test the hypothesis that stress sensitivity would be associated with psychotic experiences, using logistic regression analyses. Respondents in low-income countries reported higher stress sensitivity than those in middle-income countries. Greater stress sensitivity was associated with increased odds for psychotic experiences, even when adjusted for co-occurring anxiety and depressive symptoms: adjusted odds ratio (95% CI) = 1.17 (1.15-1.19) per unit increase in stress sensitivity (range 2-10). This association was consistent and significant across nearly every country studied, and translated into a difference in psychotic experience prevalence ranging from 6.4% among those with the lowest levels of stress sensitivity up to 22.2% among those with the highest levels. These findings highlight the generalizability of the association between psychosis and stress sensitivity in the largest and most globally representative community-level sample to date, and support the targeting of stress sensitivity as a potential component of individual- and population-level interventions for psychosis. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  10. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya

    2017-10-03

    Reliable forecasting of wind power generation is crucial for the optimal control of electricity generation costs with respect to electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.

  11. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya; Kalligiannaki, Evangelia; Tempone, Raul

    2017-01-01

    Reliable forecasting of wind power generation is crucial for the optimal control of electricity generation costs with respect to electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.
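
    A minimal version of the inference step, assuming the simplest parametrized SDE, an Ornstein-Uhlenbeck process rather than the forecast-conditioned models of the paper, uses the exact AR(1) discretization for maximum likelihood:

    ```python
    import numpy as np

    def simulate_ou(theta, mu, sigma, dt, n, rng):
        """Exact simulation of dX = theta*(mu - X) dt + sigma dW."""
        x = np.empty(n)
        x[0] = mu
        a = np.exp(-theta * dt)
        sd = sigma * np.sqrt((1 - a ** 2) / (2 * theta))
        for t in range(1, n):
            x[t] = mu + a * (x[t - 1] - mu) + sd * rng.standard_normal()
        return x

    def fit_ou(x, dt):
        """MLE via the exact AR(1) form X_{t+1} = mu + a*(X_t - mu) + noise."""
        x0, x1 = x[:-1], x[1:]
        a = np.cov(x0, x1)[0, 1] / np.var(x0)
        mu = (x1.mean() - a * x0.mean()) / (1 - a)
        theta = -np.log(a) / dt
        resid = x1 - mu - a * (x0 - mu)
        sigma = resid.std() * np.sqrt(2 * theta / (1 - a ** 2))
        return theta, mu, sigma

    rng = np.random.default_rng(0)
    x = simulate_ou(theta=1.5, mu=0.0, sigma=0.4, dt=0.1, n=50_000, rng=rng)
    theta_hat, mu_hat, sigma_hat = fit_ou(x, dt=0.1)
    ```

    In the wind power setting, X would be the deviation of observed production from the numerical forecast, and the fitted parameters quantify how strongly and how noisily production reverts to the forecast.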

  12. Global sensitivity analysis of GEOS-Chem modeled ozone and hydrogen oxides during the INTEX campaigns

    Directory of Open Access Journals (Sweden)

    K. E. Christian

    2018-02-01

    Full Text Available Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.

  13. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    Science.gov (United States)

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection threshold for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. Moreover, no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable in 3D space, for example, for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics related to stereo blindness. In this paper, a CRT screen was rotated clockwise and anticlockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.

  14. Sensitivity of corneal biomechanical and optical behavior to material parameters using design of experiments method.

    Science.gov (United States)

    Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung

    2018-02-01

    The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.
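
    The design-of-experiments machinery used to rank material parameters can be illustrated with a two-level full factorial design and main-effect estimates. The response function below is a made-up stand-in for the finite element model, with a dominant "matrix" term mimicking the reported finding that matrix stiffness drives the optical response:

    ```python
    import numpy as np
    from itertools import product

    def main_effects(factors, response):
        """Two-level full factorial design: the main effect of a factor is the mean
        response at its high level (+1) minus the mean at its low level (-1)."""
        design = np.array(list(product([-1.0, 1.0], repeat=len(factors))))
        y = np.array([response(dict(zip(factors, row))) for row in design])
        return {f: y[design[:, i] > 0].mean() - y[design[:, i] < 0].mean()
                for i, f in enumerate(factors)}

    # Hypothetical response: displacement dominated by matrix stiffness, with a
    # weaker fiber-stiffness effect and a small matrix-fiber interaction.
    resp = lambda x: (5.0 - 2.0 * x["matrix"] - 0.5 * x["fiber"]
                      + 0.1 * x["matrix"] * x["fiber"])
    effects = main_effects(["matrix", "fiber", "nonlinearity"], resp)
    ```

    Because the design is balanced, the interaction term cancels out of the main effects, which is why factorial screening cleanly separates the individual parameter contributions.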

  15. Stimulus Sensitivity of a Spiking Neural Network Model

    Science.gov (United States)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet almost critical for a range of biologically relevant parameters.
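
    The age-dependent Hawkes processes used here generalize the linear Hawkes process, which can be simulated with Ogata's thinning algorithm. Below is a univariate sketch with an exponential kernel; the parameters are illustrative and chosen subcritical (alpha/beta < 1) so the activity stays finite, loosely mirroring the sub-critical regime discussed in the abstract:

    ```python
    import numpy as np

    def hawkes_ogata(mu, alpha, beta, t_max, rng):
        """Simulate a linear Hawkes process by Ogata's thinning algorithm.
        Conditional intensity: lambda(t) = mu + alpha * sum exp(-beta*(t - t_i))."""
        events = []
        t = 0.0
        while True:
            past = np.array(events)
            # Intensity decays between events, so lambda(t) bounds it going forward.
            lam_bar = mu + alpha * np.sum(np.exp(-beta * (t - past)))
            t += rng.exponential(1.0 / lam_bar)     # candidate event time
            if t > t_max:
                break
            lam_t = mu + alpha * np.sum(np.exp(-beta * (t - past)))
            if rng.random() < lam_t / lam_bar:      # accept with thinning ratio
                events.append(t)
        return np.array(events)

    rng = np.random.default_rng(0)
    events = hawkes_ogata(mu=1.0, alpha=0.5, beta=2.0, t_max=200.0, rng=rng)
    ```

    With branching ratio alpha/beta = 0.25, the expected event count is roughly mu * t_max / (1 - alpha/beta); pushing alpha/beta toward 1 approaches the critical regime where responses to a stimulus become large.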

  16. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus ... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...

  17. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over modeling, in site characterization planning to avoid over collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures
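
    ADGEN worked by instrumenting FORTRAN source so that derivatives of outputs with respect to all inputs are propagated automatically. The underlying idea can be sketched with forward-mode automatic differentiation via dual numbers; the `Dual` class and toy model below are an illustration of the principle, not ADGEN's actual (source-transformation) mechanism:

    ```python
    class Dual:
        """Forward-mode automatic differentiation: carries a value and a derivative."""
        def __init__(self, v, d=0.0):
            self.v, self.d = v, d
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.v + o.v, self.d + o.d)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.v * o.v, self.d * o.v + self.v * o.d)  # product rule
        __rmul__ = __mul__

    def sensitivity(f, x, i):
        """d f / d x_i at x, exact to machine precision (no finite differences)."""
        args = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
        return f(*args).d

    model = lambda a, b, c: a * b + 2.0 * b * c + a * a * c   # toy performance model
    grads = [sensitivity(model, [1.0, 2.0, 3.0], i) for i in range(3)]
    ```

    One forward pass per input yields the full set of sensitivities, which is the capability ADGEN added to existing codes without hand-derived derivative formulas.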

  18. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over modelling," in site characterization planning to avoid "over collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed

  19. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost
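
    The total-effect index introduced here is commonly estimated from the same pair of sample matrices used for first-order indices. A minimal NumPy sketch using Jansen's estimator, ST_i = E[(f(A) - f(AB_i))^2] / (2V), on a hypothetical toy model with an interaction term:

    ```python
    import numpy as np

    def sobol_total_effect(f, bounds, n, rng):
        """Jansen estimator of total-effect Sobol indices for uniform inputs."""
        d = len(bounds)
        lo, hi = np.array(bounds).T
        A = rng.uniform(lo, hi, (n, d))
        B = rng.uniform(lo, hi, (n, d))
        fA = f(A)
        var = np.var(np.concatenate([fA, f(B)]))
        st = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]   # AB_i: matrix A with column i taken from B
            st[i] = np.mean((fA - f(ABi)) ** 2) / (2 * var)
        return st

    # Toy model with an interaction: x0 matters alone and jointly with x1; x2 is inert.
    f = lambda X: X[:, 0] + X[:, 0] * X[:, 1]
    st = sobol_total_effect(f, [(0, 1), (0, 1), (0, 1)], 50_000,
                            np.random.default_rng(0))
    ```

    For x1 the total effect (about 0.13) exceeds its first-order share (about 0.10) because of the synergetic term with x0, which is exactly the information the total-effect index was introduced to capture.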

  20. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600
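
    Tumor development under angiogenic signaling in these models follows a Hahnfeldt-type formulation: Gompertzian growth toward a dynamic vascular carrying capacity. The sketch below integrates one such system with forward Euler; the parameter values are of the order reported in the angiogenesis-modeling literature and the treatment term is an illustrative assumption, not the paper's fitted model:

    ```python
    import numpy as np

    def simulate(lam=0.19, b=5.85, d=0.00873, e=0.0,
                 dt=0.01, days=200, v0=200.0, k0=625.0):
        """Forward-Euler integration of a Hahnfeldt-type tumor/vasculature model:
             dV/dt = -lam * V * ln(V / K)                 (Gompertz growth)
             dK/dt = b*V - d*K*V**(2/3) - e*K            (e: anti-angiogenic dose)"""
        v, k = v0, k0
        for _ in range(int(days / dt)):
            dv = -lam * v * np.log(v / k)
            dk = b * v - d * k * v ** (2.0 / 3.0) - e * k
            v, k = max(v + dt * dv, 1e-6), max(k + dt * dk, 1e-6)
        return v, k

    v_untreated, _ = simulate(e=0.0)   # settles near (b/d)**1.5 mm^3
    v_treated, _ = simulate(e=3.0)     # anti-angiogenic term shrinks the capacity
    ```

    Interrupting neovascularization (the e term) lowers the carrying capacity K, and the tumor volume follows it down, which is the mechanism behind the treatment-target conclusion in the abstract.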

  1. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, T. (Yale Univ., New Haven, CT (United States))

    1994-06-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.

  2. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
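
    The core of a prior sensitivity analysis, refitting under several default priors and comparing posterior summaries, can be sketched with a conjugate normal model (a toy stand-in for BSEM; the priors and sample sizes below are illustrative): the posterior means disagree noticeably with a small sample and converge with a large one.

    ```python
    import numpy as np

    def posterior_mean_normal(y, prior_mean, prior_var, sigma2):
        """Conjugate normal-normal update for the mean of y ~ N(mu, sigma2)."""
        n = len(y)
        post_var = 1.0 / (1.0 / prior_var + n / sigma2)
        return post_var * (prior_mean / prior_var + y.sum() / sigma2)

    rng = np.random.default_rng(0)
    y_small = rng.normal(1.0, 1.0, size=5)      # small sample: prior matters
    y_large = rng.normal(1.0, 1.0, size=5000)   # large sample: prior washes out

    priors = [(0.0, 100.0), (0.0, 1.0), (5.0, 1.0)]   # (prior mean, prior variance)
    small = [posterior_mean_normal(y_small, m, v, 1.0) for m, v in priors]
    large = [posterior_mean_normal(y_large, m, v, 1.0) for m, v in priors]
    ```

    The spread of the estimates across priors is itself the sensitivity diagnostic: when it is large relative to the scientific question, the default prior cannot be trusted blindly.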

  3. Sensitivity of the SHiP experiment to a light scalar particle mixing with the Higgs

    CERN Document Server

    Lanfranchi, Gaia

    2017-01-01

    This conceptual study shows the ultimate sensitivity of the SHiP experiment for the search of a light scalar particle mixing with the Higgs for a dataset corresponding to 5 years of SHiP operation at a nominal intensity of 4×10^13 protons on target per second. The sensitivity as a function of the length of the vessel and of its distance from the target as well as a function of the background contamination is also studied.

  4. The PIENU experiment at TRIUMF : a sensitive probe for new physics

    International Nuclear Information System (INIS)

    Malbrunot, Chloe; Bryman, D A; Hurst, C; Aguilar-Arevalo, A A; Aoki, M; Ito, N; Kuno, Y; Blecher, M; Britton, D I; Chen, S; Ding, M; Comfort, J; Doornbos, J; Doria, L; Gumplinger, P; Kurchaninov, L; Hussein, A; Igarashi, Y; Kettell, S; Littenberg, L

    2011-01-01

    Study of rare decays is an important approach for exploring physics beyond the Standard Model (SM). The branching ratio of the helicity-suppressed pion decays, R = Γ(π+ → e+ν_e + π+ → e+ν_e γ) / Γ(π+ → μ+ν_μ + π+ → μ+ν_μ γ), is one of the most accurately calculated decay processes involving hadrons and has so far provided the most stringent test of the hypothesis of electron-muon universality in weak interactions. The branching ratio has been calculated in the SM to better than 0.01% accuracy to be R_SM = 1.2353(1) × 10^-4. The PIENU experiment at TRIUMF, which started taking physics data in September 2009, aims to reach an accuracy five times better than the previous experiments, so as to confront the theoretical calculation at the level of ±0.1%. If a deviation from R_SM is found, 'new physics' beyond the SM, at potentially very high mass scales (up to 1000 TeV), could be revealed. Alternatively, sensitive constraints can be obtained on hypothetical pseudoscalar or scalar interactions. So far, 4 million π+ → e+ν_e events have been accumulated by PIENU. This paper will outline the physics motivations, describe the apparatus and techniques designed to achieve high precision and present the latest results.

  5. Modeling of microgravity combustion experiments

    Science.gov (United States)

    Buckmaster, John

    1995-01-01

    This program started in February 1991 and is designed to improve our understanding of basic combustion phenomena by modeling various configurations undergoing experimental study by others. Results through 1992 were reported at the second workshop. Work since that time has examined the following topics: flame balls; intrinsic and acoustic instabilities in multiphase mixtures; radiation effects in premixed combustion; and smouldering, both forward and reverse, as well as two-dimensional smoulder.

  6. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  8. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of a mathematical model's predictions is a prerequisite to its utilization. A multiphase porous-media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis with respect to the microwave power level shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have significant effects on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeabilities of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.

  9. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  10. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
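
    The variance-based Sobol' indices at the core of this method can be sketched with a pick-freeze estimator on a toy scalar model whose indices are known analytically; the model below is purely illustrative, not the hydrogeological code from the record:

```python
import numpy as np

# Pick-freeze (Sobol') estimate of first-order indices for a toy model
# y = x1 + 2*x2 with x1, x2 ~ U(0,1) i.i.d.
# Analytically: Var(y) = 5/12, S1 = 0.2, S2 = 0.8.
rng = np.random.default_rng(0)
n = 200_000

def model(x):
    return x[:, 0] + 2.0 * x[:, 1]

A = rng.random((n, 2))
B = rng.random((n, 2))
yA = model(A)
var = yA.var()

S = []
for i in range(2):
    AB = B.copy()
    AB[:, i] = A[:, i]                      # freeze input i at the A values
    S.append(np.cov(yA, model(AB))[0, 1] / var)

print(S)  # ≈ [0.2, 0.8]
```

In the meta-model setting of the paper, `model` would be replaced by a cheap Gaussian-process surrogate of the wavelet coefficients rather than the expensive simulator itself.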

  11. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

    Through the last decade the experience economy has found solid ground and manifested itself as a parameter where business and organizations can differentiate from competitors. The fundamental premise is the one found in Pine & Gilmores model from 1999 over 'the progression of economic value' where...... produced, designed or staged experience that gains the most profit or creates return of investment. It becomes more obvious that other parameters in the future can be a vital part of the experience economy and one of these is business model innovation. Business model innovation is about continuous...

  12. The sensitivity of catchment runoff models to rainfall data at different spatial scales

    Directory of Open Access Journals (Sweden)

    V. A. Bell

    2000-01-01

    Full Text Available The sensitivity of catchment runoff models to rainfall is investigated at a variety of spatial scales using data from a dense raingauge network and weather radar. These data form part of the HYREX (HYdrological Radar EXperiment) dataset. They encompass records from 49 raingauges over the 135 km2 Brue catchment in south-west England, together with 2 and 5 km grid-square radar data. Separate rainfall time-series for the radar and raingauge data are constructed on 2, 5 and 10 km grids, and as catchment average values, at a 15 minute time-step. The sensitivity of the catchment runoff models to these grid scales of input data is evaluated on selected convective and stratiform rainfall events. Each rainfall time-series is used to produce an ensemble of modelled hydrographs in order to investigate this sensitivity. The distributed model is shown to be sensitive to the locations of the raingauges within the catchment and hence to the spatial variability of rainfall over the catchment. Runoff sensitivity is strongest during convective rainfall, when a broader spread of modelled hydrographs results, with twice the variability of that arising from stratiform rain. Sensitivity to rainfall data and model resolution is explored and, surprisingly, best performance is obtained using a lower resolution of rainfall data and model. Results from the distributed catchment model, the Simple Grid Model, are compared with those obtained from a lumped model, the PDM. Performance from the distributed model is found to be only marginally better during stratiform rain (R2 of 0.922 compared to 0.911) but significantly better during convective rain (R2 of 0.953 compared to 0.909). The improved performance from the distributed model can, in part, be credited to the excellence of the dense raingauge network, which would not be the norm for operational flood warning systems.
In the final part of the paper, the effect of rainfall resolution on the performance of the 2 km distributed
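
    The R2 skill scores quoted above for hydrograph fits are conventionally computed as the Nash-Sutcliffe efficiency (the record does not state the exact definition the authors used, so this is an assumption); a minimal sketch:

```python
import numpy as np

# Nash-Sutcliffe efficiency, often reported as R^2 in rainfall-runoff
# studies: 1 = perfect fit, 0 = no better than predicting the observed mean.
def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([1.0, 3.0, 7.0, 4.0, 2.0])          # toy hydrograph (m3/s)
print(nash_sutcliffe(obs, obs))                     # 1.0 for a perfect simulation
print(nash_sutcliffe(obs, np.full(5, obs.mean())))  # 0.0 for the mean predictor
```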

  13. Modelling survival: exposure pattern, species sensitivity and uncertainty.

    Science.gov (United States)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G

    2016-07-06

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
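
    The stochastic-death variant of GUTS (GUTS-SD) can be sketched with a few lines of explicit time stepping: scaled damage follows first-order toxicokinetics, and a hazard accumulates whenever the damage exceeds a threshold. All parameter names and values below are illustrative placeholders, not the calibrated Gammarus pulex values from the paper:

```python
import math

# Minimal GUTS-SD sketch: dD/dt = kd*(Cext - D); hazard rate
# h = kk*max(0, D - z) + hb; survival S(t) = exp(-integral of h).
def guts_sd_survival(c_ext, dt, kd=0.5, kk=0.2, z=1.0, hb=0.0):
    """Survival probabilities over a time-variable exposure series c_ext."""
    d, cumulative_hazard, surv = 0.0, 0.0, []
    for c in c_ext:
        d += kd * (c - d) * dt                        # toxicokinetics (Euler)
        cumulative_hazard += (kk * max(0.0, d - z) + hb) * dt
        surv.append(math.exp(-cumulative_hazard))
    return surv

pulse = [5.0] * 50 + [0.0] * 50                       # one exposure pulse
s = guts_sd_survival(pulse, dt=0.1)
print(s[-1])
```

Because the hazard depends on the internal damage rather than the external concentration directly, the same organism parameters produce different survival for pulsed versus constant exposure profiles, which is exactly the property the paper exploits.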

  14. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, in which the output of one component is passed as input to the next. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute-force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to brute-force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
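
    The saving in such adjoint transfer comes from propagating adjoint vectors rather than full Jacobians: for a scalar response, one vector-Jacobian product per component recovers the end-to-end sensitivity. A linear toy chain (hypothetical Jacobians, not the paper's method in detail) makes the equivalence explicit:

```python
import numpy as np

# Two-component chain: x -> f -> w -> g -> scalar response R.
# Brute force forms the full Jacobian product; the adjoint pass pulls a
# single vector backward through each component.
rng = np.random.default_rng(1)
J_f = rng.random((4, 3))      # Jacobian of component 1 (4 outputs, 3 inputs)
J_g = rng.random((1, 4))      # Jacobian of component 2 (scalar response)

# Brute-force sensitivity: full Jacobian product, shape (1, 3).
full = J_g @ J_f

# Adjoint transfer: seed with dR/dR = 1 and propagate backward.
wbar = J_g.T @ np.ones(1)     # adjoint of the intermediate outputs w
xbar = J_f.T @ wbar           # dR/dx, sensitivities w.r.t. first inputs

print(xbar)                   # equals full.ravel()
```

With many intermediate quantities and few responses, the backward pass needs only as many sweeps as there are responses, which is the minimum the paper targets.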

  15. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and vertical distribution are crucial to the simulated concentration and deposition fields, as is the choice of the Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and the Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence is also evident over the Mediterranean, the North Sea and the Baltic Sea, and some influence is seen over continental Europe, while the difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3 / OH oxidation mechanism.
The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  16. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used to evaluate microbial pollution levels. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce microbial pollution in ecosystems and water sources, and thus help to predict the risk of food- and waterborne diseases. In this study we performed a sensitivity analysis of the KINEROS/STWIR model, developed to predict FIO transport out of manured fields to other fields and water bodies, in order to identify the input variables that control the transport uncertainty. The distributions of model input parameters were set to encompass values found in three-year experiments at the USDA-ARS OPE3 experimental site in Beltsville and in publicly available information. Sobol' indices and complementary regression trees were used to perform a global sensitivity analysis of the model and to explore the interactions between model input parameters affecting the proportion of FIOs removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence on model behavior, whereas soil and manure properties ranked lower. The field length had only a moderate effect on the sensitivity of the model output to the model inputs. Among the manure-related properties, the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. This underscores the need to better characterize FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, the results indicate the opportunity of obtaining large-scale estimates of FIO

  17. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum-level models for the so-called equation of state (the constitutive model for the spherical part of the Cauchy stress tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  18. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    Science.gov (United States)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach in real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
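
    The idea of an incomplete sensitivity is that for a cost J(a) = j(a, u(a)), the full gradient dJ/da = ∂j/∂a + (∂j/∂u)(du/da) is approximated by the direct term ∂j/∂a alone, which is cheap because it avoids differentiating the flow state. A hypothetical toy functional (chosen so the state dependence is weak, as the physical assumptions in the paper are meant to guarantee) illustrates when this works:

```python
import math

# Toy "incomplete sensitivity": keep only the direct design-variable term
# of the gradient, freezing the state u. Model and numbers are illustrative.
def u(a):                 # state with deliberately weak design dependence
    return 1.0 + 0.01 * math.sin(a)

def j(a, ua):             # cost functional j(a, u(a))
    return (a - 2.0) ** 2 * ua

def full_gradient(a, h=1e-6):
    # Central finite difference of the full composition J(a) = j(a, u(a)).
    return (j(a + h, u(a + h)) - j(a - h, u(a - h))) / (2 * h)

def incomplete_gradient(a):
    return 2.0 * (a - 2.0) * u(a)      # d(j)/da with the state frozen

a = 0.5
print(full_gradient(a), incomplete_gradient(a))
```

Because the incomplete gradient keeps the sign and most of the magnitude of the true gradient, a Newton-type descent driven by it still converges, which is the premise the reduced models are used to reinforce.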

  19. Uncertainty and sensitivity analysis of biokinetic models for radiopharmaceuticals used in nuclear medicine

    International Nuclear Information System (INIS)

    Li, W. B.; Hoeschen, C.

    2010-01-01

    Mathematical models for the kinetics of radiopharmaceuticals in humans were developed and are used by the International Commission on Radiological Protection and the Medical Internal Radiation Dose (MIRD) Committee to estimate the radiation absorbed dose for patients in nuclear medicine. However, due to the fact that the residence times used were derived from different subjects, partially even with different ethnic backgrounds, a large variation in the model parameters propagates to a high uncertainty in the dose estimation. In this work, a method was developed for analysing the uncertainty and sensitivity of the biokinetic models that are used to calculate the residence times. The biokinetic model of 18F-FDG (FDG) developed by the MIRD Committee was analysed by this method. The sources of uncertainty of all model parameters were evaluated based on the experiments. The Latin hypercube sampling technique was used to sample the parameters for model input. Kinetic modelling of FDG in humans was performed. The sensitivity of the model parameters was assessed by combining the model input and output, using regression and partial correlation analyses. The transfer rate parameter from plasma to the fast 'other tissue' compartment is the parameter with the greatest influence on the residence time of plasma. Optimisation of biokinetic data acquisition in clinical practice by exploiting the sensitivity of the model parameters obtained in this study is discussed. (authors)
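
    The Latin hypercube step used here can be sketched in a few lines: each dimension is split into n equal-probability strata and exactly one sample falls in each, giving better coverage than plain Monte Carlo for the same sample count. The log-uniform prior at the end is a hypothetical example, not the FDG model's actual parameter distribution:

```python
import numpy as np

# Minimal Latin hypercube sampler: n samples in d dimensions on [0, 1),
# one sample per equal-probability stratum in every dimension.
def latin_hypercube(n, d, rng):
    u = rng.random((n, d))                            # jitter within strata
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + u) / n

rng = np.random.default_rng(42)
samples = latin_hypercube(100, 3, rng)

# Map one column to a hypothetical log-uniform prior for a transfer rate:
k_plasma = 10.0 ** (-2 + 2 * samples[:, 0])           # 0.01 ... 1.0 (1/min)
print(samples.shape, k_plasma.min(), k_plasma.max())
```

The sampled parameter sets are then run through the compartmental model, and regression or partial correlation of inputs against residence times ranks the parameters, as in the paper.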

  20. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm to m and more, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of the IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a “two-way road” as far as the accuracy of the data is concerned, requiring close interactions of the AMO and plasma modeling communities.

  1. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    Science.gov (United States)

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insight into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary

  2. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?: NUDGING AND MODEL SENSITIVITIES

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Wan, Hui [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Zhang, Kai [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA

    2016-07-10

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on three factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and cloud, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise and reasonably reproduces the response of precipitation and cloud forcing to the parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
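
    Nudging itself is just a Newtonian relaxation term added to the model tendency, dx/dt = f(x) + (x_ref - x)/τ. A toy scalar model (all values illustrative, not CAM5) shows how the relaxation time scale τ controls how tightly the run is pinned to the reference trajectory:

```python
import math

# Toy nudged integration: the model tendency is -0.5*x, the reference
# "analysis" is sin(t), and the nudging term relaxes x toward it.
def run(tau, dt=0.01, steps=2000):
    x = 0.0
    for k in range(steps):
        t = k * dt
        x_ref = math.sin(t)                        # prescribed reference state
        tendency = -0.5 * x                        # toy model dynamics
        x += (tendency + (x_ref - x) / tau) * dt   # nudged Euler update
    return x, math.sin(steps * dt)

x_strong, ref = run(tau=0.05)   # short relaxation time: tightly constrained
x_weak, _ = run(tau=50.0)       # long relaxation time: nearly free-running
print(abs(x_strong - ref), abs(x_weak - ref))
```

The trade-off in the paper is visible even here: strong nudging suppresses internal variability but also overrides part of the model's own tendency, which is why a 6-hour relaxation can distort strongly perturbed convection.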

  3. Modeling of laser-driven hydrodynamics experiments

    Science.gov (United States)

    di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul

    2017-10-01

    Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively steady shock conditions, in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition.

  4. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Commonly among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill's, sigmoid), thereby embodying biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system behavior at altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests ways to overcome the estimation uncertainties.
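
    The link between saturating response functions and sloppiness is easy to see numerically: in the saturated regime, the derivative of a Hill function with respect to its half-saturation constant nearly vanishes, so data collected there barely constrain that parameter. A toy check (not the paper's gene-circuit model):

```python
# Local sensitivity of a Hill response F(c) = c^n / (K^n + c^n) to the
# half-saturation constant K, at half-saturation vs deep saturation.
def hill(c, K, n):
    return c**n / (K**n + c**n)

def dF_dK(c, K, n, h=1e-6):
    # Central finite-difference derivative with respect to K.
    return (hill(c, K + h, n) - hill(c, K - h, n)) / (2 * h)

K, n = 1.0, 4
s_mid = abs(dF_dK(c=1.0, K=K, n=n))    # at half-saturation: K is informative
s_sat = abs(dF_dK(c=10.0, K=K, n=n))   # deep saturation: K is "sloppy"
print(s_mid, s_sat)
```

A fit dominated by saturated-regime data therefore returns K with huge uncertainty, and predictions at altered inputs (where the system leaves saturation) inherit that ambiguity.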

  5. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor-importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
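
    The OAT failure mode the authors criticise is easy to demonstrate on a toy model: for a purely interactive model, moving one factor at a time away from the nominal point detects nothing, while a variance-based view over the full input space does (illustrative example, not from the paper):

```python
import numpy as np

# y = x1 * x2 with x1, x2 in [-1, 1]: OAT around the centre (0, 0) sees
# zero effect for each factor, yet the output varies over the input space.
def model(x1, x2):
    return x1 * x2

# OAT screening around the nominal point (0, 0):
oat_effects = [model(1, 0) - model(-1, 0), model(0, 1) - model(0, -1)]

# Global (variance-based) view over the whole input domain:
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100_000, 2))
global_variance = model(x[:, 0], x[:, 1]).var()

print(oat_effects, global_variance)  # OAT sees nothing; variance ≈ 1/9
```

Variance-based measures such as Sobol' indices attribute this output variance to factors and their interactions, which is why the paper recommends them over OAT for any model not proven linear.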

  6. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything but very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the ranking of factors by importance univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
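    A minimal Monte Carlo sketch of why OAT can fail on a nonlinear model while a variance-based measure does not. The model below is a hypothetical pure-interaction function (not from the paper): OAT excursions around the nominal point report zero effect for both factors, whereas an unnormalized total-effect estimate (in the spirit of Jansen's estimator) correctly flags both as influential:

```python
import random

random.seed(0)

def model(x1, x2):
    # Pure interaction term: each factor alone looks inert at the origin
    return x1 * x2

# --- OAT around the nominal point (0, 0): both factors appear to do nothing
oat_x1 = model(1.0, 0.0) - model(-1.0, 0.0)   # 0.0
oat_x2 = model(0.0, 1.0) - model(0.0, -1.0)   # 0.0

# --- Variance-based total-effect estimate (brute-force Monte Carlo):
# average squared change in the output when only one factor is resampled
N = 20000
def total_effect(which):
    acc = 0.0
    for _ in range(N):
        x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
        if which == 1:
            acc += (model(x1, x2) - model(random.uniform(-1, 1), x2)) ** 2
        else:
            acc += (model(x1, x2) - model(x1, random.uniform(-1, 1))) ** 2
    return acc / (2 * N)

ST1, ST2 = total_effect(1), total_effect(2)
print(oat_x1, oat_x2, round(ST1, 3), round(ST2, 3))  # OAT: 0, 0; ST: > 0
```

For this model the analytic value of each unnormalized total effect is 1/9 ≈ 0.111, so the Monte Carlo estimates land near that while OAT sees nothing — the kind of failure the review above warns about.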

  7. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh, is assessed in the main stages of the life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.

  8. Modeling experiments using quantum and Kolmogorov probability

    International Nuclear Information System (INIS)

    Hess, Karl

    2008-01-01

    Criteria are presented that permit a straightforward partition of experiments into sets that can be modeled using both quantum probability and the classical probability framework of Kolmogorov. These new criteria concentrate on the operational aspects of the experiments and lead beyond the commonly appreciated partition by relating experiments to commuting and non-commuting quantum operators as well as non-entangled and entangled wavefunctions. In other words the space of experiments that can be understood using classical probability is larger than usually assumed. This knowledge provides advantages for areas such as nanoscience and engineering or quantum computation.

  9. Azimuthally sensitive Hanbury Brown-Twiss interferometry measured with the ALICE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Gramling, Johanna Lena

    2011-07-01

    Bose-Einstein correlations of identical pions emitted in high-energy particle collisions provide information about the size of the source region in space-time. If analyzed via HBT interferometry in several directions with respect to the reaction plane, the shape of the source can be extracted. Hence, HBT interferometry provides an excellent tool to probe the characteristics of the quark-gluon plasma possibly created in high-energy heavy-ion collisions. This thesis introduces the main theoretical concepts of particle physics, the quark-gluon plasma and the technique of HBT interferometry. The ALICE experiment at the CERN Large Hadron Collider (LHC) is explained and the first azimuthally integrated results measured in Pb-Pb collisions at √(s_NN) = 2.76 TeV with ALICE are presented. A detailed two-track resolution study leading to a global pair cut for HBT analyses has been performed, and a framework for the event plane determination has been developed. The results from azimuthally sensitive HBT interferometry are compared to theoretical models and previous measurements at lower energies. Oscillations of the transverse radii as a function of the pair emission angle are observed, consistent with a source that is extended out-of-plane.

  10. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  11. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature...

  12. Healthy volunteers can be phenotyped using cutaneous sensitization pain models

    DEFF Research Database (Denmark)

    Werner, Mads U; Petersen, Karin; Rowbotham, Michael C

    2013-01-01

    Human experimental pain models leading to development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were consistently robust enough to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models.

  13. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  14. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and only later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.
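    The idea of reading parameter importance off the archive of per-generation best solutions can be sketched with a toy GA. Everything below (the two-parameter fitness function, population size, mutation scale) is illustrative, not the lumped reactor model of the paper; the point is only that the influential parameter stabilizes in the archive while the sloppy one keeps wandering:

```python
import random

random.seed(1)

def fitness(p):
    # p[0] strongly influences the output; p[1] only weakly ("sloppy")
    return 100.0 * (p[0] - 0.3) ** 2 + 0.01 * (p[1] - 0.7) ** 2

pop = [[random.random(), random.random()] for _ in range(40)]
archive = []                          # best solution of each generation

for gen in range(60):
    pop.sort(key=fitness)             # minimization
    archive.append(list(pop[0]))
    parents = pop[:20]
    pop = []
    for _ in range(40):               # uniform crossover + Gaussian mutation
        a, b = random.sample(parents, 2)
        pop.append([random.choice(pair) + random.gauss(0, 0.05)
                    for pair in zip(a, b)])

def spread(i, tail=20):
    # Standard deviation of parameter i over the late archive entries
    vals = [s[i] for s in archive[-tail:]]
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

# The important parameter is pinned near 0.3; the sloppy one drifts freely.
print(spread(0), spread(1))
```

Ranking parameters by how early and how tightly their archive statistics stabilize gives the qualitative importance ordering described in the abstract.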

  15. Extracting Models in Single Molecule Experiments

    Science.gov (United States)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally all single molecule data should be self-explanatory. However data originating from single molecule experiments is particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model, we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a pre-conceived model.

  16. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
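    A hypothetical sketch of the two ingredients described above: simulated Poisson population responses and an ideal observer that, in a two-interval detection task, picks the interval with the larger population spike count. The firing rates, population size, and counting window are made-up placeholders, not the model's fitted parameters:

```python
import math
import random

random.seed(2)

def pop_spike_count(rate_hz, n_neurons=50, window_s=0.2):
    # Summed spike count of a population of Poisson-firing neurons
    lam = rate_hz * window_s
    total = 0
    for _ in range(n_neurons):
        L, k, p = math.exp(-lam), 0, 1.0   # Knuth's Poisson sampler
        while True:
            p *= random.random()
            if p <= L:
                break
            k += 1
        total += k
    return total

def percent_correct(evoked_hz, base_hz=5.0, trials=500):
    # Ideal observer: choose the interval with the larger population count
    correct = 0
    for _ in range(trials):
        if pop_spike_count(base_hz + evoked_hz) > pop_spike_count(base_hz):
            correct += 1
    return correct / trials

p_weak = percent_correct(0.5)    # small evoked response: near chance
p_strong = percent_correct(5.0)  # large evoked response: near perfect
print(p_weak, p_strong)
```

Sweeping the evoked rate (as a stand-in for pulse-train amplitude) traces out a detection curve, which is the kind of psychometric prediction the ideal-observer analysis produces.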

  17. Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions

    Directory of Open Access Journals (Sweden)

    Sebastian Bab

    2012-04-01

    The concept of dynamic coalitions (also virtual organizations) describes the temporary interconnection of autonomous agents who share information or resources in order to achieve a common goal. Through modern technologies these coalitions may form across company, organization and system borders. Therefore, questions of access control and security are of vital significance for the architectures supporting these coalitions. In this paper, we present our first steps towards a formal framework for modeling and verifying the design of privacy-sensitive dynamic coalition infrastructures and their processes. In order to do so, we extend existing dynamic coalition modeling approaches with an access-control concept, which manages access to information through policies. Furthermore, we regard the processes underlying these coalitions and present first steps towards formalizing these processes. As a result, the present paper illustrates the usefulness of the Abstract State Machine (ASM) method for this task. We demonstrate a formal treatment of privacy-sensitive dynamic coalitions by two example ASMs which model certain access control situations. A logical consideration of these ASMs can lead to a better understanding and a verification of the ASMs according to the aspired specification.

  18. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of two main parameters used in a mathematical model that evaluates the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  19. Comparison between the Findings from the TROI Experiments and the Sensitivity Studies by Using the TEXAS-V Code

    International Nuclear Information System (INIS)

    Park, I. K.; Kim, J. H.; Hong, S. W.; Min, B. T.; Hong, S. H.; Song, J. H.; Kim, H. D.

    2006-01-01

    Since a steam explosion may breach the integrity of the reactor vessel and containment, it is one of the most important severe accident issues, and much experimental and analytical research on steam explosions has been performed. Although many findings have been obtained from this research, there still exist unsolved issues such as the explosivity of the real core material (corium) and the conversion ratio from thermal to mechanical energy. The TROI experiments were carried out to provide experimental data for these issues. The TROI experiments were performed with prototypic materials such as ZrO2 melt and a mixture of ZrO2 and UO2 melt (corium). Several steam explosion codes, including TEXAS-V, had been developed by considering the findings of past steam explosion experiments. However, some unique findings on steam explosions have been obtained from the series of TROI experiments. These findings should be considered in the application to reactor safety analysis using a computational code. In this paper, several findings from the TROI experiments are discussed and sensitivity studies on the TROI experimental parameters were conducted by using the TEXAS-V code and the TROI-13 test. The comparison between the TROI experimental findings and the results of the sensitivity study might allow us to know which parameters are important and which models are uncertain for steam explosions.

  20. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
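    One common global screening approach that fits the workflow described above is the Morris elementary-effects method (named here explicitly; the paper itself uses a variance-based workflow). The sketch below applies the mu* measure of Campolongo et al. to a hypothetical four-parameter stand-in for a black-box model — not the cellular Potts model of the paper — and flags the inert parameter that could be fixed, mirroring the conclusion that the model could be reduced by one parameter:

```python
import random

random.seed(3)

def model(p):
    # Toy stand-in for a black-box morphogenesis model:
    # p[0] dominant, p[1] and p[2] interact, p[3] inert (candidate for removal)
    return 4.0 * p[0] + 2.0 * p[1] * p[2] + 0.0 * p[3]

def mu_star(i, delta=0.1, r=200):
    # Mean absolute elementary effect of parameter i over r random base points
    effs = []
    for _ in range(r):
        p = [random.uniform(0.0, 1.0 - delta) for _ in range(4)]
        q = list(p)
        q[i] += delta
        effs.append((model(q) - model(p)) / delta)
    return sum(abs(e) for e in effs) / r

scores = [mu_star(i) for i in range(4)]
print([round(s, 2) for s in scores])  # p[3] screens out as non-influential
```

Parameters with mu* near zero can be fixed at nominal values before running the (much more expensive) variance-based analysis on the survivors.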

  1. Performance of high-resolution position-sensitive detectors developed for storage-ring decay experiments

    International Nuclear Information System (INIS)

    Yamaguchi, T.; Suzaki, F.; Izumikawa, T.; Miyazawa, S.; Morimoto, K.; Suzuki, T.; Tokanai, F.; Furuki, H.; Ichihashi, N.; Ichikawa, C.; Kitagawa, A.; Kuboki, T.; Momota, S.; Nagae, D.; Nagashima, M.; Nakamura, Y.; Nishikiori, R.; Niwa, T.; Ohtsubo, T.; Ozawa, A.

    2013-01-01

    Highlights: • Position-sensitive detectors were developed for storage-ring decay spectroscopy. • Fiber scintillation and silicon strip detectors were tested with heavy ion beams. • A new fiber scintillation detector showed an excellent position resolution. • Position and energy detection by silicon strip detectors enables full identification. -- Abstract: As next-generation spectroscopic tools, heavy-ion cooler storage rings will be a unique application of highly charged RI beam experiments. Decay spectroscopy of highly charged rare isotopes provides us with important information relevant to stellar conditions, such as for the s- and r-process nucleosynthesis. In-ring decay products of highly charged RI will be momentum-analyzed and reach a position-sensitive detector set-up located outside of the storage orbit. To realize such in-ring decay experiments, we have developed and tested two types of high-resolution position-sensitive detectors: silicon strips and scintillating fibers. The beam test experiments resulted in excellent position resolutions for both detectors, which will be available for future storage-ring experiments.

  2. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates, or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example the geometry and volumetric distribution of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model needed enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al.), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations.

  3. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions.

  4. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

    A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed by using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method is therefore used for preventing the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with LPM are carried out. (Author)

  5. Models for patients' recruitment in clinical trials and sensitivity analysis.

    Science.gov (United States)

    Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas

    2012-07-20

    Taking a decision on the feasibility and estimating the duration of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who investigate modelling of the enrolment period using Gamma-Poisson processes, which allows the development of statistical tools that can help the manager of the clinical trial answer these questions and thus plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0,t(1)] allow the parameters of the model to be calibrated; these are then used to make predictions on what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and to give possible corrective actions to the trial manager, especially regarding how many centres have to be open to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of fit to the data, and the sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question to deal with in the setting of our data set; in fact, these dates are not known. For this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
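    The Gamma-Poisson idea — centre-level recruitment rates drawn from a Gamma distribution, daily arrivals Poisson — can be sketched as a feasibility simulation that estimates the probability of finishing on time and shows the effect of opening more centres. Every number below (target, deadline, Gamma shape and scale) is hypothetical, not taken from the paper's data set:

```python
import math
import random

random.seed(4)

def poisson(lam):
    # Knuth's Poisson sampler (adequate for small daily rates)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def recruit_duration(target, daily_rate):
    # Days until `target` patients are enrolled at total rate `daily_rate`
    day, enrolled = 0, 0
    while enrolled < target:
        day += 1
        enrolled += poisson(daily_rate)
    return day

def prob_on_time(target=200, n_centres=10, deadline=120, sims=300,
                 shape=2.0, scale=0.1):
    # Gamma-Poisson: each centre's daily rate is a Gamma(shape, scale) draw
    on_time = 0
    for _ in range(sims):
        rate = sum(random.gammavariate(shape, scale) for _ in range(n_centres))
        if recruit_duration(target, rate) <= deadline:
            on_time += 1
    return on_time / sims

p10 = prob_on_time()               # feasibility with 10 centres open
p20 = prob_on_time(n_centres=20)   # opening more centres raises the odds
print(p10, p20)
```

In the calibrated setting of the paper, the Gamma parameters would be estimated from the interim data on [0, t(1)] instead of being fixed a priori as they are here.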

  6. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Yu sun, Park

    2007-01-01

    The buoyancy-driven convective flow fields are steady circulatory flows set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications. Application of natural convection can reduce costs and efforts remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. The commercial CFD code FLUENT was used, and various turbulence models were applied to the turbulent flow. Results from the CFD models are compared with each other from the viewpoints of grid resolution and flow characteristics. It has been shown that: -) obtaining general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference among results from the finer grid resolutions (y+ is defined as y+ = ρ·u·y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ respectively the fluid density and the fluid viscosity); -) the K-ε models show flow characteristics different from the K-ω models or from the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for the selection of the appropriate turbulence model to apply within the simulation.

  7. [A high sensitivity search for mu gamma: The mega experiment at LAMPF

    International Nuclear Information System (INIS)

    1990-01-01

    During the past 12-month period the Valparaiso University group has been active on LAMPF experiment 969, known as the MEGA experiment. This experiment is a search for the decay μ → eγ, a decay which would violate lepton number conservation and which is strictly forbidden by the standard model of electroweak interactions. Previous searches for this decay mode have set the present-day limit of 4.9 × 10⁻¹¹. The MEGA experiment is designed to test the standard model predictions to one part in 10¹³.

  8. Model Forecast Skill and Sensitivity to Initial Conditions in the Seasonal Sea Ice Outlook

    Science.gov (United States)

    Blanchard-Wrigglesworth, E.; Cullather, R. I.; Wang, W.; Zhang, J.; Bitz, C. M.

    2015-01-01

    We explore the skill of predictions of September Arctic sea ice extent from dynamical models participating in the Sea Ice Outlook (SIO). Forecasts submitted in August, at roughly 2-month lead times, are skillful. However, skill is lower in forecasts submitted to the SIO, which began in 2008, than in hindcasts (retrospective forecasts) of the last few decades. The multimodel mean SIO predictions offer slightly higher skill than the single-model SIO predictions, but neither beats a damped persistence forecast at lead times longer than 2 months. The models are largely unsuccessful at predicting each other, indicating a large difference in model physics and/or initial conditions. Motivated by this, we perform an initial condition sensitivity experiment with four SIO models, applying a fixed -1 m perturbation to the initial sea ice thickness. The significant range of the response among the models suggests that different model physics make a significant contribution to forecast uncertainty.
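    The damped persistence benchmark mentioned above can be sketched in a few lines; this is a generic anomaly forecast, not the SIO authors' code, and all the numbers are invented for illustration:

    ```python
    def damped_persistence(anomaly, lag1_autocorr, lead_months):
        """Forecast anomaly damped toward climatology: a_hat(t+L) = r**L * a(t)."""
        return anomaly * (lag1_autocorr ** lead_months)

    # hypothetical: current extent anomaly of -0.8 million km^2,
    # lag-1 autocorrelation 0.7, 2-month lead
    fc = damped_persistence(-0.8, 0.7, 2)
    ```

    A dynamical forecast is considered skillful at a given lead only if its error beats this trivially cheap baseline, which is the comparison the abstract applies.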

  9. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent

    2010-01-01

    We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable...... with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle...... is bootstrapped with a grasp density formed from visual cues. We show that the robot effectively applies its experience by downweighting poor grasp solutions, which results in increased success rates at subsequent learning cycles. We also present success rates in a practical scenario where a robot needs...

  10. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    Science.gov (United States)

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of any parameters to recharge. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow

  11. Particle transport model sensitivity on wave-induced processes

    Science.gov (United States)

    Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna

    2017-04-01

    Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are: the Stokes-Coriolis force and the sea-state-dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. Those particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. However, in the real ocean the waves also play the role of a reservoir for momentum and energy, because different amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity. The resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The performed sensitivity analyses demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for the drift-model performance, especially during extremes. Those effects cannot be neglected in search and rescue, oil-spill, transport of biological material, or larva drift modelling.
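    For deep-water linear waves, the Stokes drift profile such particle models rely on has the standard closed form u_s(z) = ω k a² e^{2kz}; a minimal sketch (wave parameters invented for illustration, not taken from the WAM/NEMO setup):

    ```python
    import math

    def stokes_drift(a, k, z, g=9.81):
        """Deep-water Stokes drift u_s(z) = omega * k * a**2 * exp(2*k*z), z <= 0.

        a: wave amplitude [m], k: wavenumber [1/m], z: depth (negative downward) [m].
        """
        omega = math.sqrt(g * k)  # deep-water dispersion relation
        return omega * k * a ** 2 * math.exp(2.0 * k * z)

    surface = stokes_drift(a=1.0, k=0.05, z=0.0)     # drift at the surface
    at_depth = stokes_drift(a=1.0, k=0.05, z=-20.0)  # decays exponentially with depth
    ```

    The exponential decay concentrates the wave-induced drift near the surface, which is why it matters most for oil fractions and larvae in the upper ocean.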

  12. About the use of rank transformation in sensitivity analysis of model output

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Sobol', Ilya M

    1995-01-01

    Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output relationships, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first-order terms (the 'main effects'), at the expense of the 'interactions' and 'higher-order interactions'. As a result, the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis
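    The core effect described above is easy to reproduce: for a monotonic but nonlinear input-output relation, Pearson correlation computed on the ranks (i.e., Spearman correlation) recovers the full monotone association that raw linear correlation understates. A self-contained sketch with invented data:

    ```python
    def ranks(xs):
        """Rank-transform a list (1 = smallest); assumes no ties for simplicity."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    x = [0.1 * i for i in range(1, 21)]
    y = [xi ** 5 for xi in x]                 # monotonic but strongly nonlinear
    raw = pearson(x, y)                       # noticeably below 1
    rank_based = pearson(ranks(x), ranks(y))  # Spearman: exactly 1 for monotone data
    ```

    The gap between `raw` and `rank_based` is precisely what makes ranks attractive for screening, and the note's caveat is that this gain comes at the price of down-weighting interaction effects.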

  13. Uncertainty and Sensitivity Analysis of Filtration Models for Non-Fickian transport and Hyperexponential deposition

    DEFF Research Database (Denmark)

    Yuan, Hao; Sin, Gürkan

    2011-01-01

    Uncertainty and sensitivity analyses are carried out to investigate the predictive accuracy of the filtration models for describing non-Fickian transport and hyperexponential deposition. Five different modeling approaches, involving the elliptic equation with different types of distributed...... filtration coefficients and the CTRW equation expressed in Laplace space, are selected to simulate eight experiments. These experiments involve both porous media and colloid-medium interactions of different heterogeneity degrees. The uncertainty of elliptic equation predictions with distributed filtration...... coefficients is larger than that with a single filtration coefficient. The uncertainties of model predictions from the elliptic equation and CTRW equation in Laplace space are minimal for solute transport. Higher uncertainties of parameter estimation and model outputs are observed in the cases with the porous...

  14. Uncertainty and sensitivity analysis of environmental transport models

    International Nuclear Information System (INIS)

    Margulies, T.S.; Lancaster, L.E.

    1985-01-01

    An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contributions of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site), step-wise regression analyses were performed on transformations of the simulated spatial concentration patterns. 27 references, 2 figures, 3 tables
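    Latin hypercube sampling, as used above to span the factor ranges, stratifies each variable into equal-probability bins and draws exactly one value per bin; a minimal stdlib sketch (not the CRAC-2 implementation):

    ```python
    import random

    def latin_hypercube(n_samples, n_vars, seed=0):
        """Return n_samples points in [0,1)^n_vars, stratified per variable."""
        rng = random.Random(seed)
        columns = []
        for _ in range(n_vars):
            # one jittered draw from each of n_samples equal-width strata
            col = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(col)  # decorrelate strata assignments across variables
            columns.append(col)
        return list(zip(*columns))  # rows = samples

    design = latin_hypercube(8, 3)
    ```

    Each unit-interval coordinate is then mapped through the inverse CDF of the variable's assumed distribution (deposition velocity, washout coefficient, etc.), guaranteeing coverage of the whole range with far fewer runs than simple random sampling.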

  15. Sensitivity of tropospheric heating rates to aerosols: A modeling study

    International Nuclear Information System (INIS)

    Hanna, A.F.; Shankar, U.; Mathur, R.

    1994-01-01

    The effect of aerosols on the radiation balance is critical to the energetics of the atmosphere. Because of the relatively long residence of specific types of aerosols in the atmosphere and their complex thermal and chemical interactions, understanding their behavior is crucial for understanding global climate change. The authors used the Regional Particulate Model (RPM) to simulate aerosols in the eastern United States in order to identify the aerosol characteristics of specific rural and urban areas; these characteristics include size, concentration, and vertical profile. A radiative transfer model based on an improved δ-Eddington approximation with 26 spectral intervals spanning the solar spectrum was then used to analyze the tropospheric heating rates associated with these different aerosol distributions. The authors compared heating rates forced by differences in surface albedo associated with different land-use characteristics, and found that tropospheric heating and surface cooling are sensitive to surface properties such as albedo

  16. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

    This study proposes a mathematical model of Anthroponotic visceral leishmaniasis epidemic with saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. The disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
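    The next-generation method referenced above computes the reproduction number as the spectral radius of F V⁻¹, where F holds the new-infection terms and V the transition terms linearized at the disease-free state. A hedged toy example (an SEIR-like two-compartment system with invented rates, not the leishmaniasis model of the paper):

    ```python
    def spectral_radius_2x2(m, iters=100):
        """Power iteration with max-norm scaling on a 2x2 matrix."""
        v = [1.0, 1.0]
        lam = 0.0
        for _ in range(iters):
            w = [m[0][0] * v[0] + m[0][1] * v[1],
                 m[1][0] * v[0] + m[1][1] * v[1]]
            lam = max(abs(w[0]), abs(w[1]))
            v = [w[0] / lam, w[1] / lam]
        return lam

    beta, sigma, gamma = 0.6, 0.5, 0.2   # invented transmission/progression/recovery rates
    F = [[0.0, beta], [0.0, 0.0]]        # new infections enter the exposed class
    # V = [[sigma, 0], [-sigma, gamma]]; its inverse for this lower-triangular case:
    Vinv = [[1.0 / sigma, 0.0], [1.0 / gamma, 1.0 / gamma]]
    FV = [[sum(F[i][k] * Vinv[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    R0 = spectral_radius_2x2(FV)         # for this SEIR structure, beta/gamma
    ```

    Sensitivity analysis then differentiates R0 with respect to each rate to rank which interventions (reducing beta, increasing gamma, etc.) push R0 below 1 most effectively.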

  17. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathway are important. However, this is true for only a few of the parameters of animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)
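    A numerical sensitivity analysis of the kind described, varying each parameter by a fixed percentage and observing the dose response, can be sketched as a one-at-a-time relative sensitivity (the toy dose model and parameter names below are invented for illustration, not FOOD III's):

    ```python
    import math

    def relative_sensitivity(model, params, name, pct=0.10):
        """(dY/Y) / (dP/P) via a symmetric +/-pct perturbation of one parameter."""
        base = model(params)
        up = dict(params); up[name] *= (1.0 + pct)
        dn = dict(params); dn[name] *= (1.0 - pct)
        return (model(up) - model(dn)) / (2.0 * pct * base)

    def dose(p):  # toy: dose = intake * dose_factor * exp(-decay * time)
        return p["intake"] * p["dose_factor"] * math.exp(-p["decay"] * p["time"])

    p = {"intake": 2.0, "dose_factor": 5.0, "decay": 0.1, "time": 3.0}
    s_factor = relative_sensitivity(dose, p, "dose_factor")  # 1: dose scales linearly in it
    s_decay = relative_sensitivity(dose, p, "decay")         # negative: faster decay, lower dose
    ```

    Ranking parameters by the magnitude of such indices is what identifies the dose factor as the dominant variable in the abstract's conclusions.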

  18. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  19. Sensitivity and uncertainty analyses of the HCLL mock-up experiment

    International Nuclear Information System (INIS)

    Leichtle, D.; Fischer, U.; Kodeli, I.; Perel, R.L.; Klix, A.; Batistoni, P.; Villari, R.

    2010-01-01

    Within the European Fusion Technology Programme, dedicated computational methods, tools and data have been developed and validated for sensitivity and uncertainty analyses of fusion neutronics experiments. The present paper is devoted to analyses of this kind for the recent neutronics experiment on a mock-up of the Helium-Cooled Lithium Lead Test Blanket Module for ITER at the Frascati neutron generator. They comprise both probabilistic and deterministic methodologies for assessing the uncertainties of nuclear responses due to nuclear data uncertainties, and their sensitivities to the involved reaction cross-section data. We have used the MCNP and MCSEN codes in the Monte Carlo approach and DORT and SUSD3D in the deterministic approach for transport and sensitivity calculations, respectively. In both cases the JEFF-3.1 and FENDL-2.1 libraries have been used for the transport data, and mainly the ENDF/B-VI.8 and SCALE6.0 libraries for the relevant covariance data. With a few exceptions, the two different methodological approaches were shown to provide consistent results. A total nuclear-data-related uncertainty in the range of 1-2% (1σ confidence level) was assessed for the tritium production in the HCLL mock-up experiment.

  20. Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods

    Directory of Open Access Journals (Sweden)

    Lee Sael

    2013-10-01

    Full Text Available With the accumulation of next-generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of a protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure prediction alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data that are obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. A short review of current efforts to integrate experimental data into structural modeling is also provided.

  1. Computational modeling and sensitivity in uniform DT burn

    International Nuclear Information System (INIS)

    Hansen, Jon; Hryniw, Natalia; Kesler, Leigh A.; Li, Frank; Vold, Erik

    2010-01-01

    Understanding deuterium-tritium (DT) fusion is essential to achieving ignition in inertial confinement fusion. A burning DT plasma in a three temperature (3T) approximation and uniform in space is modeled as a system of five non-linear coupled ODEs. Special focus is given to the effects of the Compton coupling, Planck opacity, and electron-ion coupling terms. Semi-implicit differencing is used to solve the system of equations. Time step size is varied to examine the stability and convergence of each solution. Data from the NDI, SESAME, and TOPS databases are extracted to create analytic fits for the reaction rate parameter, the Planck opacity, and the coupling frequencies of the plasma temperatures. The impact of different high-order fits to NDI data (the reaction rate parameter) and of using TOPS versus SESAME opacity data is explored, and the sensitivity to several physics parameters in the coupling terms is also examined. The base model recovers the accepted 3T results for the temperature and burn histories. The Compton coupling is found to have a significant impact on the results. Varying a coefficient of this term shows that the model results can give reasonably good agreement with the peak temperatures reported in multi-group results as well as the accepted 3T results. The base model assumes a molar density of 1 mol/cm^3, as well as a 5 keV initial temperature for all three temperatures. Different initial conditions are explored as well. Initial temperatures are set to 1 and 3 keV, the ratio of D to T is varied (2 and 3, as opposed to 1 in the base model), and densities are set to 10 mol/cm^3 and 100 mol/cm^3. Again varying the Compton coefficient, the ion temperature results in the higher density case are in reasonable agreement with a recently published kinetic model.
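    The semi-implicit differencing used for the stiff coupling terms can be illustrated on a reduced two-temperature relaxation dTe/dt = ν(Ti − Te), dTi/dt = ν(Te − Ti): each temperature is treated implicitly in itself and explicitly in its partner, which stays stable at large time steps. A sketch with invented values (a two-equation reduction, not the paper's five-equation 3T system):

    ```python
    def semi_implicit_step(Te, Ti, nu, dt):
        """One step of dTe/dt = nu*(Ti - Te), dTi/dt = nu*(Te - Ti).

        Implicit in the variable being updated, explicit in its partner:
        Te_new = (Te + dt*nu*Ti) / (1 + dt*nu), and symmetrically for Ti.
        """
        Te_new = (Te + dt * nu * Ti) / (1.0 + dt * nu)
        Ti_new = (Ti + dt * nu * Te) / (1.0 + dt * nu)
        return Te_new, Ti_new

    Te, Ti = 5.0, 1.0  # keV, invented initial temperatures
    for _ in range(200):
        Te, Ti = semi_implicit_step(Te, Ti, nu=2.0, dt=0.1)
    # Both temperatures relax toward the common equilibrium (5 + 1)/2 = 3 keV,
    # and the scheme conserves Te + Ti at every step (up to rounding).
    ```

    Varying `dt` here mirrors the paper's time-step study: the update damps rather than amplifies the coupling for any positive step, unlike a fully explicit difference.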

  2. Computational modeling and sensitivity in uniform DT burn

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jon [Los Alamos National Laboratory; Hryniw, Natalia [Los Alamos National Laboratory; Kesler, Leigh A [Los Alamos National Laboratory; Li, Frank [Los Alamos National Laboratory; Vold, Erik [Los Alamos National Laboratory

    2010-01-01

    Understanding deuterium-tritium (DT) fusion is essential to achieving ignition in inertial confinement fusion. A burning DT plasma in a three temperature (3T) approximation and uniform in space is modeled as a system of five non-linear coupled ODEs. Special focus is given to the effects of the Compton coupling, Planck opacity, and electron-ion coupling terms. Semi-implicit differencing is used to solve the system of equations. Time step size is varied to examine the stability and convergence of each solution. Data from the NDI, SESAME, and TOPS databases are extracted to create analytic fits for the reaction rate parameter, the Planck opacity, and the coupling frequencies of the plasma temperatures. The impact of different high-order fits to NDI data (the reaction rate parameter) and of using TOPS versus SESAME opacity data is explored, and the sensitivity to several physics parameters in the coupling terms is also examined. The base model recovers the accepted 3T results for the temperature and burn histories. The Compton coupling is found to have a significant impact on the results. Varying a coefficient of this term shows that the model results can give reasonably good agreement with the peak temperatures reported in multi-group results as well as the accepted 3T results. The base model assumes a molar density of 1 mol/cm^3, as well as a 5 keV initial temperature for all three temperatures. Different initial conditions are explored as well. Initial temperatures are set to 1 and 3 keV, the ratio of D to T is varied (2 and 3, as opposed to 1 in the base model), and densities are set to 10 mol/cm^3 and 100 mol/cm^3. Again varying the Compton coefficient, the ion temperature results in the higher density case are in reasonable agreement with a recently published kinetic model.

  3. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
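    The Morris screening step referenced above estimates, for each input, the mean absolute "elementary effect" of a finite perturbation; inputs with small effects are dropped before the expensive variance-based stage. A minimal radial one-at-a-time sketch (the toy model and all numbers are invented, and this is a simplified variant of Morris's trajectory design):

    ```python
    import random

    def morris_mu_star(model, k, r=20, delta=0.25, seed=1):
        """Mean absolute elementary effect per input, from r radial one-step designs."""
        rng = random.Random(seed)
        mu_star = [0.0] * k
        for _ in range(r):
            x = [rng.random() * (1.0 - delta) for _ in range(k)]  # keep x + delta in [0, 1]
            base = model(x)
            for i in range(k):
                xp = list(x)
                xp[i] += delta
                mu_star[i] += abs(model(xp) - base) / delta
        return [m / r for m in mu_star]

    # toy model: strong in x0, weak and nonlinear in x1, independent of x2
    f = lambda x: 10.0 * x[0] + 0.5 * x[1] ** 2
    mu = morris_mu_star(f, k=3)
    ```

    Here `mu[0]` dominates and `mu[2]` vanishes, so only the first (and perhaps second) input would be carried into the gPCE stage, which is exactly the cost reduction the two-step approach exploits.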

  4. An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.

    Science.gov (United States)

    Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C

    2013-08-01

    To simulate the consequences of management in dairy herds, the use of individual-based herd models is very useful and has become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions of reproductive performance, particularly concerning calving-to-first ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phase), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data from the database used to build the model and from the bibliography to validate the model. Despite comprising a whole series of reproductive steps, the model made it possible to simulate realistic global reproduction outputs. It was able to simulate well the overall reproductive performance observed in farms in terms of both success rate (recalving rate) and reproduction delays (calving interval). This model is intended to be integrated into herd simulation models to test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.

  5. Laryngeal sensitivity evaluation and dysphagia: Hospital Sírio-Libanês experience

    Directory of Open Access Journals (Sweden)

    Orlando Parise Junior

    Full Text Available CONTEXT: Laryngeal sensitivity is important in the coordination of swallowing and the avoidance of aspiration. OBJECTIVE: To briefly review the physiology of swallowing and report on our experience with laryngeal sensitivity evaluation among patients presenting dysphagia. TYPE OF STUDY: Prospective. SETTING: Endoscopy Department, Hospital Sírio-Libanês. METHODS: Clinical data, endoscopic findings from the larynx and laryngeal sensitivity, as assessed via the Flexible Endoscopic Evaluation of Swallowing with Sensory Testing (FEESST) protocol (using the Pentax AP4000 system), were prospectively studied. The chi-squared and Student t tests were used to compare differences, which were considered significant if p ≤ 0.05. RESULTS: The study included 111 patients. A direct association was observed for hyperplasia and hyperemia of the posterior commissure region in relation to globus (p = 0.01) and regurgitation (p = 0.04). Hyperemia of the posterior commissure region had a direct association with sialorrhea (p = 0.03) and an inverse association with xerostomia (p = 0.03). There was a direct association between severe laryngeal sensitivity deficit and previous radiotherapy of the head and neck (p = 0.001). DISCUSSION: These data emphasize the association between proximal gastroesophageal reflux and chronic posterior laryngitis, and suggest that decreased laryngeal sensitivity could be a side effect of radiotherapy. CONCLUSIONS: Even considering that these results are preliminary, the endoscopic findings from laryngoscopy seem to be important in the diagnosis of proximal gastroesophageal reflux. Study of laryngeal sensitivity may have the potential to improve the knowledge and clinical management of dysphagia.

  6. Cooling tower plume - model and experiment

    Science.gov (United States)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper discusses the description of a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model shall be validated in the subsequent period.

  7. Cooling tower plume - model and experiment

    Directory of Open Access Journals (Sweden)

    Cizek Jan

    2017-01-01

    Full Text Available The paper discusses the description of a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model shall be validated in the subsequent period.

  8. A position sensitive silicon detector for AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy)

    CERN Multimedia

    Gligorova, A

    2014-01-01

    The AEḡIS experiment (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is located at the Antiproton Decelerator (AD) at CERN and studies antimatter. The main goal of the AEḡIS experiment is to carry out the first measurement of the gravitational acceleration for antimatter in Earth’s gravitational field to a 1% relative precision. Such a measurement would test the Weak Equivalence Principle (WEP) of Einstein’s General Relativity. The gravitational acceleration for antihydrogen will be determined using a set of gravity measurement gratings (Moiré deflectometer) and a position sensitive detector. The vertical shift due to gravity of the falling antihydrogen atoms will be detected with a silicon strip detector, where the annihilation of antihydrogen will take place. This poster presents part of the development process of this detector.

  9. Sensitivity of MENA Tropical Rainbelt to Dust Shortwave Absorption: A High Resolution AGCM Experiment

    KAUST Repository

    Bangalath, Hamza Kunhu

    2016-06-13

    Shortwave absorption is one of the most important, but the most uncertain, components of direct radiative effect by mineral dust. It has a broad range of estimates from different observational and modeling studies and there is no consensus on the strength of absorption. To elucidate the sensitivity of the Middle East and North Africa (MENA) tropical summer rainbelt to a plausible range of uncertainty in dust shortwave absorption, AMIP-style global high resolution (25 km) simulations are conducted with and without dust, using the High-Resolution Atmospheric Model (HiRAM). Simulations with dust comprise three different cases by assuming dust as a very efficient, standard and inefficient absorber. Inter-comparison of these simulations shows that the response of the MENA tropical rainbelt is extremely sensitive to the strength of shortwave absorption. Further analyses reveal that the sensitivity of the rainbelt stems from the sensitivity of the multi-scale circulations that define the rainbelt. The maximum response and sensitivity are predicted over the northern edge of the rainbelt, geographically over Sahel. The sensitivity of the responses over the Sahel, especially that of precipitation, is comparable to the mean state. Locally, the response in precipitation reaches up to 50% of the mean, while dust is assumed to be a very efficient absorber. Taking into account that Sahel has a very high climate variability and is extremely vulnerable to changes in precipitation, the present study suggests the importance of reducing uncertainty in dust shortwave absorption for a better simulation and interpretation of the Sahel climate.

  10. Sensitivity of MENA Tropical Rainbelt to Dust Shortwave Absorption: A High Resolution AGCM Experiment

    KAUST Repository

    Bangalath, Hamza Kunhu; Stenchikov, Georgiy L.

    2016-01-01

    Shortwave absorption is one of the most important, but the most uncertain, components of direct radiative effect by mineral dust. It has a broad range of estimates from different observational and modeling studies and there is no consensus on the strength of absorption. To elucidate the sensitivity of the Middle East and North Africa (MENA) tropical summer rainbelt to a plausible range of uncertainty in dust shortwave absorption, AMIP-style global high resolution (25 km) simulations are conducted with and without dust, using the High-Resolution Atmospheric Model (HiRAM). Simulations with dust comprise three different cases by assuming dust as a very efficient, standard and inefficient absorber. Inter-comparison of these simulations shows that the response of the MENA tropical rainbelt is extremely sensitive to the strength of shortwave absorption. Further analyses reveal that the sensitivity of the rainbelt stems from the sensitivity of the multi-scale circulations that define the rainbelt. The maximum response and sensitivity are predicted over the northern edge of the rainbelt, geographically over Sahel. The sensitivity of the responses over the Sahel, especially that of precipitation, is comparable to the mean state. Locally, the response in precipitation reaches up to 50% of the mean, while dust is assumed to be a very efficient absorber. Taking into account that Sahel has a very high climate variability and is extremely vulnerable to changes in precipitation, the present study suggests the importance of reducing uncertainty in dust shortwave absorption for a better simulation and interpretation of the Sahel climate.

  11. A shorter and more specific oral sensitization-based experimental model of food allergy in mice.

    Science.gov (United States)

    Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica

    2012-07-31

    Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks, followed by a challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score as the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Sensitivity of modeled ozone concentrations to uncertainties in biogenic emissions

    International Nuclear Information System (INIS)

    Roselle, S.J.

    1992-06-01

    The study examines the sensitivity of regional ozone (O3) modeling to uncertainties in biogenic emissions estimates. The United States Environmental Protection Agency's (EPA) Regional Oxidant Model (ROM) was used to simulate the photochemistry of the northeastern United States for the period July 2-17, 1988. An operational model evaluation showed that ROM had a tendency to underpredict O3 when observed concentrations were above 70-80 ppb and to overpredict O3 when observed values were below this level. On average, the model underpredicted daily maximum O3 by 14 ppb. Spatial patterns of O3, however, were reproduced favorably by the model. Several simulations were performed to analyze the effects of uncertainties in biogenic emissions on predicted O3 and to study the effectiveness of two strategies of controlling anthropogenic emissions for reducing high O3 concentrations. Biogenic hydrocarbon emissions were adjusted by a factor of 3 to account for the existing range of uncertainty in these emissions. The impact of biogenic emission uncertainties on O3 predictions depended upon the availability of NOx. In some extremely NOx-limited areas, increasing the amount of biogenic emissions decreased O3 concentrations. Two control strategies were compared in the simulations: (1) reduced anthropogenic hydrocarbon emissions, and (2) reduced anthropogenic hydrocarbon and NOx emissions. The simulations showed that hydrocarbon emission controls were more beneficial to the New York City area, but that combined NOx and hydrocarbon controls were more beneficial to other areas of the Northeast. Hydrocarbon controls were more effective as biogenic hydrocarbon emissions were reduced, whereas combined NOx and hydrocarbon controls were more effective as biogenic hydrocarbon emissions were increased.

  13. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key Points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
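    The OFAT procedure described above can be illustrated in a few lines: hold everything fixed, perturb one modeling choice at a time, and compare the resulting apportionment against a baseline. The sketch below uses a deliberately simple non-Bayesian least-squares mixing model with made-up tracer signatures; it is not the authors' full Bayesian setup.

```python
import numpy as np

# Hypothetical tracer signatures (rows: tracers, cols: sources) and a mixture.
A = np.array([[1.0, 4.0, 9.0],
              [2.0, 1.0, 7.0],
              [5.0, 3.0, 2.0]])
mix = A @ np.array([0.2, 0.3, 0.5])  # mixture built from known proportions

def apportion(signatures, mixture):
    """Least-squares source proportions, clipped and renormalised to sum to 1."""
    p, *_ = np.linalg.lstsq(signatures, mixture, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# One-factor-at-a-time: perturb one source signature at a time by +10%
# and record how the apportionment shifts relative to the baseline.
base = apportion(A, mix)
for j in range(A.shape[1]):
    A_pert = A.copy()
    A_pert[:, j] *= 1.10
    shift = apportion(A_pert, mix) - base
    print(f"source {j}: max |shift| = {np.abs(shift).max():.3f}")
```

The same loop structure applies whether the inner model is a least-squares fit, as here, or a full Bayesian posterior.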

  14. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs.
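    Sequential bifurcation, the group-screening idea named above, can be sketched as follows: evaluate a whole group of factors at their high level against the all-low baseline, and recursively split any group whose aggregate effect is non-negligible. The toy response below is an assumed additive, monotone function with two active factors; it stands in for the expensive forming-process code.

```python
import numpy as np

# Toy response: of 8 factors, only factors 2 and 6 matter (assumed additive).
effects = np.array([0.0, 0.0, 1.5, 0.0, 0.0, 0.0, 0.8, 0.0])

def response(x):
    """Black-box model evaluated at a 0/1 vector of factor levels."""
    return float(effects @ x)

def seq_bifurcation(lo_idx, hi_idx, tol=1e-9):
    """Recursively split the factor range [lo_idx, hi_idx], keeping only
    sub-groups whose aggregate effect exceeds tol."""
    x_lo = np.zeros(len(effects))
    x_hi = np.zeros(len(effects))
    x_hi[lo_idx:hi_idx + 1] = 1.0           # group at high level, rest low
    group_effect = response(x_hi) - response(x_lo)
    if group_effect <= tol:
        return []                           # whole group is inert: prune it
    if lo_idx == hi_idx:
        return [lo_idx]                     # single active factor found
    mid = (lo_idx + hi_idx) // 2
    return seq_bifurcation(lo_idx, mid, tol) + seq_bifurcation(mid + 1, hi_idx, tol)

active = seq_bifurcation(0, len(effects) - 1)
print("active factors:", active)   # -> [2, 6]
```

The pruning step is what makes the method economical: inert groups cost one model run regardless of how many factors they contain.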

  15. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    Science.gov (United States)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil and complex groundwater chemistry, exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems.
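    A local sensitivity ranking of the kind described above can be sketched with normalised one-at-a-time finite differences, S_i = (p_i / f) · ∂f/∂p_i. The model function and parameter values below are placeholders for illustration, not the PHAST biogeochemical model or its 36 parameters.

```python
import numpy as np

def model(p):
    # Stand-in for the TCE degradation model: any scalar performance
    # metric of a parameter vector (a made-up nonlinear combination).
    return p[0] * np.exp(-p[1]) + p[2] ** 2 + 0.01 * p[3]

p0 = np.array([1.0, 0.5, 2.0, 3.0])

def local_sensitivities(f, p, rel_step=0.01):
    """Normalised one-at-a-time sensitivities S_i = (p_i / f) * df/dp_i,
    estimated by forward differences with a relative step."""
    base = f(p)
    s = np.zeros(len(p))
    for i in range(len(p)):
        q = p.copy()
        h = rel_step * q[i]
        q[i] += h
        s[i] = (f(q) - base) / h * (p[i] / base)
    return s

s = local_sensitivities(model, p0)
ranking = np.argsort(-np.abs(s))   # most influential parameter first
print("parameters ranked by |sensitivity|:", ranking)
```

Ranking by |S_i| is what isolates the handful of parameters that dominate predictions; a global analysis would then vary those jointly over literature ranges.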

  16. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences.  This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  17. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters associated with non-negligible uncertainties often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both concrete composition and clay mineralogical assemblages since, in most published studies, authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of concrete phases, (2) the type of clayey materials and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites).

  18. Thermal experiments in the ADS target model

    International Nuclear Information System (INIS)

    Efanov, A.D.; Orlov, Yu.I.; Sorokin, A.P.; Ivanov, E.F.; Bogoslovskaya, G.P.; Li, N.

    2002-01-01

    Experiments supporting the design of the target heat model and the development of methods for investigating heat exchange in the target were conducted with the aim of analysing the thermomechanical and strength characteristics of the device; experimental data on the temperature distribution in the coolant and the membrane were obtained. The data demonstrate that the temperature non-uniformities of the membrane and coolant are connected with the variability of the temperature distribution near the membrane. Notable features of the experiment include maximal temperature oscillations at the highest point of the membrane and energy-carrying temperature oscillations in the range 0 - 1 Hz.

  19. Performance Modeling of Mimosa pudica Extract as a Sensitizer for Solar Energy Conversion

    Directory of Open Access Journals (Sweden)

    M. B. Shitta

    2016-01-01

    Full Text Available An organic material is proposed as a sustainable sensitizer and a replacement for the synthetic sensitizer in dye-sensitized solar cell technology. Using the liquid extract from the leaf of a plant called Mimosa pudica (M. pudica) as a sensitizer, the performance characteristics of the extract of M. pudica are investigated. The photo-anode of each of the solar cell samples is passivated with a self-assembled monolayer (SAM) from a set of four materials, including alumina, formic acid, gelatine, and oxidized starch. Three sets of five samples of an M. pudica–based solar cell are produced, with the fifth sample used as the control experiment. Each of the solar cell samples has an active area of 0.3848 cm². A two-dimensional finite volume method (FVM) is used to model the transport of ions within the monolayer of the solar cell. The performance of the experimentally fabricated solar cells compares qualitatively with the ones obtained from the literature and the simulated solar cells. The highest efficiency of 3% is obtained from the use of the extract as a sensitizer. It is anticipated that the comparison of the performance characteristics with further research on the concentration of M. pudica extract will enhance the development of a reliable and competitive organic solar cell. It is also recommended that further research should be carried out on the concentration of the extract and the electrolyte used in this study for a possible improved performance of the cell.

  20. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach for sensitivity and uncertainty assessment for waste management studies has been implemented. First general contribution analysis is done through a regular interpretation of inventory and impact...

  1. Modeling the Nab Experiment Electronics in SPICE

    Science.gov (United States)

    Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    The goal of the Nab experiment is to measure the neutron beta-decay coefficients a, the electron-neutrino correlation, and b, the Fierz interference term, to precisely test the Standard Model and to probe for Beyond the Standard Model physics. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions to each of the components have been determined where available. We will present a comparison of the output from the SPICE model, the analytic solutions, and empirically determined data.
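    The Nab amplification chain itself is not given here, but the kind of analytic cross-check the abstract mentions can be illustrated with the classic CR-RC shaper, whose unit-step response has the closed form v(t) = (t/τ)·exp(−t/τ) and peaks at t = τ. The time constant below is illustrative, not a Nab design value.

```python
import numpy as np

tau = 1.0e-6  # shaping time constant (s), illustrative

def crrc_step_response(t, tau=tau):
    """Analytic unit-step response of a CR-RC shaper with equal time
    constants: v(t) = (t/tau) * exp(-t/tau)."""
    return (t / tau) * np.exp(-t / tau)

t = np.linspace(0.0, 10.0 * tau, 1001)
v = crrc_step_response(t)
peak_t = t[np.argmax(v)]
print(f"peak at t/tau = {peak_t / tau:.2f}, peak amplitude = {v.max():.3f}")
```

A SPICE transient run of the same stage can be validated point-by-point against this closed form before moving on to stages without analytic solutions.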

  2. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (~ a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
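    The standard (unmodified) Murnaghan form mentioned above for the unreacted explosive can be written down directly: P(V) = (K₀/K₀′)[(V₀/V)^K₀′ − 1], with bulk modulus K₀ and its pressure derivative K₀′. The parameter values below are illustrative, not the calibrated LX-17 constants.

```python
def murnaghan_pressure(v, v0=1.0, k0=10.0, k0p=7.0):
    """Murnaghan EOS: P(V) = (K0 / K0') * ((V0 / V)**K0' - 1).
    v0: reference volume, k0: bulk modulus (same units as P),
    k0p: pressure derivative of the bulk modulus (dimensionless).
    Parameter values here are illustrative placeholders."""
    return (k0 / k0p) * ((v0 / v) ** k0p - 1.0)

# Pressure is zero at the reference volume and rises steeply on compression.
print(murnaghan_pressure(1.0))                                 # -> 0.0
print(murnaghan_pressure(0.8) > murnaghan_pressure(0.9) > 0)   # -> True
```

A "modified" Murnaghan form, as used in the paper, would adjust this baseline expression; the modification itself is not specified in the abstract.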

  3. GCR Environmental Models I: Sensitivity Analysis for GCR Environments

    Science.gov (United States)

    Slaba, Tony C.; Blattnig, Steve R.

    2014-01-01

    Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate model-free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most of the GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to very accurately reproduce the ACE/CRIS data while inducing very different effective dose values behind shielding.
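    The bookkeeping behind a statement like "Z>2 below 500 MeV/n induces less than 5% of effective dose" is a masked sum over (Z, E) contribution groups. The table below is invented purely for illustration; it is not output from any GCR model.

```python
import numpy as np

# Hypothetical dose contributions per (Z, E) group.
# Columns: charge Z, boundary energy E (MeV/n), dose contribution (arbitrary).
groups = np.array([
    (1,  200,  5.0), (1, 1000, 30.0),
    (2,  200,  2.0), (2, 1000, 10.0),
    (6,  200,  0.5), (6, 1000, 20.0),
    (26, 200,  1.0), (26, 1000, 25.0),
])
Z, E, dose = groups[:, 0], groups[:, 1], groups[:, 2]

# Fraction of total effective dose from heavy ions (Z > 2) below 500 MeV/n.
mask = (Z > 2) & (E < 500)
fraction = dose[mask].sum() / dose.sum()
print(f"Z>2, E<500 MeV/n fraction: {fraction:.1%}")
```

Summing contributions this way is also how one decides which energy regions a validation dataset (such as ACE/CRIS, below 500 MeV/n) actually constrains.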

  4. VUV-sensitive silicon-photomultipliers for the nEXO-experiment

    Energy Technology Data Exchange (ETDEWEB)

    Wrede, Gerrit; Bayerlein, Reimund; Hufschmidt, Patrick; Jamil, Ako; Schneider, Judith; Wagenpfeil, Michael; Ziegler, Tobias; Hoessl, Juergen; Anton, Gisela; Michel, Thilo [ECAP, Friedrich-Alexander-Universitaet Erlangen-Nuernberg (Germany)

    2016-07-01

    The nEXO (next Enriched Xenon Observatory) experiment will search for the neutrinoless double beta decay of Xe-136 with a liquid xenon TPC (Time Projection Chamber). The sensitivity of the experiment is related to the energy resolution, which itself depends on the accuracies of the measurements of the amount of drifting electrons and the number of scintillation photons, whose wavelength lies in the vacuum ultraviolet band. Silicon Photomultipliers (SiPM) shall be used for the detection of the scintillation light, since they can be produced extremely radiopure. Commercially available SiPM do not fulfill all requirements of the nEXO experiment, thus a dedicated development is necessary. To characterize the silicon photomultipliers, we have built a test apparatus for xenon liquefaction, in which a VUV-sensitive photomultiplier tube can be operated together with the SiPM. In this contribution we present our apparatus for the SiPM characterization measurements and our latest results on the test of the silicon photomultipliers for the detection of xenon scintillation light.

  5. Diagnosis and Quantification of Climatic Sensitivity of Carbon Fluxes in Ensemble Global Ecosystem Models

    Science.gov (United States)

    Wang, W.; Hashimoto, H.; Milesi, C.; Nemani, R. R.; Myneni, R.

    2011-12-01

    Terrestrial ecosystem models are primary scientific tools to extrapolate our understanding of ecosystem functioning from point observations to global scales as well as from past climatic conditions into the future. However, no model is nearly perfect and there are often considerable structural uncertainties existing between different models. Ensemble model experiments thus become a mainstream approach in evaluating the current status of global carbon cycle and predicting its future changes. A key task in such applications is to quantify the sensitivity of the simulated carbon fluxes to climate variations and changes. Here we develop a systematic framework to address this question solely by analyzing the inputs and the outputs from the models. The principle of our approach is to treat the long-term (~30 years) average of the inputs/outputs as a quasi-equilibrium of the climate-vegetation system while treating the anomalies of carbon fluxes as responses to climatic disturbances. In this way, the corresponding relationships can be largely linearized and analyzed using conventional time-series techniques. This method is used to characterize three major aspects of the vegetation models that are most important to the global carbon cycle, namely the primary production, the biomass dynamics, and the ecosystem respiration. We apply this analytical framework to quantify the climatic sensitivity of an ensemble of models including CASA, Biome-BGC, LPJ as well as several other DGVMs from previous studies, all driven by the CRU-NCEP climate dataset. The detailed analysis results are reported in this study.
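    Once flux anomalies are treated as linear responses to climate anomalies, as described above, the sensitivity estimate reduces to an ordinary least-squares regression of flux on the climate drivers. The synthetic drivers and "true" sensitivities below are assumptions for illustration, not output from CASA, Biome-BGC, or LPJ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 360  # monthly anomalies over ~30 years

# Synthetic climate-driver anomalies (temperature, precipitation) and a
# carbon-flux anomaly built with assumed linear sensitivities 2.0 and -0.5.
temp = rng.standard_normal(n)
prec = rng.standard_normal(n)
flux = 2.0 * temp - 0.5 * prec + 0.1 * rng.standard_normal(n)

# Linearised sensitivity estimate: regress flux anomalies on the drivers.
X = np.column_stack([temp, prec, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, flux, rcond=None)
print("estimated sensitivities:", beta[:2])  # close to [2.0, -0.5]
```

Applied to each model in an ensemble, the fitted coefficients give directly comparable climate sensitivities even when the models' internal structures differ.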

  6. Modelling Nd-isotopes with a coarse resolution ocean circulation model: Sensitivities to model parameters and source/sink distributions

    International Nuclear Information System (INIS)

    Rempfer, Johannes; Stocker, Thomas F.; Joos, Fortunat; Dutay, Jean-Claude; Siddall, Mark

    2011-01-01

    The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleo-oceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes 143Nd and 144Nd using the Bern3D model, a low-resolution ocean model. The high computational efficiency of the Bern3D model in conjunction with our comprehensive approach allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations as well as εNd in good agreement with observations. εNd co-varies with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain magnitude, its value affects εNd only to a small extent. On the other hand, the parametrisation of reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and their isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux.
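    The εNd notation used above is the deviation of a measured 143Nd/144Nd ratio from the chondritic (CHUR) reference, expressed in parts per 10⁴: εNd = (R/R_CHUR − 1)·10⁴, with R_CHUR = 0.512638 the conventional present-day value. A minimal calculator, with illustrative input ratios:

```python
CHUR_143_144 = 0.512638  # conventional present-day chondritic 143Nd/144Nd

def epsilon_nd(ratio_143_144):
    """Express a measured 143Nd/144Nd ratio in epsilon units:
    eps = (R / R_CHUR - 1) * 1e4."""
    return (ratio_143_144 / CHUR_143_144 - 1.0) * 1.0e4

# Illustrative ratios: lower ratios give more negative (unradiogenic,
# continental-crust-like) epsilon values.
print(round(epsilon_nd(0.51195), 1))
print(round(epsilon_nd(0.51245), 1))
```

Because water masses inherit distinct εNd from their source regions, this single scalar per sample is what lets the tracer map mixing between them.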

  7. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.
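    A randomly varying injection-rate boundary condition of the kind used in the stochastic simulations above can be generated as smoothed Gaussian fluctuations about the mean rate. The fluctuation amplitude and smoothing window below are assumptions for illustration, not measured syringe-pump characteristics.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 60.0, 601)   # time axis (s)
q_mean = 1.0                      # mean injection rate (arbitrary units)

# Pump fluctuations modelled as Gaussian noise smoothed with a moving
# average, giving correlated (non-white) variations about the mean.
noise = rng.standard_normal(t.size)
kernel = np.ones(25) / 25.0
q = q_mean * (1.0 + 0.05 * np.convolve(noise, kernel, mode="same"))
print(f"mean rate {q.mean():.3f}, std {q.std():.4f}")
```

Feeding such a time series to the inlet boundary, instead of a constant rate, is one way to reproduce the run-to-run variability seen in the experiments.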

  8. Background modeling for the GERDA experiment

    Science.gov (United States)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.
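    Decomposing an observed energy spectrum into source contributions, as in the background model above, can be sketched as a linear template fit: the spectrum is modelled as a weighted sum of per-component templates and the weights are fitted. The templates and activities below are invented for illustration, not GERDA data.

```python
import numpy as np

# Hypothetical normalised background templates (rows: energy bins,
# columns: components, e.g. 214Bi, 228Th, 42K).
T = np.array([[0.5, 0.1, 0.2],
              [0.3, 0.3, 0.3],
              [0.1, 0.4, 0.3],
              [0.1, 0.2, 0.2]])
true_act = np.array([100.0, 40.0, 60.0])   # component activities
observed = T @ true_act                    # noiseless mock spectrum

# Least-squares decomposition of the observed spectrum into components.
act, *_ = np.linalg.lstsq(T, observed, rcond=None)
print("fitted activities:", act)
```

With fitted weights in hand, the expected background in the signal region is just the weighted sum of the templates restricted to those bins; a real analysis would add Poisson statistics and source-position-dependent templates.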

  9. Background modeling for the GERDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Becerici-Schmidt, N. [Max-Planck-Institut für Physik, München (Germany); Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Q{sub ββ} come from {sup 214}Bi, {sup 228}Th, {sup 42}K, {sup 60}Co and α emitting isotopes in the {sup 226}Ra decay chain, with a fraction depending on the assumed source positions.

  10. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibrations, which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications that are in conformity with its purposes.

  11. Modelization of ratcheting in biaxial experiments

    International Nuclear Information System (INIS)

    Guionnet, C.

    1989-08-01

    A new unified viscoplastic constitutive equation has been developed in order to interpret ratcheting experiments on mechanical structures of fast reactors. The model is based essentially on a generalized Armstrong-Frederick equation for the kinematic variable; the coefficient of the dynamic recovery term in this equation is a function of both the instantaneous and the accumulated inelastic strain, and is allowed to vary in an appropriate manner in order to reproduce the experimental ratcheting rate. The validity of the model is verified by comparing predictions with experimental results for austenitic stainless steel (17-12 SPH) tubular specimens subjected to cyclic torsional loading under constant tensile stress at 600 °C.

  12. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with the Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the data needs of the four phases of the experiment are briefly stated. The preparation of the DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, an analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and highest-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested these data was incorporated into their refinement. Suggestions for DAMEE data usage include (1) ocean modeling and data assimilation studies, (2) diagnostic and theoretical studies, and (3) comparisons with locally detailed observations.

  13. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    Science.gov (United States)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

    Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by the unprecedented changes in water levels seen over the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long-term planning in the Great Lakes region.
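    The component-wise debiasing idea can be illustrated with empirical quantile mapping, one common bias-correction method; the component names and the choice of quantile mapping here are illustrative assumptions, not necessarily the procedures used in the study:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: locate each future model value within the
    historical model distribution, then read the same quantile off the
    observed distribution."""
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    return np.quantile(obs_hist, quantiles)

def debias_components(components):
    """Debias each hydrological component (precipitation, evaporation,
    runoff) separately, then recombine into a net basin supply."""
    corrected = {name: quantile_map(m_hist, o_hist, m_fut)
                 for name, (m_hist, o_hist, m_fut) in components.items()}
    return corrected["precip"] - corrected["evap"] + corrected["runoff"]
```

    Correcting precipitation, evaporation, and runoff separately, rather than the net supply as a whole, is what lets a correction respect the distinct behavior of each component.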

  14. Sensitivity of the urban airshed model to mixing height profiles

    Energy Technology Data Exchange (ETDEWEB)

    Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W. [New York State Dept. of Environmental Conservation, Albany, NY (United States)

    1994-12-31

    The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer, or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile, for a high-ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations than the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to the differences in the shape of the mixing height profiles and their rate of growth during the morning hours, when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NO{sub x}-focused controls produce a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focused controls produce a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.

  15. Sensitivity Analysis on LOCCW of Westinghouse typed Reactors Considering WOG2000 RCP Seal Leakage Model

    International Nuclear Information System (INIS)

    Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won

    2015-01-01

    In this paper, we focus on risk insights for Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). As we reflected the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', instead of the old version, RCP seal integrity became even more important for Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full-power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full-power PSA models, we have tried to standardize the methodologies for CCF (Common Cause Failure) and HRA (Human Reliability Analysis), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experience, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Power Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to the PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW for the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree for LOCCW considering all scenarios of RCP seal failure. We also performed HRA based on the T/H analysis, using the leakage rates for each scenario. We found that HRA was a sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.

  16. Sensitivity Analysis on LOCCW of Westinghouse typed Reactors Considering WOG2000 RCP Seal Leakage Model

    Energy Technology Data Exchange (ETDEWEB)

    Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, we focus on risk insights for Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). As we reflected the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', instead of the old version, RCP seal integrity became even more important for Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full-power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full-power PSA models, we have tried to standardize the methodologies for CCF (Common Cause Failure) and HRA (Human Reliability Analysis), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experience, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Power Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to the PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW for the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree for LOCCW considering all scenarios of RCP seal failure. We also performed HRA based on the T/H analysis, using the leakage rates for each scenario. We found that HRA was a sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.

  17. Sensitivity analysis of an individual-based model for simulation of influenza epidemics.

    Directory of Open Access Journals (Sweden)

    Elaine O Nsoesie

    Full Text Available Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic. In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions with demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across the social networks investigated in this study; and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty.
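    The one-parameter-at-a-time design can be sketched on a compartmental surrogate; the study itself used an individual-based network model, so the SIR stand-in, parameter values, and perturbation sizes below are illustrative assumptions:

```python
import numpy as np

def sir_outcomes(beta, gamma, days=300, i0=1e-4):
    """Run a simple discrete-time SIR model and return the three
    public-health measures used in the study: time to peak, peak
    infected proportion, and total attack rate."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak, t_peak = i, 0
    for t in range(1, days + 1):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        if i > peak:
            peak, t_peak = i, t
    return t_peak, peak, r + i  # attack rate = 1 - final susceptibles

def one_at_a_time(base, deltas):
    """Perturb each parameter by +/- delta, one at a time, and record
    the change in attack rate relative to the baseline run."""
    _, _, base_attack = sir_outcomes(**base)
    effects = {}
    for name, d in deltas.items():
        hi = dict(base); hi[name] = base[name] + d
        lo = dict(base); lo[name] = base[name] - d
        effects[name] = sir_outcomes(**hi)[2] - sir_outcomes(**lo)[2]
    return base_attack, effects
```

    The sign and magnitude of each entry in `effects` is what lets the parameters be ranked by influence on the chosen outcome measure.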

  18. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

    Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally under eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit form. • Parameter sensitivity is discussed for the different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed using a one-dimensional (1D) modeling approach. The dynamic effect of the double layer capacitance on the electrochemical domain and the dynamic effect of the thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time constant expressions for the different dynamic phenomena in the SOFC are given and discussed in detail. A parameter sensitivity study has also been performed using the statistical Multi Parameter Sensitivity Analysis (MPSA) method, in order to investigate the impact of the parameters on the modeling accuracy.
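    The MPSA method mentioned above follows a generic recipe: sample parameters by Monte Carlo, split the runs into acceptable and unacceptable by an objective criterion, and rank each parameter by the Kolmogorov-Smirnov distance between the two groups' parameter distributions. The sketch below implements that generic recipe, not the authors' code; the model, bounds, and scoring rule are illustrative:

```python
import numpy as np

def mpsa(model, bounds, observed, n=2000, seed=0):
    """Multi Parameter Sensitivity Analysis sketch:
    1. sample each parameter uniformly within its bounds,
    2. run the model and score each sample against the observation,
    3. split samples into acceptable/unacceptable at the median score,
    4. rank parameters by the KS distance between the two groups'
       parameter distributions (larger distance = more sensitive)."""
    rng = np.random.default_rng(seed)
    names = list(bounds)
    samples = {p: rng.uniform(*bounds[p], n) for p in names}
    errors = np.array([
        (model(**{p: samples[p][k] for p in names}) - observed) ** 2
        for k in range(n)
    ])
    acceptable = errors <= np.median(errors)
    sensitivity = {}
    for p in names:
        a = np.sort(samples[p][acceptable])
        b = np.sort(samples[p][~acceptable])
        grid = np.linspace(*bounds[p], 256)
        cdf_a = np.searchsorted(a, grid) / len(a)
        cdf_b = np.searchsorted(b, grid) / len(b)
        sensitivity[p] = float(np.max(np.abs(cdf_a - cdf_b)))
    return sensitivity
```

    A parameter whose value barely matters produces nearly identical distributions in the two groups, so its KS distance stays near the sampling noise floor.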

  19. A model to estimate insulin sensitivity in dairy cows

    OpenAIRE

    Holtenius, Paul; Holtenius, Kjell

    2007-01-01

    Abstract Impairment of the insulin regulation of energy metabolism is considered to be an etiologic key component of metabolic disturbances. Methods for studies of insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an i...

  20. Model of an Evaporating Drop Experiment

    Science.gov (United States)

    Rodriguez, Nicolas

    2017-11-01

    A computational model of an experimental procedure to measure vapor distributions surrounding sessile drops is developed to evaluate the uncertainty in the experimental results. Methanol, which is expected to have predominantly diffusive vapor transport, is chosen as a validation test for our model. The experimental process first uses a Fourier transform infrared spectrometer to measure the absorbance along lines passing through the vapor cloud. Since the measurement contains some errors, our model allows adding random noises to the computational integrated absorbance to mimic this. Then the resulting data are interpolated before passing through a computed tomography routine to generate the vapor distribution. Next, the gradients of the vapor distribution are computed along a given control volume surrounding the drop so that the diffusive flux can be evaluated as the net rate of diffusion out of the control volume. Our model of methanol evaporation shows that the accumulated errors of the whole experimental procedure affect the diffusive fluxes at different control volumes and are sensitive to how the noisy data of integrated absorbance are interpolated. This indicates the importance of investigating a variety of data fitting methods to choose which is best to present the data. Trinity University Mach Fellowship.

  1. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    Science.gov (United States)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air Pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as needed for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and to improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from it for seven observation scenarios and three instrument systems.

  2. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimates of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that, in addition to vapour pressure, the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes, degradation and uptake into the plant, become more important. Copyright © 2017 Elsevier B.V. All rights reserved.
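    The competing loss processes, ventilation to the outside and transformation in the air, can be illustrated with a well-mixed box model. This is a sketch of the general mass balance, not the PEARL implementation; the symbols, units, and function name are assumptions:

```python
import math

def greenhouse_concentration(flux, volume, k_vent, k_trans, t):
    """Air concentration in a well-mixed greenhouse at time t (h) given a
    constant volatilisation flux (mg/h), with first-order losses by
    ventilation to the outside and transformation in the air.
    Solves dC/dt = flux/volume - (k_vent + k_trans) * C, with C(0) = 0."""
    k = k_vent + k_trans
    steady = flux / (volume * k)
    return steady * (1.0 - math.exp(-k * t))
```

    At steady state C = flux / (volume * (k_vent + k_trans)), so when ventilation dominates, doubling the ventilation rate roughly halves the indoor concentration, consistent with the high sensitivity to the ventilation rate on the day of application.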

  3. Orientation sensitive deformation in Zr alloys: experimental and modeling studies

    International Nuclear Information System (INIS)

    Srivastava, D.; Keskar, N.; Manikrishna, K.V.; Dey, G.K.; Jha, S.K.; Saibaba, N.

    2016-01-01

    Zirconium alloys are used for fuel cladding and other structural components in pressurised heavy water reactors (PHWRs). Currently there is much interest in developing alloys for structural components for higher-temperature reactor operation. There is also a need to develop cladding material with better corrosion and mechanical properties for higher and extended burn-up applications. The performance of the cladding material is primarily influenced by the microstructural features of the material, such as the constituent phases and their morphology, precipitate characteristics, the nature of defects, etc. Therefore, the microstructure is tailored to the performance requirement through controlled additions of alloying elements and thermo-mechanical treatments. In order to obtain the desired microstructure, it is important to know the deformation behaviour of the material. Orientation-dependent deformation behaviour was studied in Zr using a combination of experimental and modeling (both discrete and atomistic dislocation dynamics) methods. Under conditions of plane strain deformation, it was observed that single-phase Zr showed a significant degree of deformation heterogeneity based on local orientations. Discrete dislocation dynamics simulations incorporating multiple slip systems captured the orientation-sensitive deformation. MD simulations, on the other hand, brought out the fundamental difference between the various crystallographic orientations in determining the nucleation stress for the dislocations. The deformed structure has been characterized using X-ray, electron and neutron diffraction techniques. The various operating deformation mechanisms will be discussed in this presentation. (author)

  4. Mass hierarchy sensitivity of medium baseline reactor neutrino experiments with multiple detectors

    Directory of Open Access Journals (Sweden)

    Hong-Xin Wang

    2017-05-01

    Full Text Available We report the neutrino mass hierarchy (MH determination of medium baseline reactor neutrino experiments with multiple detectors, where the sensitivity of measuring the MH can be significantly improved by adding a near detector. Then the impact of the baseline and target mass of the near detector on the combined MH sensitivity has been studied thoroughly. The optimal selections of the baseline and target mass of the near detector are ∼12.5 km and ∼4 kton respectively for a far detector with the target mass of 20 kton and the baseline of 52.5 km. As typical examples of future medium baseline reactor neutrino experiments, the optimal location and target mass of the near detector are selected for the specific configurations of JUNO and RENO-50. Finally, we discuss distinct effects of the reactor antineutrino energy spectrum uncertainty for setups of a single detector and double detectors, which indicate that the spectrum uncertainty can be well constrained in the presence of the near detector.

  5. Mass hierarchy sensitivity of medium baseline reactor neutrino experiments with multiple detectors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hong-Xin, E-mail: hxwang@iphy.me [Department of Physics, Nanjing University, Nanjing 210093 (China); Zhan, Liang; Li, Yu-Feng; Cao, Guo-Fu [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Chen, Shen-Jian [Department of Physics, Nanjing University, Nanjing 210093 (China)

    2017-05-15

    We report the neutrino mass hierarchy (MH) determination of medium baseline reactor neutrino experiments with multiple detectors, where the sensitivity of measuring the MH can be significantly improved by adding a near detector. Then the impact of the baseline and target mass of the near detector on the combined MH sensitivity has been studied thoroughly. The optimal selections of the baseline and target mass of the near detector are ∼12.5 km and ∼4 kton respectively for a far detector with the target mass of 20 kton and the baseline of 52.5 km. As typical examples of future medium baseline reactor neutrino experiments, the optimal location and target mass of the near detector are selected for the specific configurations of JUNO and RENO-50. Finally, we discuss distinct effects of the reactor antineutrino energy spectrum uncertainty for setups of a single detector and double detectors, which indicate that the spectrum uncertainty can be well constrained in the presence of the near detector.

  6. Studying the physics potential of long-baseline experiments in terms of new sensitivity parameters

    International Nuclear Information System (INIS)

    Singh, Mandip

    2016-01-01

    We investigate the physics opportunities to constrain the leptonic CP-violation phase δ_CP through numerical analysis of the working neutrino oscillation probability parameters, in the context of long-baseline experiments. A numerical analysis of two parameters, the “transition probability δ_CP phase sensitivity parameter (A^M)” and the “CP-violation probability δ_CP phase sensitivity parameter (A^CP)”, as functions of beam energy and/or baseline has been carried out. It is an elegant technique to broadly analyze different experiments to constrain the δ_CP phase and also to investigate the mass hierarchy in the leptonic sector. Positive and negative values of the parameter A^CP, corresponding to either hierarchy in specific beam energy ranges, could be a very promising way to explore the mass hierarchy and the δ_CP phase. The keys to more robust bounds on the δ_CP phase are improvements of the detection techniques involved, to explore lower energies and relatively long baseline regions with better experimental accuracy.

  7. Projected WIMP Sensitivity of the LUX-ZEPLIN (LZ) Dark Matter Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Akerib, D.S.; et al.

    2018-02-16

    LUX-ZEPLIN (LZ) is a next generation dark matter direct detection experiment that will operate 4850 feet underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota, USA. Using a two-phase xenon detector with an active mass of 7 tonnes, LZ will search primarily for low-energy interactions with Weakly Interacting Massive Particles (WIMPs), which are hypothesized to make up the dark matter in our galactic halo. In this paper, the projected WIMP sensitivity of LZ is presented based on the latest background estimates and simulations of the detector. For a 1000 live day run using a 5.6 tonne fiducial mass, LZ is projected to exclude at 90% confidence level spin-independent WIMP-nucleon cross sections above $1.6 \times 10^{-48}$ cm$^{2}$ for a 40 $\mathrm{GeV}/c^{2}$ mass WIMP. Additionally, a $5\sigma$ discovery potential is projected, reaching cross sections below the existing and projected exclusion limits of similar experiments that are currently operating. For spin-dependent WIMP-neutron(-proton) scattering, a sensitivity of $2.7 \times 10^{-43}$ cm$^{2}$ ($8.1 \times 10^{-42}$ cm$^{2}$) for a 40 $\mathrm{GeV}/c^{2}$ mass WIMP is expected. With construction well underway, LZ is on track for underground installation at SURF in 2019 and will start collecting data in 2020.
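    The way such projections scale with exposure can be illustrated with a background-free Poisson counting limit. This is a deliberate simplification (the LZ projection itself uses a profile-likelihood analysis over detailed background models), and `rate_per_sigma`, the expected event count per unit cross section for unit exposure, is an assumed detector-specific constant:

```python
import math

def sigma_upper_limit(exposure, rate_per_sigma, cl=0.90):
    """Background-free Poisson upper limit: the largest expected count mu
    compatible with observing zero events at confidence level CL satisfies
    exp(-mu) = 1 - CL, i.e. mu = -ln(1 - CL) (about 2.303 at 90% CL).
    The cross-section limit then scales inversely with exposure."""
    mu_limit = -math.log(1.0 - cl)
    return mu_limit / (rate_per_sigma * exposure)
```

    The inverse scaling with exposure is why a large fiducial mass and a long live time (5.6 tonnes over 1000 live days for LZ) push exclusion limits down.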

  8. Sex and smoking sensitive model of radon induced lung cancer

    International Nuclear Information System (INIS)

    Zhukovsky, M.; Yarmoshenko, I.

    2006-01-01

    Radon and radon progeny inhalation exposure are recognized to cause lung cancer. The only strong evidence of the health effects of radon exposure came from epidemiological studies among underground miners. No single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were developed exclusively by extrapolation from miners' data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained from a systematic analysis of pooled data from single residential radon studies. Two such analyses have recently been published. Available new and previous data from epidemiological studies of workers and of the general population exposed to radon and other sources of ionizing radiation allow filling gaps in knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on data on external radiation exposure in Hiroshima and Nagasaki, and on data on external exposure at the Mayak nuclear facility. For non-corrected data from the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source is in very good agreement with the LSS study and the Mayak plutonium workers data. Analysis of the corrected data of the pooled studies showed little influence of sex on the ERR value. The most probable cause of this effect is the change in the men/women and smokers/nonsmokers ratios in the corrected data sets in the North American study. More correct

  9. Sex and smoking sensitive model of radon induced lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhukovsky, M.; Yarmoshenko, I. [Institute of Industrial Ecology of Ural Branch of Russian Academy of Sciences, Yekaterinburg (Russian Federation)

    2006-07-01

    Radon and radon progeny inhalation exposure are recognized to cause lung cancer. The only strong evidence of the health effects of radon exposure came from epidemiological studies among underground miners. No single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were developed exclusively by extrapolation from miners' data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained from a systematic analysis of pooled data from single residential radon studies. Two such analyses have recently been published. Available new and previous data from epidemiological studies of workers and of the general population exposed to radon and other sources of ionizing radiation allow filling gaps in knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on data on external radiation exposure in Hiroshima and Nagasaki, and on data on external exposure at the Mayak nuclear facility. For non-corrected data from the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source is in very good agreement with the LSS study and the Mayak plutonium workers data. Analysis of the corrected data of the pooled studies showed little influence of sex on the ERR value. The most probable cause of this effect is the change in the men/women and smokers/nonsmokers ratios in the corrected data sets in the North American study. More correct

  10. Improving axion detection sensitivity in high purity germanium detector based experiments

    Science.gov (United States)

    Xu, Wenqin; Elliott, Steven

    2015-04-01

    Thanks to their excellent energy resolution and low energy threshold, high purity germanium (HPGe) crystals are widely used in low background experiments searching for neutrinoless double beta decay, e.g. the MAJORANA DEMONSTRATOR and GERDA experiments, and for low mass dark matter, e.g. the CDMS and EDELWEISS experiments. A particularly interesting candidate for low mass dark matter is the axion, which arises from the Peccei-Quinn solution to the strong CP problem and has been searched for in many experiments. Due to axion-photon coupling, the postulated solar axions could coherently convert to photons via the Primakoff effect in periodic crystal lattices, such as those found in HPGe crystals. The conversion rate depends on the angle between the incident axions and the crystal lattice, so knowledge of the HPGe crystal axes is important. In this talk, we will present our efforts to improve the HPGe experimental sensitivity to axions by considering the axis orientations in multiple HPGe crystals simultaneously. We acknowledge the support of the U.S. Department of Energy through the LANL/LDRD Program.

  11. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM experience at LOTUS

    International Nuclear Information System (INIS)

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1989-01-01

    In recent years, the LOTUS fusion blanket facility at IGA-EPF in Lausanne provided a series of irradiation experiments with the Lithium Blanket Module (LBM). The LBM has both realistic fusion blanket materials and a realistic configuration. It is approximately an 80-cm cube, and the breeding material is Li2. Using as the D-T neutron source the Haefely Neutron Generator (HNG), with an intensity of about 5·10^12 n/s, a series of experiments with the bare LBM, as well as with the LBM preceded by Pb, Be and ThO2 multipliers, were carried out. In a recent joint Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference Sn-transport code ONEDANT, the two-dimensional finite element Sn-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. For the nucleonic transport calculations, three 187-neutron-group libraries are presently available: MATXS8A and MATXS8F, based on ENDF/B-V evaluations, and MAT187, based on JEF/EFF evaluations. COVFILS-2, a 74-group library of neutron cross-sections, scattering matrices and covariances, is the data source for SENSIBL; the 74-group structure of COVFILS-2 is a subset of the Los Alamos 187-group structure. Within the framework of the present work a complete set of forward and adjoint two-dimensional TRISM calculations was performed for the bare, as well as for the Pb- and Be-preceded, LBM using the MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed

  12. Argonne Bubble Experiment Thermal Model Development III

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-11

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development” and “Argonne Bubble Experiment Thermal Model Development II”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at beam power levels between 6 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was recorded. The previous report described the Monte-Carlo N-Particle (MCNP) calculations and Computational Fluid Dynamics (CFD) analysis performed on the as-built solution vessel geometry. The CFD simulations in the current analysis were performed using Ansys Fluent, Ver. 17.2. The same power profiles determined from MCNP calculations in earlier work were used for the 12 and 15 kW simulations. The primary goal of the current work is to calculate the temperature profiles for the 12 and 15 kW cases using reasonable estimates for the gas generation rate, based on images of the bubbles recorded during the irradiations. Temperature profiles resulting from the CFD calculations are compared to experimental measurements.

  13. Multivariate Models for Prediction of Skin Sensitization Hazard in Humans

    Science.gov (United States)

    One of ICCVAM’s highest priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary for a substance to elicit a skin sensitization reaction suggests that no single alternative me...

  14. Model sensitivity studies of the decrease in atmospheric carbon tetrachloride

    Directory of Open Access Journals (Sweden)

    M. P. Chipperfield

    2016-12-01

    Full Text Available Carbon tetrachloride (CCl4 is an ozone-depleting substance, which is controlled by the Montreal Protocol and whose atmospheric abundance is decreasing. However, the current observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74 % of total, but a reported 10 % uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9 % of total is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17 % of total has the largest impact on modelled CCl4 decay due to its sizeable contribution to CCl4 loss and large lifetime uncertainty range (147 to 241 years. With an assumed CCl4 emission rate of 39 Gg year−1, the reference simulation with the best estimate of loss processes still underestimates the observed CCl4 (i.e. overestimates its decay over the past 2 decades, though to a smaller extent than previous studies. Changes to the rate of CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year−1. Further progress in constraining the CCl4 budget is partly limited by
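The budget arithmetic in the abstract (a 74 % photolysis sink, 17 % ocean sink and 9 % soil sink, with emissions of roughly 39 Gg year−1) can be sketched as a one-box model in which partial lifetimes combine in parallel. The partial-lifetime values below are illustrative assumptions chosen only so that the sink fractions roughly match; they are not numbers from the study.

```python
# Hypothetical one-box model of the atmospheric CCl4 budget.
# Partial lifetimes (years) are illustrative, chosen so that the loss
# fractions roughly match the abstract (74% photolysis, 17% ocean, 9% soil).
tau_photolysis = 43.2
tau_ocean = 188.0
tau_soil = 356.0

# Sinks acting in parallel combine as 1/tau = sum(1/tau_i)
tau_total = 1.0 / (1.0 / tau_photolysis + 1.0 / tau_ocean + 1.0 / tau_soil)

def simulate(burden0, emissions, years, dt=0.1):
    """Forward-Euler integration of dB/dt = E - B/tau_total."""
    b = burden0
    for _ in range(round(years / dt)):
        b += dt * (emissions - b / tau_total)
    return b

# With constant emissions the burden relaxes toward E * tau_total (Gg).
steady = 39.0 * tau_total
b_end = simulate(burden0=2000.0, emissions=39.0, years=500)
print(round(tau_total, 1), round(steady), round(b_end))
```

Because the sinks act in parallel and photolysis dominates, moving the ocean-uptake lifetime within its quoted 147-241 year range shifts the total lifetime, and hence the modelled decay, only modestly — consistent with the sensitivity ranking reported above.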

  15. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    The scatterometer is an instrument which provides all-day, large-scale wind field information, and its application, especially to wind retrieval, has always attracted meteorologists. Certain factors cause a large direction error, so it is important to find out where the error mainly comes from: does it result mainly from the background field, from the normalized radar cross-section (NRCS), or from the wind retrieval method? First, based on SDP2.0, the simulated ‘true’ NRCS is calculated from the simulated ‘true’ wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated ‘true’ wind under a non-divergence constraint. Likewise, the simulated ‘measured’ NRCS is formed by adding noise to the simulated ‘true’ NRCS. Then sensitivity experiments are carried out, and the new regularization method is used to improve ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than to noise in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially in the case of large background error. This work provides important information and a new method for wind retrieval with real data.
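The Tikhonov idea used here can be illustrated with a deliberately simplified scalar sketch (the operator values, noise and background bias below are invented for illustration and are unrelated to the actual SDP2.0 processing): the retrieval minimizes a misfit to the observations plus a penalty pulling the solution toward the background field, with the regularization parameter setting the balance between the two.

```python
# Scalar sketch of Tikhonov-regularized retrieval: blend noisy observations
# y_i = a_i * x with a background guess x_bg, weighted by parameter lam.
def tikhonov_scalar(a, y, x_bg, lam):
    # minimizes sum((a_i*x - y_i)**2) + lam**2 * (x - x_bg)**2
    num = sum(ai * yi for ai, yi in zip(a, y)) + lam**2 * x_bg
    den = sum(ai * ai for ai in a) + lam**2
    return num / den

x_true = 8.0                       # "true" wind speed (m/s), invented
a = [1.0, 0.5, 2.0, 1.5]           # linearized observation operator, invented
y = [ai * x_true + e for ai, e in zip(a, [0.2, -0.1, 0.3, -0.2])]  # noisy obs
x_bg = 9.0                         # biased background field, invented

x_small = tikhonov_scalar(a, y, x_bg, lam=0.1)    # trusts the observations
x_large = tikhonov_scalar(a, y, x_bg, lam=100.0)  # trusts the background
print(round(x_small, 2), round(x_large, 2))
```

Choosing the regularization parameter is exactly the trade-off the abstract describes: a small value recovers the observation-driven solution, a large one collapses onto the (possibly erroneous) background.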

  16. The Geodynamo: Models and supporting experiments

    International Nuclear Information System (INIS)

    Mueller, U.; Stieglitz, R.

    2003-03-01

    The magnetic field is a characteristic feature of our planet Earth. It shelters the biosphere against particle radiation from space and, by its direction, offers orientation to living creatures. The question of its origin has challenged scientists to find sound explanations. Major progress has been achieved during the last two decades in developing dynamo models and performing corroborating laboratory experiments to explain convincingly the origin of the Earth's magnetic field. The article reports some significant steps towards our present understanding of this subject and outlines in particular the relevant experiments, which either substantiate crucial elements of the self-excitation of magnetic fields or demonstrate dynamo action completely. The authors are aware that they have not addressed all aspects of geomagnetic studies; rather, they have selected the material from the huge amount of literature so as to motivate the recently growing interest in experimental dynamo research. (orig.)

  17. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    Energy Technology Data Exchange (ETDEWEB)

    Pastore, Giovanni, E-mail: Giovanni.Pastore@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Swiler, L.P., E-mail: LPSwile@sandia.gov [Optimization and Uncertainty Quantification, Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185-1318 (United States); Hales, J.D., E-mail: Jason.Hales@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Novascone, S.R., E-mail: Stephen.Novascone@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Perez, D.M., E-mail: Danielle.Perez@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Spencer, B.W., E-mail: Benjamin.Spencer@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Luzzi, L., E-mail: Lelio.Luzzi@polimi.it [Politecnico di Milano, Department of Energy, Nuclear Engineering Division, via La Masa 34, I-20156 Milano (Italy); Van Uffelen, P., E-mail: Paul.Van-Uffelen@ec.europa.eu [European Commission, Joint Research Centre, Institute for Transuranium Elements, Hermann-von-Helmholtz-Platz 1, D-76344 Karlsruhe (Germany); Williamson, R.L., E-mail: Richard.Williamson@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States)

    2015-01-15

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the individual parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
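The kind of forward uncertainty propagation described above can be sketched in a few lines. This toy uses a Booth-type diffusional release estimate with invented nominal values, not the BISON/DAKOTA setup: an uncertain multiplier on the diffusion coefficient, spanning two orders of magnitude, is sampled and pushed through the release model to see the output spread it induces.

```python
import math, random

# Toy uncertainty-propagation sketch (illustrative values, not BISON/DAKOTA):
# sample an uncertain diffusion-coefficient multiplier and propagate it
# through a Booth-type release estimate f ~ (4/a)*sqrt(D*t/pi), valid for
# small release fractions.
random.seed(1)

def release_fraction(D, t, a):
    return min(1.0, 4.0 / a * math.sqrt(D * t / math.pi))

D_nom, t, a = 1e-21, 3.15e7, 5e-6   # m^2/s, s (~1 year), m (grain radius)
samples = [release_fraction(D_nom * 10 ** random.uniform(-1, 1), t, a)
           for _ in range(2000)]

f_nom = release_fraction(D_nom, t, a)
spread = max(samples) / min(samples)
print(round(f_nom, 3), spread > 2.0)
```

Since release scales like the square root of D in this toy, a factor-100 band on the parameter maps to roughly a factor-10 band on the output — a simple illustration of how parameter uncertainty translates into the factor-of-2-or-more prediction deviations discussed above.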

  18. The 'OMITRON' and 'MODEL OMITRON' proposed experiments

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-12-01

    In the present paper the main features of the OMITRON and MODEL OMITRON proposed high field tokamaks are illustrated. Of the two, OMITRON is an ambitious experiment aimed at attaining plasma burning conditions. Its key physics issues are discussed, and a comparison is carried out with the corresponding physics features of ignition experiments such as IGNITOR and ITER. Chief asset, and chief challenge, in both OMITRON and MODEL OMITRON is the conspicuous 20 tesla toroidal field value on the plasma axis. The advanced engineering features which permit such a reward in terms of toroidal magnet performance are discussed in convenient depth and detail. As for the small, propaedeutic device MODEL OMITRON, among its goals one must rank the testing in vivo of key engineering issues that are vital for the larger and more expensive parent device. Besides that, however, as indicated by ad hoc scoping studies, the smaller machine is found capable also of a number of quite interesting physics investigations in its own right

  19. Argonne Bubble Experiment Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  20. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) in which different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions, as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  1. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  2. Use of Data Denial Experiments to Evaluate ESA Forecast Sensitivity Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Zack, J; Natenberg, E J; Knowe, G V; Manobianco, J; Waight, K; Hanley, D; Kamath, C

    2011-09-13

    wind speed and vertical temperature difference. Ideally, the data assimilation scheme used in the experiments would have been based upon an ensemble Kalman filter (EnKF) similar to the ESA method used to diagnose the Mid-Columbia Basin sensitivity patterns in the previous studies. However, the use of an EnKF system at high resolution is impractical because of the very high computational cost. Thus, it was decided to use three-dimensional variational data assimilation, which is less computationally intensive and more economically practical for generating operational forecasts. There are two tasks in the current project effort designed to validate the ESA observational system deployment approach in order to move closer to the overall goal: (1) perform an Observing System Experiment (OSE) using a data denial approach, which is the focus of this task and report; and (2) conduct a set of Observing System Simulation Experiments (OSSE) for the Mid-Columbia Basin region. The results of the latter task are presented in a separate report. The objective of the OSE task involves validating the ESA-MOOA results from the previous sensitivity studies for the Mid-Columbia Basin by testing the impact of existing meteorological tower measurements on the 0- to 6-hour-ahead 80-m wind forecasts at the target locations. The testing of the ESA-MOOA method used a combination of data assimilation techniques and data denial experiments to accomplish the task objective.

  3. Methane emissions from rice paddies. Experiments and modelling

    International Nuclear Information System (INIS)

    Van Bodegom, P.M.

    2000-01-01

    This thesis describes model development and experimentation aimed at understanding and predicting methane (CH4) emissions from rice paddies. The large spatial and temporal variability in CH4 emissions and the dynamic non-linear relationships between the processes underlying CH4 emissions impair the applicability of empirical relations. Mechanistic concepts are therefore the starting point of analysis throughout the thesis. The process of CH4 production was investigated by soil slurry incubation experiments at different temperatures and with additions of different electron donors and acceptors. Temperature influenced conversion rates and the competitiveness of microorganisms. The experiments were used to calibrate and validate a mechanistic model of CH4 production that describes competition for acetate and H2/CO2, inhibition effects and chemolithotrophic reactions. The redox sequence leading eventually to CH4 production was well predicted by the model, calibrating only the maximum conversion rates. Gas transport through paddy soil and rice plants was quantified by experiments in which the transport of SF6 was monitored continuously by photoacoustics. A mechanistic model of gas transport in a flooded rice system based on diffusion equations was validated by these experiments and could explain why most gases are released via plant-mediated transport. Variability in root distribution led to highly variable gas transport. Experiments showed that CH4 oxidation in the rice rhizosphere was oxygen (O2) limited. Rice rhizospheric O2 consumption was dominated by chemical iron oxidation and by heterotrophic and methanotrophic respiration. The most abundant methanotrophs and heterotrophs were isolated and kinetically characterised. Based upon these experiments it was hypothesised that CH4 oxidation mainly occurred at microaerophilic, low-acetate conditions not very close to the root surface. A mechanistic rhizosphere model that combined production and consumption of O2, carbon and iron

  4. Numerical model analysis of the shaded dye-sensitized solar cell module

    International Nuclear Information System (INIS)

    Chen Shuanghong; Weng Jian; Huang Yang; Zhang Changneng; Hu Linhua; Kong Fantai; Wang Lijun; Dai Songyuan

    2010-01-01

    On the basis of a numerical model analysis, the photovoltaic performance of a partially shadowed dye-sensitized solar cell (DSC) module is investigated. In this model, the electron continuity equation and the Butler–Volmer equation are applied, considering electron transfer via the transparent conducting oxide (TCO)/electrolyte interface in the shaded DSC. The simulation results based on this model are consistent with experimental results. The influence of the shading ratio, the connection type and the irradiance intensity has been analysed through experiments and numerical simulation. It is found that the performance of the DSC declines markedly with an increase in the shaded area, due to electron recombination at the TCO/electrolyte interface, and that the output power loss of shadowed DSC modules connected in series is much larger than that of modules in parallel, due to the 'breakdown' occurring at the TCO/electrolyte interface. The impact of shading on DSC performance becomes stronger with increasing irradiation intensity.
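The series-versus-parallel contrast reported above can be illustrated with a deliberately crude sketch (linear photocurrent, fixed operating voltage, all numbers invented; the paper's actual model solves the electron continuity and Butler–Volmer equations): in a series string the most-shaded cell limits the common current, while in parallel each cell contributes its own current at the common voltage.

```python
# Toy illustration of partial shading in series vs parallel connection
# (crude linear photocurrent model with invented numbers, not the paper's
# numerical DSC model, which also treats TCO/electrolyte recombination).
def cell_current(irradiance):
    return 0.002 * irradiance          # photocurrent (A) at operating voltage

v_cell = 0.6                           # operating voltage per cell (V), invented
irr = [1000, 1000, 1000, 200]          # one of four cells 80% shaded (W/m^2)

# Series: the weakest cell limits the common current through all cells.
p_series = min(cell_current(g) for g in irr) * v_cell * len(irr)
# Parallel: each cell contributes independently at the common voltage.
p_parallel = sum(cell_current(g) * v_cell for g in irr)
print(round(p_series, 2), round(p_parallel, 2))
```

Even this caricature reproduces the qualitative finding: the series string loses far more output than the parallel one for the same shaded area.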

  5. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    Science.gov (United States)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

    During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement with the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.

  6. A proposed experiment on ball lightning model

    International Nuclear Information System (INIS)

    Ignatovich, Vladimir K.; Ignatovich, Filipp V.

    2011-01-01

    Highlights: → We propose to put a glass sphere inside an excited gas. → Then to launch a light ray inside the glass in a whispering-gallery mode. → If the light is resonant with the gas excitation, it will be amplified at every reflection. → Within milliseconds the light in the glass will be amplified and will melt the glass. → A liquid shell held together by electrostriction forces is the ball lightning model. -- Abstract: We propose an experiment for strong light amplification at multiple total reflections from an active gaseous medium.

  7. Multiaxial behavior of foams - Experiments and modeling

    Science.gov (United States)

    Maheo, Laurent; Guérard, Sandra; Rio, Gérard; Donnard, Adrien; Viot, Philippe

    2015-09-01

    The behaviour of cellular materials is strongly related to the pressure level inside the material. It is therefore important to use experiments which can highlight (i) the pressure-volume behaviour and (ii) the shear-shape behaviour at different pressure levels. The authors propose to use hydrostatic compressive, shear and combined pressure-shear tests to determine cellular material behaviour. Finite element modelling must take these behaviour specificities into account. The authors chose to use a behaviour law with hyperelastic, viscous and hysteretic contributions. Specific developments have been performed on the hyperelastic contribution by separating the spherical and deviatoric parts, to take into account the volume-change and shape-change characteristics of cellular materials.

  8. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours per run); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining basis set expansion, meta-modelling and Sobol' indices is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacement values and one of high values.
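The basis-set-expansion step can be sketched on synthetic data (illustrative only, not the La Frasse model): each model run yields a displacement time series, power iteration on the sample covariance extracts the dominant temporal mode, and each long-running simulation is then summarized by a single scalar score on that mode before any Sobol' indices are computed on a meta-model.

```python
import math, random

# Synthetic "simulations": each run is a time series built from a dominant
# temporal mode plus a minor one, with run-to-run random amplitudes.
random.seed(2)
n_t, n_runs = 50, 200
mode1 = [math.sin(2 * math.pi * k / n_t) for k in range(n_t)]  # dominant mode
mode2 = [math.cos(6 * math.pi * k / n_t) for k in range(n_t)]  # minor mode

runs = []
for _ in range(n_runs):
    a, b = random.gauss(0, 3), random.gauss(0, 0.3)
    runs.append([a * m1 + b * m2 for m1, m2 in zip(mode1, mode2)])

# Center the runs, then power-iterate on the sample covariance to find
# the leading principal component (the dominant mode of variation).
mean = [sum(r[k] for r in runs) / n_runs for k in range(n_t)]
centered = [[r[k] - mean[k] for k in range(n_t)] for r in runs]
v = [random.gauss(0, 1) for _ in range(n_t)]
for _ in range(100):
    scores = [sum(c[k] * v[k] for k in range(n_t)) for c in centered]
    v = [sum(s * c[k] for s, c in zip(scores, centered)) for k in range(n_t)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]

# The recovered component should align with the dominant mode.
m1_norm = math.sqrt(sum(x * x for x in mode1))
cosine = abs(sum(v[k] * mode1[k] for k in range(n_t))) / m1_norm
print(round(cosine, 3))
```

Each run's `scores` entry is the scalar summary on which a cheap meta-model (and hence Sobol' indices) can then be fitted, replacing the hours-long simulator.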

  9. The identification of model effective dimensions using global sensitivity analysis

    International Nuclear Information System (INIS)

    Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang

    2011-01-01

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
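The first-order and total Sobol' indices mentioned above can be estimated by brute-force Monte Carlo for a toy function (a minimal sketch, not necessarily the estimator used in the paper): for f = x1 + 0.5·x2 with a third inert input, the analytic values are S = (0.8, 0.2, 0), and because the function is additive S_i = T_i, i.e. effective dimension one in the superposition sense, while the near-zero total index of x3 flags it as droppable in the truncation sense.

```python
import random

# Saltelli-style Monte Carlo estimate of first-order (S_i) and total (T_i)
# Sobol' indices for a toy additive function with one inert input.
random.seed(0)

def f(x):
    return x[0] + 0.5 * x[1]       # x[2] is inert

d, n = 3, 20000
A = [[random.random() for _ in range(d)] for _ in range(n)]
B = [[random.random() for _ in range(d)] for _ in range(n)]
fA = [f(a) for a in A]
fB = [f(b) for b in B]
mean = sum(fA) / n
var = sum((y - mean) ** 2 for y in fA) / n

S, T = [], []
for i in range(d):
    # AB_i: rows of A with column i replaced by the corresponding row of B
    fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    S.append(sum(yb * (yi - ya) for yb, yi, ya in zip(fB, fABi, fA)) / n / var)
    T.append(sum((ya - yi) ** 2 for ya, yi in zip(fA, fABi)) / (2 * n * var))

print([round(s, 2) for s in S], [round(t, 2) for t in T])
```

With the analytic variance V = (1 + 0.25)/12, the estimates converge to S1 = T1 = 0.8 and S2 = T2 = 0.2, so the sum of first-order indices reaches 1 — the signature of a function of effective dimension one.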

  10. The identification of model effective dimensions using global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, Sergei, E-mail: s.kucherenko@ic.ac.u [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Feil, Balazs [Department of Process Engineering, University of Pannonia, Veszprem (Hungary); Shah, Nilay [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Mauntz, Wolfgang [Lehrstuhl fuer Anlagensteuerungstechnik, Fachbereich Chemietechnik, Universitaet Dortmund (Germany)

    2011-04-15

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.

  11. Implementation of the model project: Ghanaian experience

    International Nuclear Information System (INIS)

    Schandorf, C.; Darko, E.O.; Yeboah, J.; Asiamah, S.D.

    2003-01-01

    Upgrading of the legal infrastructure has been the most time-consuming and frustrating part of the implementation of the Model Project, owing to the unstable system of governance and rule of law, coupled with the low priority given to legislation in technical areas such as the safe application of nuclear science and technology in medicine, industry, research and teaching. Dwindling governmental financial support has militated against physical and human resource infrastructure development and operational effectiveness. The trend over the last five years has been to strengthen the revenue generation base of the Radiation Protection Institute through good management practices, to ensure cost-effective use of the limited available resources for a self-reliant and sustainable radiation and waste safety programme. The Ghanaian experience regarding the positive and negative aspects of the implementation of the Model Project is highlighted. (author)

  12. Forces between permanent magnets: experiments and model

    International Nuclear Information System (INIS)

    González, Manuel I

    2017-01-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r⁻⁴ at large distances, as expected. (paper)

  13. Bucky gel actuator displacement: experiment and model

    International Nuclear Information System (INIS)

    Ghamsari, A K; Zegeye, E; Woldesenbet, E; Jin, Y

    2013-01-01

    Bucky gel actuator (BGA) is a dry electroactive nanocomposite which is driven with a few volts. BGA’s remarkable features make this tri-layered actuator a potential candidate for morphing applications. However, most of these applications would require a better understanding of the effective parameters that influence the BGA displacement. In this study, various sets of experiments were designed to investigate the effect of several parameters on the maximum lateral displacement of BGA. Two input parameters, voltage and frequency, and three material/design parameters, carbon nanotube type, thickness, and weight fraction of constituents were selected. A new thickness ratio term was also introduced to study the role of individual layers on BGA displacement. A model was established to predict BGA maximum displacement based on the effect of these parameters. This model showed good agreement with reported results from the literature. In addition, an important factor in the design of BGA-based devices, lifetime, was investigated. (paper)

  14. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    Science.gov (United States)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  15. IATA-Bayesian Network Model for Skin Sensitization Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Since the publication of the Adverse Outcome Pathway (AOP) for skin sensitization, there have been many efforts to develop systematic approaches to integrate the...

  16. Multivariate Models for Prediction of Human Skin Sensitization Hazard.

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensiti...

  17. Sensitivity analysis of efficiency thermal energy storage on selected rock mass and grout parameters using design of experiment method

    International Nuclear Information System (INIS)

    Wołoszyn, Jerzy; Gołaś, Andrzej

    2014-01-01

    Highlights: • The paper proposes a new methodology for the sensitivity study of underground thermal storage. • Using the MDF model and DOE techniques significantly shortens calculation time. • Calculation of one time step took approximately 57 s. • The sensitivity study covers five thermo-physical parameters. • Conductivity of the rock mass and grout material has a significant impact on efficiency. - Abstract: The aim of this study was to investigate the influence of selected parameters on the efficiency of underground thermal energy storage. In this paper, besides thermal conductivity, the effect of such parameters as specific heat, density of the rock mass, and thermal conductivity and specific heat of the grout material was investigated. Implementation of this objective requires an efficient computational method. The aim of the research was achieved by using a new numerical model, Multi Degree of Freedom (MDF), as developed by the authors, and Design of Experiment (DoE) techniques with a response surface. The presented methodology can significantly reduce the time needed for research and for determining the effect of various parameters on the efficiency of underground thermal energy storage. Preliminary results of the research confirmed that thermal conductivity of the rock mass has the greatest impact on the efficiency of underground thermal energy storage, and that other parameters also play quite a significant role.

  18. Modeling reproducibility of porescale multiphase flow experiments

    Science.gov (United States)

    Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.

    2017-12-01

    Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rate. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.
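The simulations above vary the injection rate randomly to mimic syringe-pump fluctuations, but the record does not specify the noise model. A generic mean-reverting (Ornstein-Uhlenbeck) sketch of such a fluctuating boundary condition; the function name and all parameter values are hypothetical:

```python
import numpy as np

def noisy_injection_rate(q_nominal, sigma, tau, dt, n_steps, rng):
    """Mean-reverting (Ornstein-Uhlenbeck) fluctuation around a nominal
    pump rate q_nominal: relaxation time tau, noise amplitude sigma,
    Euler-Maruyama time step dt. One simple way to model syringe-pump
    variability; not the scheme used in the paper."""
    q = np.empty(n_steps)
    q[0] = q_nominal
    for i in range(1, n_steps):
        drift = -(q[i - 1] - q_nominal) * dt / tau          # pull back to nominal
        noise = sigma * np.sqrt(dt) * rng.standard_normal()  # random kick
        q[i] = q[i - 1] + drift + noise
    return q
```

A time series like this can be fed to the inlet boundary of a porescale solver at each time step; the mean stays at the nominal rate while short-time fluctuations persist over roughly one relaxation time tau.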

  19. A model to estimate insulin sensitivity in dairy cows

    Directory of Open Access Journals (Sweden)

    Holtenius Kjell

    2007-10-01

    Impairment of the insulin regulation of energy metabolism is considered to be an etiologic key component of metabolic disturbances. Methods for studies of insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method, originally developed for humans, to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it generates good and linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds. All cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occurred during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed in order to obtain different body condition at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
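RQUICKI combines the three plasma concentrations into a single index. A minimal sketch, assuming the commonly cited formulation 1/[log10(glucose) + log10(insulin) + log10(FFA)]; the unit conventions in the comments (glucose in mg/dl, insulin in μU/ml, FFA in mmol/l) are an assumption and should be checked against the paper before use:

```python
import math

def rquicki(glucose, insulin, ffa):
    """Revised Quantitative Insulin Sensitivity Check Index.

    Assumed units (verify against the source): glucose in mg/dl,
    insulin in uU/ml, free fatty acids (FFA) in mmol/l.
    Lower values suggest reduced insulin sensitivity."""
    return 1.0 / (math.log10(glucose) + math.log10(insulin) + math.log10(ffa))
```

Because FFA enters the denominator through its logarithm, a rise in circulating FFA (as in over-conditioned cows mobilizing fat) lowers the index, consistent with the disturbed insulin function reported above.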

  20. Successful Renal Transplantation with Desensitization in Highly Sensitized Patients: A Single Center Experience

    Science.gov (United States)

    Yoon, Hye Eun; Hyoung, Bok Jin; Hwang, Hyeon Seok; Lee, So Young; Jeon, Youn Joo; Song, Joon Chang; Oh, Eun-Jee; Park, Sun Cheol; Choi, Bum Soon; Moon, In Sung; Kim, Yong Soo

    2009-01-01

    Intravenous immunoglobulin (IVIG) and/or plasmapheresis (PP) are effective in preventing antibody-mediated rejection (AMR) of kidney allografts, but AMR is still a problem. This study reports our experience in living donor renal transplantation in highly sensitized patients. Ten patients with positive crossmatch tests or high levels of panel-reactive antibody (PRA) were included. Eight patients were desensitized with pretransplant PP and low dose IVIG, and two were additionally treated with rituximab. Allograft function, number of acute rejection (AR) episodes, protocol biopsy findings, and the presence of donor-specific antibody (DSA) were evaluated. With PP/IVIG, six out of eight patients showed good graft function without AR episodes. Protocol biopsies revealed no evidence of tissue injury or C4d deposits. Of two patients with AR, one was successfully treated with PP/IVIG, but the other lost graft function due to de novo production of DSA. Thereafter, rituximab was added to PP/IVIG in two cases. Rituximab gradually decreased PRA levels and the percentage of peripheral CD20+ cells. DSA was undetectable and protocol biopsy showed no C4d deposits. The graft function was stable and there were no AR episodes. In conclusion, desensitization using PP/IVIG with or without rituximab increases the likelihood of successful living donor renal transplantation in sensitized recipients. PMID:19194545

  1. Shielding benchmark experiments and sensitivity studies in progress at some European laboratories

    International Nuclear Information System (INIS)

    Hehn, G.; Mattes, M.; Matthes, W.; Nicks, R.; Rief, H.

    1975-01-01

    A 100 group standard library based on ENDF/B3 has been prepared by IKE and JRC. This library is used for the analysis of the current European and Japanese iron benchmark experiments. Further measurements are planned for checking the data sets for graphite, sodium and water. In a cooperation between the IKE and JRC groups, coupled neutron-photon cross section sets will be produced. Point data are processed at IKE by the modular program system RSYST (CDC 6600) for processing the ENDF/B data, whereas the JRC group, apart from using standard codes such as SUPERTOG 3, GAMLEG etc., has developed a series of auxiliary programs (IBM 360) for handling the DLC 2D and POPOP libraries and for producing the combined neutron-plus-gamma library EL4 (119 groups). Sensitivity studies (in progress at IKE) make possible improvements in methods and optimization of the calculation effort for establishing group data. A tentative sensitivity study for a 3 dimensional MC approach is in progress at Ispra. As for nuclear data evaluation, the JRC group is calculating barium cross sections and their associated gamma spectra. 6 figures

  2. The use of graph theory in the sensitivity analysis of the model output: a second order screening method

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Braddock, Roger

    1999-01-01

    Sensitivity analysis screening methods aim to isolate the most important factors in experiments involving a large number of significant factors and interactions. This paper extends the one-factor-at-a-time screening method proposed by Morris. The new method, in addition to the 'overall' sensitivity measures already provided by the traditional Morris method, offers estimates of the two-factor interaction effects. The number of model evaluations required is O(k²), where k is the number of model input factors. The efficient sampling strategy in the parameter space is based on concepts of graph theory and on the solution of the 'handcuffed prisoner problem'
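The extension above builds on the classical Morris one-factor-at-a-time design. A sketch of the original design only (the second-order, graph-theoretic extension is not reproduced here); the grid of 4 levels and the test function are illustrative assumptions:

```python
import numpy as np

def morris_screen(f, k, r, levels=4, rng=None):
    """Classical Morris screening: r random trajectories of k+1 points
    on a grid in the unit hypercube, each step perturbing one factor
    by delta. Returns mu* (mean |elementary effect|) and sigma (their
    standard deviation) per factor. Cost: r*(k+1) model evaluations."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = levels / (2.0 * (levels - 1))
    ee = np.empty((r, k))
    for t in range(r):
        # start in the lower half of the grid so x + delta stays in [0, 1]
        x = rng.integers(0, levels // 2, k) / (levels - 1.0)
        fx = f(x)
        for i in rng.permutation(k):     # perturb factors in random order
            x_new = x.copy()
            x_new[i] += delta
            fx_new = f(x_new)
            ee[t, i] = (fx_new - fx) / delta
            x, fx = x_new, fx_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)
```

A large mu* flags an influential factor; a large sigma flags nonlinearity or interactions. The paper's extension then resolves *which* pairs interact, at the O(k²) cost quoted above.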

  3. Mathematical Model of Nicholson’s Experiment

    Directory of Open Access Journals (Sweden)

    Sergey D. Glyzin

    2017-01-01

    A mathematical model of insect population dynamics is considered, and an attempt is made to explain the classical experimental results of Nicholson with its help. In the first section of the paper Nicholson's experiment is described and dynamic equations for its modeling are chosen. A priori estimates for the model parameters can be made more precise by means of local analysis of the dynamical system, which is carried out in the second section. For the parameter values found there, the stability loss of the equilibrium of the problem leads to the bifurcation of a stable two-dimensional torus. Numerical simulations based on the estimates from the second section allow us to explain the classical Nicholson experiment, whose detailed theoretical substantiation is given in the last section. There, the largest Lyapunov exponent is computed for an attractor of the system. The way this exponent changes allows us to further narrow the search area for the model parameters. Justification of this experiment was made possible only by the combination of analytical and numerical methods in studying the equations of insect population dynamics. At the same time, the analytical approach made it possible to perform numerical analysis in a rather narrow region of the parameter space. It is not possible to get into this area based only on general considerations.

  4. Cross-section sensitivity and uncertainty analysis of the FNG copper benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kodeli, I., E-mail: ivan.kodeli@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Kondo, K. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany); Japan Atomic Energy Agency, Rokkasho-mura (Japan); Perel, R.L. [Racah Institute of Physics, Hebrew University of Jerusalem, IL-91904 Jerusalem (Israel); Fischer, U. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany)

    2016-11-01

    A neutronics benchmark experiment on a copper assembly was performed from the end of 2014 to the beginning of 2015 at the 14-MeV Frascati neutron generator (FNG) of ENEA Frascati, with the objective to provide the experimental database required for the validation of the copper nuclear data relevant for ITER design calculations, including the related uncertainties. The paper presents the pre- and post-analysis of the experiment performed using cross-section sensitivity and uncertainty codes, both deterministic (SUSD3D) and Monte Carlo (MCSEN5). Cumulative reaction rates and neutron flux spectra, their sensitivity to the cross sections, as well as the corresponding uncertainties were estimated for different selected detector positions up to ∼58 cm in the copper assembly. This permitted in the pre-analysis phase to optimize the geometry, the detector positions and the choice of activation reactions, and in the post-analysis phase to interpret the results of the measurements and the calculations, to conclude on the quality of the relevant nuclear cross-section data, and to estimate the uncertainties in the calculated nuclear responses and fluxes. Large uncertainties in the calculated reaction rates and neutron spectra of up to 50%, rarely observed at this level in benchmark analyses using today's nuclear data, were predicted, particularly high for fast reactions. Observed C/E (dis)agreements with values as low as 0.5 partly confirm these predictions. Benchmark results are therefore expected to contribute to the improvement of both cross section and covariance data evaluations.

  5. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  6. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol', FAST and a sparse grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network location and temporally dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results on network location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    OpenAIRE

    Angelen, J. H.; Lenaerts, J. T. M.; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.; Broeke, M. R.; Meijgaard, E.; Smeets, C. J. P. P.

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6%) at the K-transect (west Greenland) for the period 2004–2009 is...

  8. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  9. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of contributions of individual evaluations to the k_eff sensitivity allowed establishing the priority list of nuclides for which uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, ⁵⁶Fe and ²³⁸Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)

  10. Micro Wire-Drawing: Experiments And Modelling

    International Nuclear Information System (INIS)

    Berti, G. A.; Monti, M.; Bietresato, M.; D'Angelo, L.

    2007-01-01

    In the paper, the authors propose to adopt micro wire-drawing as a key for investigating models of micro forming processes. The reason for this choice is that this process can be considered a quasi-stationary process where tribological conditions at the interface between the material and the die can be assumed to be constant during the whole deformation. Two different materials have been investigated: i) a low-carbon steel and ii) a nonferrous metal (copper). The micro hardness and tensile tests performed on each drawn wire show a thin hardened layer (more evident than in macro wires) on the external surface of the wire, and hardening decreases rapidly from the surface layer to the center. For the copper wire this effect is reduced and a traditional material constitutive model seems adequate to predict the experiments. For the low-carbon steel a modified constitutive material model has been proposed and implemented in a FE code, giving better agreement with the experiments

  11. Parametric sensitivity of a CFD model concerning the hydrodynamics of trickle-bed reactor (TBR

    Directory of Open Access Journals (Sweden)

    Janecki Daniel

    2016-03-01

    The aim of the present study was to investigate the sensitivity of a multiphase Eulerian CFD model with respect to the relations defining drag forces between phases. The mean relative error as well as the standard deviation of experimental and computed values of pressure gradient and average liquid holdup were used as validation criteria of the model. The comparative basis for the simulations was our own database obtained in experiments carried out in a TBR operating at co-current downward gas and liquid flow. The estimated errors showed that the classical equations of Attou et al. (1999) defining the friction factors F_jk approximate the experimental values of the hydrodynamic parameters with the best agreement. Taking this into account, one can recommend applying the chosen equations in the momentum balances of TBR.

  12. Gamma ray induced sensitization in CaSO4:Dy and competing trap model

    International Nuclear Information System (INIS)

    Nagpal, J.S.; Kher, R.K.; Gangadharan, P.

    1979-01-01

    Gamma ray induced sensitization in CaSO₄:Dy has been compared (by measurement of TL glow curves) for different temperatures during irradiation (25 °C, 120 °C and 250 °C). Enhanced sensitization at elevated temperatures seems to support the competing trap model for supralinearity and sensitization in CaSO₄:Dy. (author)

  13. Influence of Ethnic-Related Diversity Experiences on Intercultural Sensitivity of Students at a Public University in Malaysia

    Science.gov (United States)

    Tamam, Ezhar; Abdullah, Ain Nadzimah

    2012-01-01

    In this study, the authors examine the influence of ethnic-related diversity experiences on intercultural sensitivity among Malaysian students at a multiethnic, multicultural and multilingual Malaysian public university. Results reveal a significant differential level of ethnic-related diversity experiences (but not at the level of intercultural…

  14. Danish heathland manipulation experiment data in Model-Data-Fusion

    Science.gov (United States)

    Thum, Tea; Peylin, Philippe; Ibrom, Andreas; Van Der Linden, Leon; Beier, Claus; Bacour, Cédric; Santaren, Diego; Ciais, Philippe

    2013-04-01

    In ecosystem manipulation experiments (EMEs) the ecosystem is artificially exposed to different environmental conditions that aim to simulate circumstances in a future climate. At the Danish EME site Brandbjerg, the responses of a heathland to drought, warming and increased atmospheric CO2 concentration are studied. The warming manipulation is realized by passive nighttime warming. The measurements include control plots as well as replicates for each of the three treatments, separately and in combination. The Brandbjerg heathland ecosystem is dominated by heather and wavy hair-grass. These experiments provide excellent data for validation and development of ecosystem models. In this work we used the generic vegetation model ORCHIDEE with a Model-Data-Fusion (MDF) approach. ORCHIDEE is a process-based model that describes the exchanges of carbon, water and energy between the atmosphere and the vegetation. It can be run at different spatial scales, from global to site level. Different vegetation types are described in ORCHIDEE as plant functional types. In MDF we use observations from the site to optimize the model parameters. This enables us to assess the modelling errors and the performance of the model for the different manipulation treatments. This insight will inform us whether the different processes are adequately modelled or whether the model is missing some important processes. We used a genetic algorithm in the MDF. The data available from the site included measurements of aboveground biomass, heterotrophic soil respiration and total ecosystem respiration from the years 2006-2008. The biomass was measured six times during this period. The respiration measurements were done with manual chambers. For the soil respiration we used results from an empirical model that has been developed for the site. This enabled us to have more data for the MDF. Before the MDF we performed a sensitivity analysis of the model parameters to the different data streams. Fifteen most influential

  15. Sensitivity analysis and modifications to reflood-related constitutive models of RELAP5

    International Nuclear Information System (INIS)

    Li Dong; Liu Xiaojing; Yang Yanhua

    2014-01-01

    Previous system code calculations reveal that the cladding temperature is underestimated and the quench front appears too early during the reflood process. To find out which parameters have an important effect on the results, a sensitivity analysis is performed on the parameters of the constitutive physical models. Based on phenomenological and theoretical analysis, four parameters are selected: the wall-to-vapor film boiling heat transfer coefficient, the wall-to-liquid film boiling heat transfer coefficient, the dry wall interfacial friction coefficient and the minimum droplet diameter. In order to improve the reflood simulation ability of the RELAP5 code, the film boiling heat transfer model and dry wall interfacial friction model, which are the models corresponding to those influential parameters, are studied. Modifications have been made and installed into the RELAP5 code. Six tests of FEBA are simulated by RELAP5 to study the predictability of the reflood-related physical models. A dispersed flow film boiling heat transfer (DFFB) model is applied when the void fraction is above 0.9, and a factor is applied to the post-CHF drag coefficient to fit the experiments better. Finally, the six FEBA tests are calculated again so as to assess the modifications. Better results are obtained, which prove the advantage of the modified models. (author)

  16. Photogrammetry experiments with a model eye.

    Science.gov (United States)

    Rosenthal, A R; Falconer, D G; Pieper, I

    1980-01-01

    Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139

  17. Pipe missile impact experiments on concrete models

    International Nuclear Information System (INIS)

    McHugh, S.; Gupta, Y.; Seaman, L.

    1981-06-01

    The experiments described in this study are a part of SRI studies for EPRI on the local response of reinforced concrete panels to missile impacts. The objectives of this task were to determine the feasibility of using scale model tests to reproduce the impact response of reinforced concrete panels observed in full-scale tests with pipe missiles and to evaluate the effect of concrete strength on the impact response. The experimental approach consisted of replica scaling: the missile and target materials were similar to those used in the full-scale tests, with all dimensions scaled by 5/32. Four criteria were selected for comparing the scaled and full-scale test results: frontface penetration, backface scabbing threshold, internal cracking in the panel, and missile deformation

  18. Josephson cross-sectional model experiment

    International Nuclear Information System (INIS)

    Ketchen, M.B.; Herrell, D.J.; Anderson, C.J.

    1985-01-01

    This paper describes the electrical design and evaluation of the Josephson cross-sectional model (CSM) experiment. The experiment served as a test vehicle to verify the operation at liquid-helium temperatures of Josephson circuits integrated in a package environment suitable for high-performance digital applications. The CSM consisted of four circuit chips assembled on two cards in a three-dimensional card-on-board package. The chips (package) were fabricated in a 2.5-μm (5-μm) minimum linewidth Pb-alloy technology. A hierarchy of solder and pluggable connectors was used to attach the parts together and to provide electrical interconnections between parts. A data path which simulated a jump control sequence and a cache access in each machine cycle was successfully operated with cycle times down to 3.7 ns. The CSM incorporated the key components of the logic, power, and package of a prototype Josephson signal processor and demonstrated the feasibility of making such a processor with a sub-4-ns cycle time

  19. Bayesian randomized item response modeling for sensitive measurements

    NARCIS (Netherlands)

    Avetisyan, Marianna

    2012-01-01

    In behavioral, health, and social sciences, any endeavor involving measurement is directed at accurate representation of the latent concept with the manifest observation. However, when sensitive topics, such as substance abuse, tax evasion, or felony, are inquired, substantial distortion of reported

  20. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface- based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  1. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface- based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document

  2. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch Model methods for detecting differential item functioning (DIF), observed from the sample size. The two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  3. Characteristics of coupled atmosphere-ocean CO2 sensitivity experiments with different ocean formulations

    International Nuclear Information System (INIS)

    Washington, W.M.; Meehl, G.A.

    1990-01-01

    The Community Climate Model at the National Center for Atmospheric Research has been coupled to a simple mixed-layer ocean model and to a coarse-grid ocean general circulation model (OGCM). This paper compares the responses of simulated climate to increases of atmospheric carbon dioxide (CO2) in these two coupled models. Three types of simulations were run: (1) control runs with both ocean models, with CO2 held constant at present-day concentrations, (2) instantaneous doubling of atmospheric CO2 (from 330 to 660 ppm) with both ocean models, and (3) a gradually increasing (transient) CO2 concentration starting at 330 ppm and increasing linearly at 1% per year, with the OGCM. The mixed-layer and OGCM cases exhibit increases of 3.5°C and 1.6°C, respectively, in globally averaged surface air temperature for the instantaneous doubling cases. The transient-forcing case warms 0.7°C by the end of 30 years. The mixed-layer ocean yields warmer-than-observed tropical temperatures and colder-than-observed temperatures in the higher latitudes. The coarse-grid OGCM simulates lower-than-observed sea surface temperatures (SSTs) in the tropics and higher-than-observed SSTs and reduced sea-ice extent at higher latitudes. Sensitivity in the OGCM after 30 years is much lower than in simulations with the same atmosphere coupled to a 50-m slab-ocean mixed layer. The OGCM simulates a weaker thermohaline circulation with doubled CO2 as the high-latitude ocean-surface layer warms and freshens and the westerly wind stress decreases. Convective overturning in the OGCM decreases substantially with CO2 warming.

  4. Characteristics of coupled atmosphere-ocean CO2 sensitivity experiments with different ocean formulations

    International Nuclear Information System (INIS)

    Washington, W.M.; Meehl, G.A.

    1991-01-01

    The Community Climate Model at the National Center for Atmospheric Research has been coupled to a simple mixed-layer ocean model and to a coarse-grid ocean general circulation model (OGCM). This paper compares the responses of simulated climate to increases of atmospheric carbon dioxide (CO2) in these two coupled models. Three types of simulations were run: (1) control runs with both ocean models, with CO2 held constant at present-day concentrations, (2) instantaneous doubling of atmospheric CO2 (from 330 to 660 ppm) with both ocean models, and (3) a gradually increasing (transient) CO2 concentration starting at 330 ppm and increasing linearly at 1% per year, with the OGCM. The mixed-layer and OGCM cases exhibit increases of 3.5°C and 1.6°C, respectively, in globally averaged surface air temperature for the instantaneous doubling cases. The transient-forcing case warms 0.7°C by the end of 30 years. The mixed-layer ocean yields warmer-than-observed tropical temperatures and colder-than-observed temperatures in the higher latitudes. The coarse-grid OGCM simulates lower-than-observed sea surface temperatures (SSTs) in the tropics and higher-than-observed SSTs and reduced sea-ice extent at higher latitudes. Sensitivity in the OGCM after 30 years is much lower than in simulations with the same atmosphere coupled to a 50-m slab-ocean mixed layer. The OGCM simulates a weaker thermohaline circulation with doubled CO2 as the high-latitude ocean-surface layer warms and freshens and the westerly wind stress decreases. Convective overturning in the OGCM decreases substantially with CO2 warming. 46 refs.; 20 figs.; 1 tab

  5. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and yield accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to the optimization of possible treatments as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored.

  6. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  7. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the 4 sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
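
    Sobol's method, the most robust of the four techniques compared above, attributes fractions of the output variance to each input parameter. A minimal sketch of the first-order estimator (pure NumPy, using the standard pick-freeze scheme on a hypothetical additive test function rather than the SAC-SMA model itself):

    ```python
    import numpy as np

    def sobol_first_order(f, d, n=40000, seed=0):
        """First-order Sobol indices via the pick-freeze (Saltelli-type) estimator."""
        rng = np.random.default_rng(seed)
        A = rng.random((n, d))            # base sample of the unit hypercube
        B = rng.random((n, d))            # independent resample
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]           # "freeze" all inputs except the i-th
            S[i] = np.mean(fB * (f(ABi) - fA)) / var
        return S

    # additive toy model y = x1 + 2*x2 with uniform inputs: analytic indices are 0.2 and 0.8
    S = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2)
    ```

    The estimates converge on the analytic values as the sample grows; the cost of the many model runs is what the study weighs against the robustness of the resulting rankings.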

  8. Towards gender-sensitivity: the Philippine POPCOM experience. How sensitive to gender issues are our family planning personnel?

    Science.gov (United States)

    Danguilan, M

    1995-04-01

    The Philippine Commission on Population (POPCOM) sets and coordinates the country's population policy. POPCOM launched Gender I in early 1994 in the attempt to find out how aware and sensitive its board of commissioners, staff, and the provincial and city population officers were on gender and population issues. The assessment covered the respondents' gender relations at the workplace; gender, work, and family responsibilities; job satisfaction; their perceptions about gender-related issues in reproductive health; personal sex attitudes; and general perceptions on gender issues. The project also explored respondents' knowledge and perceptions on population growth and structure; population information generation and use; quality of life; reproductive health; law, ethics, and policy; and men's and women's roles. Having completed the institutional assessment, POPCOM has now implemented the Gender II project designed to strengthen the formulation, coordination, and implementation of gender-aware population and reproductive health policies and programs. Project activities include policy review and framework development, capability building through gender and reproductive health training and information management, and special research projects.

  9. Encoding Context-Sensitivity in Reo into Non-Context-Sensitive Semantic Models (Technical Report)

    NARCIS (Netherlands)

    S.-S.T.Q. Jongmans (Sung-Shik); , C. (born Köhler, , C.) Krause (Christian); F. Arbab (Farhad)

    2011-01-01

    Reo is a coordination language which can be used to model the interactions among a set of components or services in a compositional manner using connectors. The language concepts of Reo include synchronization, mutual exclusion, data manipulation, memory and context-dependency.

  10. The CANDELLE experiment for characterization of neutron sensitivity of LiF TLDs

    Science.gov (United States)

    Guillou, M. Le; Billebaud, A.; Gruel, A.; Kessedjian, G.; Méplan, O.; Destouches, C.; Blaise, P.

    2018-01-01

    As part of the design studies conducted at CEA for future power and research nuclear reactors, the validation of the neutron and photon calculation schemes related to nuclear heating prediction is strongly dependent on the implementation of nuclear heating measurements. Such measurements are usually performed in low-power reactors, whose core dimensions are accurately known and where irradiation conditions (power, flux and temperature) are entirely controlled. Due to the very low operating power of such reactors (of the order of 100 W), nuclear heating is assessed by using dosimetry techniques such as thermoluminescent dosimeters (TLDs). However, although they are highly sensitive to gamma radiation, such dosimeters are also, to a lesser extent, sensitive to neutrons. The neutron dose depends strongly on the TLD composition, typically contributing 10-30% of the total measured dose in a mixed neutron/gamma field. The experimental determination of the neutron correction therefore appears crucial to a better interpretation of doses measured in reactor with reduced uncertainties. A promising approach based on the use of two types of LiF TLDs, respectively enriched with lithium-6 and lithium-7 and precalibrated in both photon and neutron fields, has recently been developed at INFN (Milan, Italy) for medical purposes. The CANDELLE experiment is dedicated to the implementation of a pure neutron field "calibration" of TLDs by using the GENEPI-2 neutron source of LPSC (Grenoble, France). These irradiation conditions provided an early assessment of the neutron components of doses measured in the EOLE reactor at CEA Cadarache with 10% uncertainty at 1σ.

  11. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  12. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone
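
    The Latin-hypercube technique used in the record above stratifies each parameter range so that even a modest number of model runs covers it evenly. A minimal sketch (NumPy; the log-uniform conductivity range is a made-up illustration, not the values used in the Oak Ridge study):

    ```python
    import numpy as np

    def latin_hypercube(n, d, seed=None):
        """n points in [0, 1)^d with exactly one point per 1/n stratum in every dimension."""
        rng = np.random.default_rng(seed)
        # jitter one point inside each of the n strata, then decouple the
        # dimensions by independently permuting each column
        u = (np.arange(n)[:, None] + rng.random((n, d))) / n
        for j in range(d):
            u[:, j] = u[rng.permutation(n), j]
        return u

    X = latin_hypercube(8, 2, seed=1)
    # map the first column onto a hypothetical log-uniform hydraulic
    # conductivity range, 1e-7 to 1e-4 m/s
    K = 10.0 ** (-7.0 + 3.0 * X[:, 0])
    ```

    Every column visits each interval [k/8, (k+1)/8) exactly once, which is why a hypercube design of a few dozen runs can stand in for a much larger simple Monte Carlo sample.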

  13. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    Energy Technology Data Exchange (ETDEWEB)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.

  14. The Coda of the Transient Response in a Sensitive Cochlea: A Computational Modeling Study.

    Directory of Open Access Journals (Sweden)

    Yizeng Li

    2016-07-01

    Full Text Available In a sensitive cochlea, the basilar membrane response to transient excitation of any kind (normal acoustic or artificial intracochlear excitation) consists of not only a primary impulse but also a coda of delayed secondary responses with varying amplitudes but similar spectral content around the characteristic frequency of the measurement location. The coda, sometimes referred to as echoes or ringing, has been described as a form of local, short-term memory which may influence the ability of the auditory system to detect gaps in an acoustic stimulus such as speech. Depending on the individual cochlea, the temporal gap between the primary impulse and the following coda ranges from once to thrice the group delay of the primary impulse (the group delay of the primary impulse is on the order of a few hundred microseconds). The coda is physiologically vulnerable, disappearing when the cochlea is compromised even slightly. The multicomponent sensitive response is not yet completely understood. We use a physiologically based, mathematical model to investigate (i) the generation of the primary impulse response and the dependence of the group delay on the various stimulation methods, and (ii) the effect of spatial perturbations in the properties of mechanically sensitive ion channels on the generation and separation of delayed secondary responses. The model suggests that the presence of the secondary responses depends on the wavenumber content of a perturbation and the activity level of the cochlea. In addition, the model shows that the varying temporal gaps between adjacent coda seen in experiments depend on the individual profiles of perturbations. Implications for non-invasive cochlear diagnosis are also discussed.

  15. Application of perturbation theory to a two-channel model for sensitivity calculations in PWR cores

    International Nuclear Information System (INIS)

    Oliveira, A.C.J.G. de; Andrade Lima, F.R. de

    1989-01-01

    The present work is an application of perturbation theory (matricial formalism) to a simplified two-channel model for sensitivity calculations in PWR cores. Expressions for some sensitivity coefficients of thermohydraulic interest were developed from the proposed model. The code CASNUR.FOR was written in FORTRAN to evaluate these sensitivity coefficients. The comparison between results obtained from the matricial formalism of perturbation theory and those obtained directly from the two-channel model makes evident the efficiency and potential of this perturbation method for sensitivity calculations in nuclear reactor cores. (author)
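
    The sensitivity coefficients in question are relative derivatives of a model response with respect to an input parameter, S = (p/R)(∂R/∂p). As a brute-force cross-check of what a perturbative formalism computes analytically, one can difference the model directly; the single-channel energy balance below is a hypothetical stand-in, not the CASNUR.FOR two-channel model:

    ```python
    def rel_sensitivity(response, params, name, h=1e-6):
        """Central-difference relative sensitivity S = (p/R) * dR/dp."""
        p0 = params[name]
        R0 = response(params)
        up = dict(params, **{name: p0 * (1 + h)})   # perturb parameter up
        dn = dict(params, **{name: p0 * (1 - h)})   # and down
        dRdp = (response(up) - response(dn)) / (2 * p0 * h)
        return p0 * dRdp / R0

    # toy single-channel energy balance: coolant temperature rise dT = q / (m * cp)
    def temp_rise(p):
        return p["q"] / (p["m"] * p["cp"])

    p = {"q": 5.0e4, "m": 2.0, "cp": 4180.0}   # illustrative numbers only
    S_m = rel_sensitivity(temp_rise, p, "m")   # analytically -1 for this model
    ```

    For this toy response the relative sensitivities are exactly -1 for mass flow and +1 for power, which makes the finite-difference check easy to validate before applying it to anything less transparent.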

  16. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    Science.gov (United States)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models due to the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The results of uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively lower and more uniform compared with other likelihood functions composed of an individual calibration criterion.
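
    A minimal sketch of the GLUE recipe described above: Monte Carlo sampling of the prior parameter space, an informal likelihood that screens out non-behavioural runs, and likelihood-weighted prediction bounds. The exponential-decay "model" and all numbers are illustrative stand-ins, not RZWQM2 or the study's CL function:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # synthetic observations from a toy recession model y = a * exp(-b * t)
    t = np.linspace(0.0, 10.0, 50)
    obs = 3.0 * np.exp(-0.4 * t) + rng.normal(0.0, 0.05, t.size)

    def model(a, b):
        return a * np.exp(-b * t)

    # 1. Monte Carlo sample of the prior parameter space (uniform priors here)
    a_s = rng.uniform(1.0, 5.0, 5000)
    b_s = rng.uniform(0.1, 1.0, 5000)
    sims = model(a_s[:, None], b_s[:, None])              # shape (5000, 50)

    # 2. informal likelihood: Nash-Sutcliffe efficiency; behavioural if NSE > 0.7
    nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
    keep = nse > 0.7
    w = nse[keep] / nse[keep].sum()                       # rescaled likelihood weights

    # 3. likelihood-weighted 5-95% prediction limits at each time step
    sims_b = sims[keep]
    lo, hi = np.empty(t.size), np.empty(t.size)
    for j in range(t.size):
        o = np.argsort(sims_b[:, j])
        cdf = np.cumsum(w[o])
        lo[j] = np.interp(0.05, cdf, sims_b[o, j])
        hi[j] = np.interp(0.95, cdf, sims_b[o, j])
    ```

    The behavioural threshold and the choice of likelihood are deliberately subjective in GLUE, which is exactly why the study's combined criterion matters: a single-variable likelihood can accept parameter sets that are behavioural for one output and poor for the others.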

  17. Defining and detecting structural sensitivity in biological models: developing a new framework.

    Science.gov (United States)

    Adamson, M W; Morozov, A Yu

    2014-12-01

    When we construct mathematical models to represent biological systems, there is always uncertainty with regards to the model specification, whether with respect to the parameters or to the formulation of model functions. Sometimes choosing two different functions with close shapes in a model can result in substantially different model predictions: a phenomenon known in the literature as structural sensitivity, which is a significant obstacle to improving the predictive power of biological models. In this paper, we revisit the general definition of structural sensitivity, compare several more specific definitions and discuss their usefulness for the construction and analysis of biological models. Then we propose a general approach to reveal structural sensitivity with regards to certain system properties, which considers infinite-dimensional neighbourhoods of the model functions: a far more powerful technique than the conventional approach of varying parameters for a fixed functional form. In particular, we suggest a rigorous method to unearth sensitivity with respect to the local stability of systems' equilibrium points. We present a method for specifying the neighbourhood of a general unknown function with [Formula: see text] inflection points in terms of a finite number of local function properties, and provide a rigorous proof of its completeness. Using this powerful result, we implement our method to explore sensitivity in several well-known multicomponent ecological models and demonstrate the existence of structural sensitivity in these models. Finally, we argue that structural sensitivity is an important intrinsic property of biological models, and a direct consequence of the complexity of the underlying real systems.
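
    The core point, that closeness of two model functions in value does not control their derivatives, and hence not the local stability they predict, can be seen in a deliberately simple construction (my own illustration, not one of the ecological models analysed in the paper). For a one-dimensional model dx/dt = I - g(x), an equilibrium with g(x*) = I is stable exactly when g'(x*) > 0:

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 100001)

    # two loss functions everywhere within 0.02 of each other
    # (observationally indistinguishable at that measurement precision)
    g1 = x
    g2 = x + 0.02 * np.sin(100.0 * x)

    # their derivatives decide the stability of equilibria of dx/dt = I - g(x)
    d1 = np.ones_like(x)                  # g1' = 1 > 0: every equilibrium stable
    d2 = 1.0 + 2.0 * np.cos(100.0 * x)    # g2' dips below 0: some equilibria unstable

    closeness = np.max(np.abs(g1 - g2))   # <= 0.02
    min_slope = d2.min()                  # negative, close to -1
    ```

    Any finite measurement precision therefore leaves a whole neighbourhood of candidate functions making qualitatively different stability predictions, which is precisely the sensitivity the authors' infinite-dimensional framework is designed to detect.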

  18. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  19. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  20. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.

  1. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  2. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow-rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio but has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  3. Detection of C',Cα correlations in proteins using a new time- and sensitivity-optimal experiment

    International Nuclear Information System (INIS)

    Lee, Donghan; Voegeli, Beat; Pervushin, Konstantin

    2005-01-01

    A sensitivity- and time-optimal experiment, called COCAINE (CO-CA In- and aNtiphase spectra with sensitivity Enhancement), is proposed to correlate chemical shifts of 13C' and 13Cα spins in proteins. A comparison of the sensitivity and duration of the experiment with the corresponding theoretical unitary bounds shows that the COCAINE experiment achieves the maximum possible transfer efficiency in the shortest possible time, and in this sense the sequence is optimal. Compared to the standard HSQC, the COCAINE experiment delivers a 2.7-fold gain in sensitivity. This newly proposed experiment can be used for assignment of backbone resonances in large deuterated proteins, effectively bridging 13C' and 13Cα resonances in adjacent amino acids. Due to the spin-state selection employed, the COCAINE experiment can also be used for efficient measurements of one-bond couplings (e.g. scalar and residual dipolar couplings) in any two-spin system (e.g. the N/H pair in the protein backbone).

  4. Radon emanation chamber: High sensitivity measurements for the SuperNEMO experiment

    Energy Technology Data Exchange (ETDEWEB)

    Soulé, B. [Université Bordeaux 1, Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, Chemin du Solarium, Le Haut-Vigneau, BP120, F-33175 Gradignan (France); Collaboration: SuperNEMO Collaboration; and others

    2013-08-08

    Radon is a well-known source of background in ββ0ν experiments due to the high Q{sub β} value of one of its daughter nuclei, {sup 214}Bi. The SuperNEMO collaboration requires a maximum radon contamination of 0.1 mBq/m{sup 3} inside its next-generation double beta decay detector. To reach such a low activity, a drastic screening process has been set up for the selection of the detector's materials. In addition to good radiopurity, a low emanation rate is required. To test this parameter, a Radon Emanation Setup is running at CENBG. It consists of a large emanation chamber connected to an electrostatic detector. By measuring large samples and having a low background level, this setup reaches a sensitivity of a few μBq m{sup −2} d{sup −1} and is able to qualify materials used in the construction of the SuperNEMO detector.

  5. Wind climate estimation using WRF model output: method and model sensitivities over the sea

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Vincent, Claire Louise; Peña, Alfredo

    2015-01-01

    setup parameters. The results of the year-long sensitivity simulations show that the long-term mean wind speed simulated by the WRF model offshore in the region studied is quite insensitive to the global reanalysis, the number of vertical levels, and the horizontal resolution of the sea surface temperature used as lower boundary conditions. Also, the strength and form (grid vs spectral) of the nudging is quite irrelevant for the mean wind speed at 100 m. Large sensitivity is found to the choice of boundary layer parametrization, and to the length of the period that is discarded as spin-up to produce a wind climatology. It is found that the spin-up period for the boundary layer winds is likely larger than 12 h over land and could affect the wind climatology for points offshore for quite a distance downstream from the coast.

  6. Synthesis of humidity sensitive zinc stannate nanomaterials and modelling of Freundlich adsorption isotherm model

    Science.gov (United States)

    Sharma, Alfa; Kumar, Yogendra; Shirage, Parasharam M.

    2018-04-01

    The chemi-resistive humidity sensing behaviour of as-prepared and annealed ZnSnO3 nanoparticles synthesized via a wet chemical route was investigated. The effect of stirring temperature on the evolution of the varied nanomorphologies of zinc stannate is in accordance with Ostwald's ripening law. At room temperature, an excellent humidity sensitivity of ~800% and response/recovery times of 70 s/102 s are observed for the ZnSnO3 sample within the 8-97% relative humidity (RH) range. The experimental data over the entire RH range fit well with the Freundlich adsorption isotherm model, revealing two distinct water-adsorption regimes. The excellent humidity sensitivity of the nanostructures is attributed to the Grotthuss mechanism, considering the availability and distribution of adsorption sites. These results suggest that this low-cost synthesis route makes ZnSnO3 a promising candidate for the fabrication of next-generation humidity sensors.
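
Fitting data to a Freundlich isotherm, as described above, is commonly done by linearizing q = K_F · RH^(1/n) in log-log space and applying least squares. The sketch below uses synthetic data with invented parameter values, not the ZnSnO3 measurements:

```python
import numpy as np

# Synthetic uptake data following a Freundlich isotherm q = K_F * RH**(1/n);
# the parameter values are illustrative only, not the sensor's.
rh = np.array([8.0, 20.0, 40.0, 60.0, 80.0, 97.0])   # relative humidity (%)
k_true, n_true = 2.0, 1.5
q = k_true * rh ** (1.0 / n_true)

# Linearize: log q = log K_F + (1/n) * log RH, then ordinary least squares
slope, intercept = np.polyfit(np.log(rh), np.log(q), 1)
k_fit, n_fit = np.exp(intercept), 1.0 / slope
print(f"K_F = {k_fit:.3f}, n = {n_fit:.3f}")   # recovers K_F = 2.000, n = 1.500
```

In practice the two adsorption regimes reported above would show up as two distinct slopes in the log-log plot, fitted piecewise.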

  7. Uncovering the influence of social skills and psychosociological factors on pain sensitivity using structural equation modeling.

    Science.gov (United States)

    Tanaka, Yoichi; Nishi, Yuki; Osumi, Michihiro; Morioka, Shu

    2017-01-01

    Pain is a subjective emotional experience that is influenced by psychosociological factors such as social skills, which are defined as problem-solving abilities in social interactions. This study aimed to reveal the relationships among pain, social skills, and other psychosociological factors by using structural equation modeling. A total of 101 healthy volunteers (41 men and 60 women; mean age: 36.6±12.7 years) participated in this study. To evoke participants' sense of inner pain, we showed them images of painful scenes on a PC screen and asked them to evaluate the pain intensity by using the visual analog scale (VAS). We examined the correlation between social skills and VAS, constructed a hypothetical model based on results from previous studies and the current correlational analysis, and verified the model's fit using structural equation modeling. We found significant positive correlations between VAS and total social skills values, as well as between VAS and the "start of relationships" subscale. Structural equation modeling revealed that the values for "start of relationships" had a direct effect on VAS values (path coefficient = 0.32) as well as an indirect effect mediated by social support. The results indicated that extroverted people are more sensitive to inner pain and tend to receive more social support and maintain a better psychological condition.

  8. Adverse social experiences in adolescent rats result in enduring effects on social competence, pain sensitivity and endocannabinoid signaling

    Directory of Open Access Journals (Sweden)

    Peggy Schneider

    2016-10-01

    Full Text Available Social affiliation is essential for many species and gains significant importance during adolescence. Disturbances in social affiliation, in particular social rejection experiences during adolescence, affect an individual's well-being and are involved in the emergence of psychiatric disorders. The underlying mechanisms are still unknown, partly because of a lack of valid animal models. By using a novel animal model of social peer-rejection, which compromises adolescent rats in their ability to appropriately engage in playful activities, here we report on persistent impairments in social behavior and dysregulation of the endocannabinoid system. From postnatal day (pd) 21 to pd 50, adolescent female Wistar rats were reared either with same-strain partners (control) or within a group of Fischer 344 rats (inadequate social rearing, ISR), a strain previously shown to serve as inadequate play partners for Wistar rats. Adult ISR animals showed pronounced deficits in social interaction, social memory, and processing of socially transmitted information, as well as decreased pain sensitivity. Molecular analysis revealed increased CB1 receptor protein levels and CP55,940-stimulated [35S]GTPγS binding activity, specifically in the amygdala and thalamus, in previously peer-rejected rats. Along with these changes, increased levels of the endocannabinoid anandamide and a corresponding decrease of its degrading enzyme fatty acid amide hydrolase were seen in the amygdala. Our data indicate lasting consequences in social behavior and pain sensitivity following peer-rejection in adolescent female rats. These behavioral impairments are accompanied by persistent alterations in CB1 receptor signaling. Finally, we provide a novel translational approach to characterizing the neurobiological processes underlying social peer-rejection in adolescence.

  9. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    Science.gov (United States)

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  10. An approach to measure parameter sensitivity in watershed hydrologic modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — Abstract Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...

  11. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening design with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structure and results of the sensitivity analyses of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model, as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and of screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step toward the derivation of general rules in the design of predator–prey models, for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in the tiger and backswimmer models. ► The two individual-based models (IBM) differ in their space formulations. ► In both models foraging distance is among the sensitive parameters. ► The Morris method is applicable for sensitivity analysis even of complex IBMs.
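
The screening idea behind the Morris method, averaging absolute "elementary effects" from one-at-a-time parameter steps, can be sketched compactly. This minimal version uses a radial one-at-a-time design rather than the full Morris trajectory design, and the toy "predation" model and its parameters are hypothetical:

```python
import numpy as np

def morris_mu_star(model, bounds, r=50, delta=0.25, seed=0):
    """Mean absolute elementary effect (mu*) per parameter, using a radial
    one-at-a-time design on the unit hypercube scaled to `bounds`.
    A sketch of the screening idea, not the full Morris trajectory design."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    k = len(bounds)
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # leave room for the +delta step
        y0 = model(lo + (hi - lo) * x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta                          # perturb one parameter at a time
            effects[i, j] = abs(model(lo + (hi - lo) * xp) - y0) / delta
    return effects.mean(axis=0)

# Hypothetical predation-rate model: output ~ prey growth * hunting radius^2,
# plus a weak third parameter that screening should rank last.
toy = lambda p: p[0] * p[1] ** 2 + 0.1 * p[2]
mu_star = morris_mu_star(toy, bounds=[(0, 1), (0, 1), (0, 1)])
print(mu_star)
```

Parameters with large mu* are flagged as influential and kept for more expensive variance-based analysis; the cheap cost comes from needing only r·(k+1) model runs.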

  12. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    Science.gov (United States)

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  13. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; Wang, Jing; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney–Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney–Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney–Rivlin material model along left-right, anterior–posterior, and superior–inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. (paper)

  14. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for the drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], which was extended by including the gas velocity as an extra input compared to our earlier work. β2 was found to be the most important factor for the single-granule model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...

  15. Gender-Sensitive Social Work Practice: A Model for Education.

    Science.gov (United States)

    Norman, Judith; Wheeler, Barbara

    1996-01-01

    Although women comprise the majority of social work clients, most psychological models of assessment and intervention are based on male psychological development. Feminist theories and therapies have turned attention to female development and its differences from male progression. A psychotherapeutic model for practice and education that allows…

  16. A sensitive venous bleeding model in haemophilia A mice

    DEFF Research Database (Denmark)

    Pastoft, Anne Engedahl; Lykkesfeldt, Jens; Ezban, M.

    2012-01-01

    Haemostatic effect of compounds for treating haemophilia can be evaluated in various bleeding models in haemophilic mice. However, the doses of factor VIII (FVIII) for normalizing bleeding used in some of these models are reported to be relatively high. The aim of this study was to establish a se...

  17. Atmospheric statistical dynamic models. Climate experiments: albedo experiments with a zonal atmospheric model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a possible mechanism to cause the initial global cooling that then initiates a glacial advance thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts

  18. Hindcasting to measure ice sheet model sensitivity to initial states

    Directory of Open Access Journals (Sweden)

    A. Aschwanden

    2013-07-01

    Full Text Available Validation is a critical component of model development, yet notoriously challenging in ice sheet modeling. Here we evaluate how an ice sheet system model responds to a given forcing. We show that hindcasting, i.e. forcing a model with known or closely estimated inputs for past events to see how well the output matches observations, is a viable method of assessing model performance. By simulating the recent past of Greenland, and comparing to observations of ice thickness, ice discharge, surface speeds, mass loss and surface elevation changes for validation, we find that the short term model response is strongly influenced by the initial state. We show that the thermal and dynamical states (i.e. the distribution of internal energy and momentum can be misrepresented despite a good agreement with some observations, stressing the importance of using multiple observations. In particular we identify rates of change of spatially dense observations as preferred validation metrics. Hindcasting enables a qualitative assessment of model performance relative to observed rates of change. It thereby reduces the number of admissible initial states more rigorously than validation efforts that do not take advantage of observed rates of change.

  19. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    OpenAIRE

    Marrero, C.; Jorba, O.; Cuevas, E.; Baldasano, J. M.

    2009-01-01

    In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained wind and intense gusts experienced caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting Model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, d...

  20. STRESS MODELING IN COMPOSITE PRODUCTS USING STANDARD OPTICALLY SENSITIVE MATERIAL

    Directory of Open Access Journals (Sweden)

    Elifkhan K. Agakhanov

    2017-01-01

    Full Text Available Abstract. Objectives. The problem of physically modelling stresses in a compound solid body of revolution having a complex shape and a complex load distribution is considered. According to the similarity criteria, stresses, deformations and displacements due to the volume forces decrease in proportion to the scale of similarity of the geometric dimensions, which complicates their direct modelling by the photoelasticity method, which typically uses models made from epoxy materials. Methods. Based on the principle of the independent action of forces, the initial problem is represented as the sum of two problems. In the first, uniform problem, the stresses in the body of revolution from the centrifugal forces are simulated by the conventional "freezing" method. In order to solve the second, nonuniform problem, the stresses in the region of the model corresponding to the acting centrifugal forces are "frozen". The models are glued in a natural state at room temperature, and the compound model is annealed. Results. The band patterns in sections, as well as the components of radial, tangential and axial stresses on contours and in sections of the models, are obtained by the methods of normal transmission and numerical integration of the equilibrium equation. According to the modelling criteria, the formula for the transition from stresses in the models to stresses in the full-scale structure is established. The results of the analysis of the effect of the body's material density ratio on the stress state of the entire structure are obtained. Conclusion. Axial stresses are insignificant compared to radial and tangential stresses; in addition, the ratio of the densities of the compound body has both a quantitative and a qualitative influence on the stress state of the structure.

  1. Sensitivity of the Gravity Recovery and Climate Experiment (GRACE) to the complexity of aquifer systems for monitoring of groundwater

    Science.gov (United States)

    Katpatal, Yashwant B.; Rishma, C.; Singh, Chandan K.

    2018-05-01

    The Gravity Recovery and Climate Experiment (GRACE) satellite mission is aimed at assessment of groundwater storage under different terrestrial conditions. The main objective of the present study is to highlight the significance of aquifer complexity for improving the performance of GRACE in monitoring groundwater. The Vidarbha region of Maharashtra, central India, was selected as the study area, since it comprises a simple aquifer system in the western region and a complex aquifer system in the eastern region. Groundwater-level trends of the different aquifer systems and the spatial and temporal variation of the terrestrial water storage anomaly were studied to understand the groundwater scenario. For the field application of GRACE, four pixels of the GRACE output covering different aquifer systems were selected, each encompassing 50-90 monitoring wells. Groundwater storage anomalies (GWSA) were derived for each pixel for the period 2002 to 2015 using the Release 05 (RL05) monthly GRACE gravity models and the Global Land Data Assimilation System (GLDAS) land-surface models (GWSAGRACE), as well as from the actual field data (GWSAActual). Correlation analysis between GWSAGRACE and GWSAActual was performed using linear regression. The Pearson and Spearman methods show that the performance of GRACE is good in the region with simple aquifers; however, performance is poorer in the region with multiple aquifer systems. The study highlights the importance of incorporating the sensitivity of GRACE in the estimation of groundwater storage in complex aquifer systems in future studies.
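
The Pearson and Spearman comparison of GWSAGRACE against GWSAActual can be sketched with NumPy alone (Spearman's rho is the Pearson correlation of the ranks when there are no ties). The anomaly series below are invented for illustration:

```python
import numpy as np

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks (no-ties case)
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

# Invented monthly groundwater storage anomalies for one pixel (cm of
# equivalent water height): GRACE-derived vs. well-based estimates.
gwsa_grace  = np.array([2.1, 1.4, 0.3, -0.8, -1.9, -2.5, -1.2, 0.9, 2.2, 2.8])
gwsa_actual = np.array([1.8, 1.1, 0.5, -0.4, -1.5, -2.9, -0.8, 0.6, 1.9, 3.1])
print(f"Pearson r = {pearson(gwsa_grace, gwsa_actual):.2f}, "
      f"Spearman rho = {spearman(gwsa_grace, gwsa_actual):.2f}")
```

Pearson measures linear agreement in magnitude, while Spearman only requires the two series to rise and fall together, which is why the two can diverge over complex multi-aquifer pixels.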

  2. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using Morris one-at-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
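
Latin hypercube sampling, used above for the Monte Carlo uncertainty runs, stratifies each parameter's range so that n samples cover all n equal-probability strata in every dimension. The sketch below propagates the samples through a hypothetical three-parameter "flux" model; the model form and ranges are illustrative, not any of the three Rn-222 models:

```python
import numpy as np

def latin_hypercube(n, k, seed=0):
    """n samples in k dimensions on [0, 1): each dimension is split into n
    equal-probability strata, and each stratum is sampled exactly once."""
    rng = np.random.default_rng(seed)
    out = np.empty((n, k))
    for j in range(k):
        out[:, j] = (rng.permutation(n) + rng.uniform(size=n)) / n
    return out

# Hypothetical flux model (not one of the three Rn-222 models):
# flux ~ inventory * emanation coefficient * sqrt(diffusion coefficient)
u = latin_hypercube(1000, 3)
inventory = 100.0 + 50.0 * u[:, 0]       # illustrative parameter ranges
emanation = 0.1 + 0.3 * u[:, 1]
diffusion = 1e-6 + 9e-6 * u[:, 2]
flux = inventory * emanation * np.sqrt(diffusion)
print(f"mean flux = {flux.mean():.4f}, 95th pct = {np.percentile(flux, 95):.4f}")
```

Compared with plain Monte Carlo, the stratification gives much better coverage of the parameter space for the same number of model runs, which matters when each run is expensive.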

  3. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    Full Text Available This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models are classified in two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.
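
Two of the goodness-of-fit measures named above, the Nash-Sutcliffe criterion and a volumetric error, are straightforward to compute. The flow series below are invented for illustration, and the volumetric error is defined here as the relative error in total simulated volume (one common convention):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 minus the ratio of residual sum of squares to the spread of the
    observations; 1 is perfect, 0 is no better than predicting the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volumetric_error(obs, sim):
    """Relative error in total simulated volume (an assumed convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

# Invented daily flow series (m^3/s) for illustration
obs = np.array([5.0, 7.0, 12.0, 30.0, 18.0, 9.0, 6.0])
sim = np.array([4.5, 8.0, 11.0, 26.0, 20.0, 8.5, 6.5])
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}, "
      f"volumetric error = {volumetric_error(obs, sim):+.3f}")
```

Because NSE is dominated by squared errors on high flows while the volumetric error weights all flows equally, assessing parameter sensitivity against both (plus peak errors) gives the multi-view picture the abstract describes.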

  4. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott; Vefa Yucel; Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using Morris one-at-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory

  5. Explicit modelling of SOA formation from α-pinene photooxidation: sensitivity to vapour pressure estimation

    Directory of Open Access Journals (Sweden)

    R. Valorso

    2011-07-01

Full Text Available The sensitivity of the formation of secondary organic aerosol (SOA) to the estimated vapour pressures of the condensable oxidation products is explored. A highly detailed reaction scheme was generated for α-pinene photooxidation using the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A). Vapour pressures (Pvap) were estimated with three commonly used structure-activity relationships. The values of Pvap were compared for the set of secondary species generated by GECKO-A to describe α-pinene oxidation. Discrepancies in the predicted vapour pressures were found to increase with the number of functional groups borne by the species. For semi-volatile organic compounds (i.e. the organic species of interest for SOA formation), differences in the predicted Pvap range from a factor of 5 to 200 on average. The simulated SOA concentrations were compared to SOA observations in the Caltech chamber during three experiments performed under a range of NOx conditions. While the model captures the qualitative features of SOA formation for the chamber experiments, SOA concentrations are systematically overestimated. For the conditions simulated, the modelled SOA speciation appears to be rather insensitive to the Pvap estimation method.

  6. Data on the experiments of temperature-sensitive hydrogels for pH-sensitive drug release and the characterizations of materials

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2018-04-01

Full Text Available This article contains experimental data on the strain sweep, the calibration curve of the drug (doxorubicin, DOX), and the characterizations of the materials. The data are related to the research article “Injectable and body temperature sensitive hydrogels based on chitosan and hyaluronic acid for pH sensitive drug release” (Zhang et al., 2017) [1]. The strain sweep experiments were performed on a rotational rheometer. The calibration curves were obtained by analyzing the absorbance of DOX solutions on a UV-vis-NIR spectrometer. The molecular weights (Mw) of the hyaluronic acid (HA) and chitosan (CS) were determined by gel permeation chromatography (GPC). The deacetylation degree of CS was measured by acid-base titration.

  7. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Science.gov (United States)

Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  8. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

Boas, Sonja E. M.; Navarro, María; Merks, Roeland M. H.; Blom, Joke G.

    2015-01-01

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand

  9. Heuristic Sensitivity Analysis for Baker's Yeast Model Parameters

    OpenAIRE

    Leão, Celina P.; Soares, Filomena O.

    2004-01-01

Baker's yeast, essentially composed of living cells of Saccharomyces cerevisiae, plays an important industrial role as the microorganism used in the bread-making and brewing industries. Simulation is therefore a necessary tool for clearly understanding the baker's yeast fermentation process. The use of mathematical models based on mass balance equations requires knowledge of the reaction kinetics, thermodynamics, and transport and physical properties. Models may be more or less...

  10. Sensitivity of ultracold-atom scattering experiments to variation of the fine-structure constant

    International Nuclear Information System (INIS)

    Borschevsky, A.; Beloy, K.; Flambaum, V. V.; Schwerdtfeger, P.

    2011-01-01

    We present numerical calculations for cesium and mercury to estimate the sensitivity of the scattering length to the variation of the fine-structure constant α. The method used follows the ideas of Chin and Flambaum [Phys. Rev. Lett. 96, 230801 (2006)], where the sensitivity to the variation of the electron-to-proton mass ratio β was considered. We demonstrate that for heavy systems, the sensitivity to the variation of α is of the same order of magnitude as to the variation of β. Near narrow Feshbach resonances, the enhancement of the sensitivity may exceed nine orders of magnitude.

  11. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is presented for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior.
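
The parameter sensitivity analysis surveyed here typically rests on the normalized local sensitivity coefficient S = (p/y)(dy/dp), estimated by finite differences. A hedged sketch on a hypothetical logistic population model (the article's own physiological models are not reproduced; all names and values below are illustrative):

```python
import numpy as np

# Logistic population model: n(t) = K / (1 + (K/n0 - 1) * exp(-r t))
def population(t, r, K, n0=10.0):
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

def norm_sensitivity(f, p, rel_step=1e-4):
    # Central difference (f(p+h) - f(p-h)) / 2h, scaled by p / f(p)
    # so the coefficient is dimensionless and comparable across parameters.
    h = rel_step * p
    return p * (f(p + h) - f(p - h)) / (2 * h) / f(p)

t = 5.0
S_r = norm_sensitivity(lambda r: population(t, r, K=1000.0), 0.5)
S_K = norm_sensitivity(lambda K: population(t, 0.5, K), 1000.0)
```

Early in the growth phase the output is much more sensitive to the growth rate r than to the carrying capacity K, the kind of relative-influence ranking the survey describes for allocating data-collection effort.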

  12. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include the wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration relies strongly on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  13. Sensitivity Analysis of b-factor in Microwave Emission Model for Soil Moisture Retrieval: A Case Study for SMAP Mission

    Directory of Open Access Journals (Sweden)

    Dugwon Seo

    2010-05-01

Full Text Available Sensitivity analysis is critically needed to better understand the microwave emission model for soil moisture retrieval using passive microwave remote sensing data. The vegetation b-factor, along with vegetation water content and surface characteristics, has a significant impact on model prediction. This study evaluates the sensitivity of the b-factor, which is a function of vegetation type. The analysis is carried out using the Passive and Active L- and S-band airborne sensor (PALS) and field soil moisture measured during the Southern Great Plains experiment (SGP99). The results show that the relative sensitivity of the b-factor is 86% in wet soil conditions and 88% in highly vegetated conditions compared to the sensitivity of the soil moisture. The b-factor is apparently more sensitive than the vegetation water content, surface roughness, and surface temperature; its effect on the microwave emission is therefore fairly large in certain conditions. Understanding the dependence of the b-factor on the soil and vegetation is important in studying the soil moisture retrieval algorithm, and can lead to potential improvements in model development for the Soil Moisture Active-Passive (SMAP) mission.

  14. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    International Nuclear Information System (INIS)

    Cassardo, C.; Carena, E.; Longhetto, A.

    1998-01-01

The Land Surface Process Model (LSPM) has been improved with respect to the first version of 1994. The modifications have involved the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that the new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments were the initial soil moisture content and the rainfall intensity. In each experiment, the model was forced with the observations carried out at the synoptic stations of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) were used as initial and boundary conditions. The results of the simulations show that LSPM can reproduce well the energy, heat and water budgets and their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type

  15. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. The satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell-averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993-2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150-346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  16. Subjective Experiences and Sensitivities in Women with Fibromyalgia: A Quantitative and Comparative Study

    Directory of Open Access Journals (Sweden)

    P. De Roa

    2018-01-01

Full Text Available Fibromyalgia is a chronic widespread pain syndrome associated with chronic fatigue. Its pathogenesis is not clearly understood. This study presents subjective experiences and sensitivities reported by fibromyalgia patients, which should be considered in primary care to avoid medical nomadism as well as stigmatization of the patients. The prevalence of significant characteristics was compared with that of other patients consulting at the same pain unit, who suffer from a refractory and disabling form of chronic migraine. Psychometric tests were anonymously completed by 78 patients of the Pain Unit (44 fibromyalgia patients and 34 migraine patients). The tests evaluated pain (Visual Analog Scale), childhood traumas (Childhood Trauma Questionnaire), lack of parental affection, stressful life events (Holmes and Rahe Scale), anxiety and depression (Hospital Anxiety and Depression Scale), perceived hypersensitivity to 10 stimuli, and hyperactivity before illness. Pain scores were comparable in the two groups, but the prevalence was significantly higher in fibromyalgia patients than in migraine patients for anxiety (81.8% versus 51.5%) and depression (57.1% versus 8.8%). Childhood physical abuse was more frequently reported in fibromyalgia than in migraine cases (25% versus 3%). Similarly, the feeling of a lack of parental affection, subjective hypersensitivity to stress and stimuli (cold, moisture, heat, full moon, and flavors), and hyperactivity (ergomania) appeared as prominent features of fibromyalgia patients. Fibromyalgia patients considered themselves hypersensitive (mentally and physically) compared to migraine patients. They also have higher depression levels. Beyond somatic symptoms, taking psychosocial and behavioral strategies into account early would greatly improve treatment efficiency for the fibromyalgia syndrome.

  17. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

effects and the one-at-a-time (O.A.T.) approach; and (ii), we applied Sobol's global sensitivity analysis method, which is based on variance decomposition. Results illustrate that ψm (maximum sorption rate of mobile colloids), kdmc (solute desorption rate from mobile colloids), and Ks (saturated hydraulic conductivity) are the most sensitive parameters with respect to the contaminant travel time. The analyses indicate that this new module is able to simulate colloid-facilitated contaminant transport. However, validations under laboratory conditions are needed to confirm the occurrence of the colloid transport phenomenon and to understand model predictions under non-saturated soil conditions. Future work will involve monitoring of the colloidal transport phenomenon through soil column experiments. The anticipated outcome will provide valuable information for understanding the dominant mechanisms responsible for colloidal transport, colloid-facilitated contaminant transport, and the impact of colloid detachment/deposition processes on soil hydraulic properties. References: Šimůnek, J., C. He, L. Pang, & S. A. Bradford, Colloid-Facilitated Solute Transport in Variably Saturated Porous Media: Numerical Model and Experimental Verification, Vadose Zone Journal, 2006, 5, 1035-1047; Šimůnek, J., M. Šejna, & M. Th. van Genuchten, The C-Ride Module for HYDRUS (2D/3D) Simulating Two-Dimensional Colloid-Facilitated Solute Transport in Variably-Saturated Porous Media, Version 1.0, PC Progress, Prague, Czech Republic, 45 pp., 2012.

  18. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity analysis was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and to execute the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced the computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would otherwise have been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures, as well as extending the framework to consider a whole-body PBPK model.
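
The Morris screening stage described above ranks parameters by "elementary effects" (one-factor-at-a-time steps along random trajectories) before the costlier Sobol stage is run on the survivors. An illustrative sketch on a toy function; the function, parameter count, and all names are assumptions for demonstration, not the GastroPlus/AutoIt/MATLAB framework itself:

```python
import numpy as np

def morris_screen(model, n_params, n_trajectories=50, delta=0.5, seed=2):
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = rng.uniform(0, 1 - delta, n_params)   # base point in the unit cube
        for i in rng.permutation(n_params):       # step each factor once, in random order
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((model(x_step) - model(x)) / delta)
            x = x_step                            # trajectory accumulates the steps
    ee = np.array(effects)
    # mu* (mean |elementary effect|) ranks influence;
    # sigma flags nonlinearity or interaction with other factors.
    return np.mean(np.abs(ee), axis=1), np.std(ee, axis=1)

# Toy surrogate: strong linear x0, interacting x1*x2, inert x3.
mu_star, sigma = morris_screen(lambda x: 5 * x[0] + x[1] * x[2], 4)
```

An inert factor like `x3` gets mu* of zero and would be dropped before the Sobol stage, which is where the computational savings of the 2-stage design come from.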

  19. Sensitivity of a complex urban air quality model to input data

    International Nuclear Information System (INIS)

    Seigneur, C.; Tesche, T.W.; Roth, P.M.; Reid, L.E.

    1981-01-01

In recent years, urban-scale photochemical simulation models have been developed that are of practical value for predicting air quality and analyzing the impacts of alternative emission control strategies. Although the performance of some urban-scale models appears to be acceptable, the demanding data requirements of such models have prompted concern about the costs of data acquisition, which might be high enough to preclude use of photochemical models for many urban areas. To explore this issue, sensitivity studies with the Systems Applications, Inc. (SAI) Airshed Model, a grid-based time-dependent photochemical dispersion model, have been carried out for the Los Angeles basin. Reductions in the amount and quality of meteorological, air quality and emission data, as well as modifications of the model gridded structure, have been analyzed. This paper presents and interprets the results of 22 sensitivity studies. A sensitivity-uncertainty index is defined to rank input data needs for an urban photochemical model. The index takes into account the sensitivity of model predictions to the amount of input data, the costs of data acquisition, and the uncertainties in the air quality model input variables. The results of these sensitivity studies are considered in light of the limitations of specific attributes of the Los Angeles basin and of the modeling conditions (e.g., choice of wind model, length of simulation time). The extent to which the results may be applied to other urban areas also is discussed

  20. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
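
The Latin hypercube sampling-partial rank correlation coefficient (LHS-PRCC) technique mentioned above measures the monotonic influence of each parameter on the output after removing the (rank-)linear effect of the other parameters. A toy sketch, assuming a simple additive test function rather than the paper's cholera or schistosomiasis transmission models:

```python
import numpy as np

def rank(v):
    # 0-based ranks (ties not handled; fine for continuous samples)
    return v.argsort().argsort().astype(float)

def prcc(X, y):
    R = np.column_stack([rank(c) for c in X.T])
    ry = rank(y)
    out = []
    for i in range(X.shape[1]):
        # Regress out the other rank-transformed inputs from both x_i and y,
        # then correlate the residuals.
        Z = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        res_x = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(3)
X = rng.uniform(size=(2000, 3))                               # stand-in for an LHS design
y = 4 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=2000)   # third input is inert
coeffs = prcc(X, y)
```

PRCC values near ±1 mark strongly monotonic influence (here the first two inputs), while the inert third input yields a coefficient near zero.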

  1. Validation of ASTEC v2.0 corium jet fragmentation model using FARO experiments

    International Nuclear Information System (INIS)

    Hermsmeyer, S.; Pla, P.; Sangiorgi, M.

    2015-01-01

Highlights: • Model validation base extended to six FARO experiments. • Focus on the calculation of the fragmented particle diameter. • Capability and limits of the ASTEC fragmentation model. • Sensitivity analysis of model outputs. - Abstract: ASTEC is an integral code for the prediction of Severe Accidents in Nuclear Power Plants. As such, it needs to cover all physical processes that could occur during accident progression, yet keep its models simple enough for the ensemble to stay manageable and produce results within an acceptable time. The present paper is concerned with the validation of the corium jet fragmentation model of ASTEC v2.0 rev3 by means of a selection of six experiments carried out in the FARO facility. The different conditions applied in these six experiments help to analyse the model behaviour in different situations and to expose model limits. In addition to comparing model outputs with experimental measurements, sensitivity analyses are applied to investigate the model. The results of the paper are (i) validation runs, accompanied by an identification of situations where the implemented fragmentation model does not match the experiments well, with a discussion of the results; (ii) particular attention to the models calculating the diameter of fragmented particles, the identification of a fault in one implemented model, and a discussion of a simplification and an ad hoc modification to improve the model fit; and (iii) an investigation of the sensitivity of predictions to inputs and parameters. In this way, the paper offers a thorough investigation of the merits and limitations of the fragmentation model used in ASTEC

  2. The sensitizing phenomenon of X-ray film in the experiment of metals loaded with deuterium

    International Nuclear Information System (INIS)

    Chen Suhe; Wang Dalun; Chen Wenjang; Li Yijun; Fu Yibei; Zhang Xinwei

    1993-01-01

    The sensitizing phenomenon of X-ray film was studied, in metals loaded with deuterium, by a cycle method of temperature and pressure (CMTP). The experimental results showed that the sensitization of X-ray film was derived from the chemical reaction and the anomalous effect of metals loaded with deuterium. (author)

  3. Evaluating Weather Research and Forecasting Model Sensitivity to Land and Soil Conditions Representative of Karst Landscapes

    Science.gov (United States)

    Johnson, Christopher M.; Fan, Xingang; Mahmood, Rezaul; Groves, Chris; Polk, Jason S.; Yan, Jun

    2018-03-01

Due to their particular physiographic, geomorphic, soil cover, and complex surface-subsurface hydrologic conditions, karst regions produce distinct land-atmosphere interactions. It has been found that floods and droughts over karst regions can be more pronounced than those in non-karst regions following a given rainfall event. Five convective weather events are simulated using the Weather Research and Forecasting model to explore the potential impacts of land-surface conditions on weather simulations over karst regions. Since no existing weather or climate model has the ability to represent karst landscapes, simulation experiments in this exploratory study consist of a control (default land-cover/soil types) and three land-surface conditions, including barren ground, forest, and sandy soils over the karst areas, which mimic certain karst characteristics. Results from sensitivity experiments are compared with the control simulation, as well as with the National Centers for Environmental Prediction multi-sensor precipitation analysis Stage-IV data, and near-surface atmospheric observations. Mesoscale features of surface energy partition, surface water and energy exchange, the resulting surface-air temperature and humidity, and low-level instability and convective energy are analyzed to investigate the potential land-surface impact on weather over karst regions. We conclude that: (1) barren ground used over karst regions has a pronounced effect on the overall simulation of precipitation. Barren ground provides the overall lowest root-mean-square errors and bias scores in precipitation over the peak-rain periods. Contingency table-based equitable threat and frequency bias scores suggest that the barren and forest experiments are more successful in simulating light to moderate rainfall. Variables dependent on local surface conditions show stronger contrasts between karst and non-karst regions than variables dominated by large-scale synoptic systems; (2) significant

  4. Sensitivity Analysis in Structural Equation Models: Cases and Their Influence

    Science.gov (United States)

    Pek, Jolynn; MacCallum, Robert C.

    2011-01-01

    The detection of outliers and influential observations is routine practice in linear regression. Despite ongoing extensions and development of case diagnostics in structural equation models (SEM), their application has received limited attention and understanding in practice. The use of case diagnostics informs analysts of the uncertainty of model…

  5. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    Science.gov (United States)

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  6. Towards a glacial‐sensitive model of island biogeography

    NARCIS (Netherlands)

    Fernández-Palacios, J.M.; Rijsdijk, K.F.; Norder, S.J.; Otto, R.; de Nascimento, L.; Fernández‐Lugo, S.; Tjørve, E.; Whittaker, R.J.

    2016-01-01

    Although the role that Pleistocene glacial cycles have played in shaping the present biota of oceanic islands world-wide has long been recognized, their geographical, biogeographical and ecological implications have not yet been fully incorporated within existing biogeographical models. Here we

  7. A global sensitivity analysis approach for morphogenesis models

    NARCIS (Netherlands)

    S.E.M. Boas (Sonja); M.I. Navarro Jimenez (Maria); R.M.H. Merks (Roeland); J.G. Blom (Joke)

    2015-01-01

Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the

  8. A duopoly model with heterogeneous congestion-sensitive customers.

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Timmer, J.

    2007-01-01

This paper analyzes a model with two firms (providers), and two classes of customers. These customers classes are characterized by their attitude towards ‘congestion’ (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the

  9. A duopoly model with heterogeneous congestion-sensitive customers

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Timmer, Judith B.

    This paper analyzes a model with two firms (providers), and two classes of customers. These customers classes are characterized by their attitude towards ‘congestion’ (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the firms, and

  10. A duopoly model with heterogeneous congestion-sensitive customers

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Timmer, Judith B.

    2003-01-01

    This paper analyzes a model with multiple firms (providers), and two classes of customers. These customers classes are characterized by their attitude towards `congestion' (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the firms,

  11. Modeling Li I and K I sensitivity to Pleiades activity

    NARCIS (Netherlands)

    Stuik, R.; Bruls, J.H.M.J.; Rutten, R.J.

    1996-01-01

We compare schematic modeling of spots and plage on the surface of cool dwarfs with Pleiades data to assess effects of magnetic activity on the strengths of the Li I and K I resonance lines in Pleiades spectra. Comprehensive Li I and K I NLTE line formation computation is combined with comparatively

  12. Experience modulates both aromatase activity and the sensitivity of agonistic behaviour to testosterone in black-headed gulls

    NARCIS (Netherlands)

    Ros, Albert F. H.; Franco, Aldina M. A.; Groothuis, Ton G. G.

    2009-01-01

    In young black-headed gulls (Larus ridibundus), exposure to testosterone increases the sensitivity of agonistic behaviour to a subsequent exposure to this hormone. The aim of this paper is twofold: to analyze whether social experience, gained during testosterone exposure, mediates this increase in

  13. Reflexive Positioning in a Politically Sensitive Situation: Dealing with the Threats of Researching the West Bank Settler Experience

    Science.gov (United States)

    Possick, Chaya

    2009-01-01

    For the past 7 years, the author has conducted qualitative research projects revolving around the experiences of West Bank settlers. The political situation in Israel in general, and the West Bank in particular, has undergone rapid and dramatic political, military, and social changes during this period. In highly politically sensitive situations…

  14. Eocene climate and Arctic paleobathymetry: A tectonic sensitivity study using GISS ModelE-R

    Science.gov (United States)

    Roberts, C. D.; Legrande, A. N.; Tripati, A. K.

    2009-12-01

The early Paleogene (65-45 million years ago, Ma) was a ‘greenhouse’ interval with global temperatures warmer than at any other time in the last 65 Ma. This period was characterized by high levels of CO2, warm high latitudes, warm surface and deep oceans, and an intensified hydrological cycle. Sediments from the Arctic suggest that the Eocene surface Arctic Ocean was warm and brackish, and episodically enabled the freshwater fern Azolla to bloom. The precise mechanisms responsible for the development of these conditions remain uncertain. We present equilibrium climate conditions derived from a fully-coupled, water-isotope enabled, general circulation model (GISS ModelE-R) configured for the early Eocene. We also present model-data comparison plots for key climatic variables (SST and δ18O) and analyses of the leading modes of variability in the tropical Pacific and North Atlantic regions. Our tectonic sensitivity study indicates that Northern Hemisphere climate would have been very sensitive to the degree of oceanic exchange through the seaways connecting the Arctic to the Atlantic and Tethys. By restricting these seaways, we simulate freshening of the surface Arctic Ocean to ~6 psu and warming of sea-surface temperatures by 2°C in the North Atlantic and 5-10°C in the Labrador Sea. Our results may help explain the occurrence of low-salinity tolerant taxa in the Arctic Ocean during the Eocene and provide a mechanism for enhanced warmth in the northwestern Atlantic. We also suggest that the formation of a volcanic land bridge between Greenland and Europe could have caused increased ocean convection and warming of intermediate waters in the Atlantic. If true, this result is consistent with the theory that bathymetry changes may have caused thermal destabilisation of methane clathrates in the Atlantic.

  15. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization

    Science.gov (United States)

    Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2016-01-01

    Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin sensitization responses for a set of 135 unique chemicals was low (R = 28-43%), although several chemical classes had high concordance. We succeeded in developing predictive QSAR models of all available human data with an external correct classification rate (CCR) of 71%. A consensus model integrating concordant QSAR predictions and LLNA results afforded a higher CCR of 82%, but at the expense of reduced external dataset coverage (52%). We used the developed QSAR models for virtual screening of the CosIng database and identified 1061 putative skin sensitizers; for seventeen of these compounds, we found published evidence of their skin sensitization effects. The models reported herein provide a more accurate alternative to LLNA testing for human skin sensitization assessment across diverse chemical data. In addition, they can also be used to guide the structural optimization of toxic compounds to reduce their skin sensitization potential. PMID:28630595
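The correct classification rate (CCR) quoted in this record is conventionally the balanced accuracy over the two classes. A minimal stdlib-only sketch (the function name and toy labels are illustrative, not from the paper):

```python
def ccr(y_true, y_pred):
    """Correct classification rate (balanced accuracy): the mean of
    sensitivity and specificity for binary labels, 1 = sensitizer,
    0 = non-sensitizer."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# toy labels: 3 of 4 sensitizers and 2 of 4 non-sensitizers classified correctly
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
print(ccr(y_true, y_pred))  # 0.5 * (0.75 + 0.5) = 0.625
```

Unlike plain accuracy, this measure is not inflated when, as here, the two classes are of unequal size.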

  16. Computer models versus reality: how well do in silico models currently predict the sensitization potential of a substance.

    Science.gov (United States)

    Teubner, Wera; Mehling, Anette; Schuster, Paul Xaver; Guth, Katharina; Worth, Andrew; Burton, Julien; van Ravenzwaay, Bennard; Landsiedel, Robert

    2013-12-01

    National legislations for the assessment of the skin sensitization potential of chemicals are increasingly based on the globally harmonized system (GHS). In this study, experimental data on 55 non-sensitizing and 45 sensitizing chemicals were evaluated according to GHS criteria and used to test the performance of computer (in silico) models for the prediction of skin sensitization. Statistical models (Vega, Case Ultra, TOPKAT), mechanistic models (Toxtree, OECD (Q)SAR toolbox, DEREK) and a hybrid model (TIMES-SS) were evaluated. Between three and nine of the substances evaluated were found in the individual training sets of the various models. Mechanism-based models performed better than statistical models, with predictivity depending on the stringency of the domain definition. The best performance was achieved by TIMES-SS, with perfect prediction, although only 16% of the substances were within its reliability domain. Some models offer modules for potency; however, predictions did not correlate well with the GHS sensitization subcategory derived from the experimental data. In conclusion, although mechanistic models can be used to a certain degree under well-defined conditions, at present the in silico models are not sufficiently accurate for broad application to predict skin sensitization potentials. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Along with the growth of computing capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical, repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn.
At the end, a recommendation
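The probabilistic global sensitivity analysis this record describes amounts to running the model for many sampled parameter combinations and decomposing the output variance. A minimal Monte Carlo sketch of first-order Sobol' indices using the pick-freeze (Saltelli) estimator, applied to a toy linear model rather than to any of the repository test models:

```python
import random

def sobol_first_order(model, dim, n=20_000, seed=1):
    """Monte Carlo estimate of first-order Sobol' indices for a model with
    independent Uniform(0, 1) inputs (Saltelli pick-freeze estimator)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # A with column i replaced by column i of B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(yb * (yab - ya) for yb, yab, ya in zip(yB, yABi, yA)) / (n * var)
        indices.append(s_i)
    return indices

# toy linear model Y = X1 + 2*X2; the analytic first-order indices are 0.2 and 0.8
s1, s2 = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2)
print(s1, s2)  # close to the analytic values 0.2 and 0.8
```

The estimator needs n*(dim + 2) model runs, which is why sample size and sampling scheme matter for the expensive repository models discussed above.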

  18. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Along with the growth of computing capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical, repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn.
At the end, a recommendation

  19. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    Directory of Open Access Journals (Sweden)

    C. Marrero

    2009-04-01

    In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained winds and intense gusts caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting Model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, definition or not of nested grids, horizontal resolution and number of vertical levels). The Factor Separation Method was applied in order to identify the major model sensitivity parameters under this unusual meteorological situation. Results, expressed as percentage changes relative to a control run, demonstrated that the boundary layer and surface layer schemes, horizontal resolution, the hydrostaticity option and nested grid activation were the model configuration parameters with the greatest impact on the 48 h maximum 10 m horizontal wind speed solution.
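The Factor Separation Method used in this record (Stein and Alpert) decomposes the combined effect of two model factors into pure and synergistic contributions from four switched simulations. A minimal sketch for the two-factor case, with hypothetical numbers rather than values from the study:

```python
def factor_separation(f0, f1, f2, f12):
    """Stein-Alpert factor separation for two factors: f0 = neither factor
    switched on, f1/f2 = only one factor on, f12 = both on. Returns the
    pure contribution of each factor and their synergistic interaction."""
    pure1 = f1 - f0
    pure2 = f2 - f0
    synergy = f12 - f1 - f2 + f0
    return pure1, pure2, synergy

# hypothetical 10 m wind speeds (m/s) from the four switched runs
print(factor_separation(10.0, 14.0, 12.0, 18.0))  # (4.0, 2.0, 2.0)
```

With n factors the method requires 2**n simulations, which is consistent with the fairly large set of 27 runs reported above.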

  20. A Methodology for Modeling Confined, Temperature Sensitive Cushioning Systems

    Science.gov (United States)

    1980-06-01

    thickness of cushion T, and temperature θ, and, as a dependent variable, G, the peak acceleration. The initial model, Equation (IV-11), proved deficient.

  1. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    Science.gov (United States)

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with residue depth, and to identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4 Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.
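The relative activities in this record are inferred from relative mutant populations in deep-sequencing read counts. One common way to express such a score, sketched here with hypothetical counts and not the authors' exact pipeline, is a log2 enrichment of a mutant versus wild type across selection:

```python
import math

def relative_activity(mut_sel, wt_sel, mut_in, wt_in):
    """Illustrative log2 enrichment score: the mutant's read-count frequency
    relative to wild type after selection, compared with the same ratio in
    the input library. 0 = behaves like WT; negative = depleted (inactive)."""
    return math.log2((mut_sel / wt_sel) / (mut_in / wt_in))

# hypothetical read counts, not data from the paper
print(relative_activity(500, 1000, 500, 1000))  # 0.0 (neutral mutant)
print(relative_activity(50, 800, 1000, 1000))   # -4.0 (strongly depleted)
```

Scores of this kind, computed for every mutant in the pool in one sequencing run, are what make the per-residue RankScore cheap to obtain.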

  2. Analysis of Sensitivity and Uncertainty in an Individual-Based Model of a Threatened Wildlife Species

    Science.gov (United States)

    We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...

  3. Sensitivity Analysis of an Agent-Based Model of Culture's Consequences for Trade

    NARCIS (Netherlands)

    Burgers, S.L.G.E.; Jonker, C.M.; Hofstede, G.J.; Verwaart, D.

    2010-01-01

    This paper describes the analysis of an agent-based model’s sensitivity to changes in parameters that describe the agents’ cultural background, relational parameters, and parameters of the decision functions. As agent-based models may be very sensitive to small changes in parameter values, it is of

  4. Predicted Infiltration for Sodic/Saline Soils from Reclaimed Coastal Areas: Sensitivity to Model Parameters

    Directory of Open Access Journals (Sweden)

    Dongdong Liu

    2014-01-01

    This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data were collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm³. A 1D infiltration model was created using a finite difference method and its sensitivity to hydraulic-related parameters was tested. The model reproduced the measured data well. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ0 was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity, since the latter was also affected by the physical-chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.
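A 1D finite-difference infiltration model of the kind this record describes can be sketched, in heavily simplified form, as an explicit scheme for water-content diffusion with a ponded (saturated) surface node. The diffusivity law and all parameter values below are illustrative assumptions, not the soils of the study:

```python
def infiltrate(n=50, depth=0.5, t_end=3600.0, dt=1.0,
               theta0=0.10, theta_s=0.45, D0=1e-6, p=4.0):
    """Explicit finite-difference sketch of 1D water-content diffusion,
    theta_t = d/dz( D(theta) * dtheta/dz ), on a column of `depth` metres.
    The surface node is held at saturation (ponded infiltration) and the
    bottom is zero-flux; D(theta) = D0*(theta/theta_s)**p is an assumed
    diffusivity law. Stability requires dt <= 0.5*dz**2/D0."""
    dz = depth / (n - 1)
    D = lambda th: D0 * (th / theta_s) ** p
    theta = [theta0] * n
    theta[0] = theta_s                              # ponded, saturated surface
    for _ in range(int(t_end / dt)):
        new = theta[:]
        for i in range(1, n - 1):
            Dp = 0.5 * (D(theta[i]) + D(theta[i + 1]))  # interface diffusivity
            Dm = 0.5 * (D(theta[i]) + D(theta[i - 1]))
            new[i] = theta[i] + dt / dz ** 2 * (
                Dp * (theta[i + 1] - theta[i]) - Dm * (theta[i] - theta[i - 1]))
        new[-1] = new[-2]                           # zero-flux bottom boundary
        theta = new
    return theta

# inspect the near-surface moisture profile after one simulated hour
profile = infiltrate()
print(profile[:5])
```

Raising the initial water content theta0 or the surface condition in this sketch changes the wetting-front advance in the same qualitative way the record reports for the column experiments.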

  5. Sensitivity study of cloud/radiation interaction using a second order turbulence radiative-convective model

    International Nuclear Information System (INIS)

    Kao, C.Y.J.; Smith, W.S.

    1993-01-01

    A high-resolution one-dimensional version of a second-order turbulence convective/radiative model, developed at the Los Alamos National Laboratory, was used to conduct a sensitivity study of a stratocumulus cloud deck, based on data taken at San Nicolas Island during the intensive field observation marine stratocumulus phase of the First International Satellite Cloud Climatology Program (ISCCP) Regional Experiment (FIRE IFO), conducted during July 1987. Initial profiles for liquid water potential temperature and total water mixing ratio were abstracted from the FIRE data. The dependence of the diurnal behavior in liquid water content, cloud top height, and cloud base height was examined for variations in subsidence rate, sea surface temperature, and initial inversion strength. The modelled diurnal variation in the column-integrated liquid water agrees quite well with the observed data for the case of low subsidence. The modelled diurnal behavior of the cloud top and base heights shows qualitative agreement with the FIRE data, although the overall height of the cloud layer is about 200 meters too high.

  6. Predicted infiltration for sodic/saline soils from reclaimed coastal areas: sensitivity to model parameters.

    Science.gov (United States)

    Liu, Dongdong; She, Dongli; Yu, Shuang'en; Shao, Guangcheng; Chen, Dan

    2014-01-01

    This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data were collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm³. A 1D infiltration model was created using a finite difference method and its sensitivity to hydraulic-related parameters was tested. The model reproduced the measured data well. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ₀ was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity, since the latter was also affected by the physical-chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.

  7. Sleep fragmentation exacerbates mechanical hypersensitivity and alters subsequent sleep-wake behavior in a mouse model of musculoskeletal sensitization.

    Science.gov (United States)

    Sutton, Blair C; Opp, Mark R

    2014-03-01

    Sleep deprivation, or sleep disruption, enhances pain in human subjects. Chronic musculoskeletal pain is prevalent in our society, and constitutes a tremendous public health burden. Although preclinical models of neuropathic and inflammatory pain demonstrate effects on sleep, few studies focus on musculoskeletal pain. We reported elsewhere in this issue of SLEEP that musculoskeletal sensitization alters sleep of mice. In this study we hypothesize that sleep fragmentation during the development of musculoskeletal sensitization will exacerbate subsequent pain responses and alter sleep-wake behavior of mice. This is a preclinical study using C57BL/6J mice to determine the effect on behavioral outcomes of sleep fragmentation combined with musculoskeletal sensitization. Musculoskeletal sensitization, a model of chronic muscle pain, was induced using two unilateral injections of acidified saline (pH 4.0) into the gastrocnemius muscle, spaced 5 days apart. Musculoskeletal sensitization manifests as mechanical hypersensitivity determined by von Frey filament testing at the hindpaws. Sleep fragmentation took place during the consecutive 12-h light periods of the 5 days between intramuscular injections. Electroencephalogram (EEG) and body temperature were recorded from some mice at baseline and for 3 weeks after musculoskeletal sensitization. Mechanical hypersensitivity was determined at preinjection baseline and on days 1, 3, 7, 14, and 21 after sensitization. Two additional experiments were conducted to determine the independent effects of sleep fragmentation or musculoskeletal sensitization on mechanical hypersensitivity. Five days of sleep fragmentation alone did not induce mechanical hypersensitivity, whereas sleep fragmentation combined with musculoskeletal sensitization resulted in prolonged and exacerbated mechanical hypersensitivity. 
Sleep fragmentation combined with musculoskeletal sensitization had an effect on subsequent sleep of mice as demonstrated by increased

  8. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Science.gov (United States)

    Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda

    2016-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102

  9. The influence of cirrus cloud-radiative forcing on climate and climate sensitivity in a general circulation model

    International Nuclear Information System (INIS)

    Lohmann, U.; Roeckner, E.

    1994-01-01

    Six numerical experiments have been performed with a general circulation model (GCM) to study the influence of high-level cirrus clouds and global sea surface temperature (SST) perturbations on climate and climate sensitivity. The GCM used in this investigation is the third-generation ECHAM3 model developed jointly by the Max-Planck-Institute for Meteorology and the University of Hamburg. It is shown that the model is able to reproduce many features of the observed cloud-radiative forcing with considerable skill, such as the annual mean distribution, the response to seasonal forcing and the response to observed SST variations in the equatorial Pacific. In addition to a reference experiment where the cirrus emissivity is computed as a function of the cloud water content, two sensitivity experiments have been performed in which the cirrus emissivity is either set to zero everywhere above 400 hPa ('transparent cirrus') or set to one ('black cirrus'). These three experiments are repeated identically, except for prescribing a globally uniform SST warming of 4 K. (orig.)

  10. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    Science.gov (United States)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a
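The local sensitivity indices this record contrasts with the global, variance-based index linearize the model around a nominal point. A common normalized form, sketched here with a toy model rather than the ice-sheet code, is S_i = (∂Y/∂x_i) · x_i / Y, with the partial derivative estimated by central differences:

```python
def local_sensitivity(model, x0, i, rel_step=1e-4):
    """Normalized local sensitivity index S_i = (dY/dx_i) * x_i / Y,
    with the partial derivative estimated by a central finite difference
    around the nominal parameter vector x0. Valid only near x0: joint and
    strongly non-linear parameter effects are neglected."""
    h = rel_step * x0[i]
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    dy_dxi = (model(xp) - model(xm)) / (2 * h)
    return dy_dxi * x0[i] / model(x0)

# toy model Y = a**2 * b has exact indices S_a = 2 and S_b = 1
f = lambda x: x[0] ** 2 * x[1]
print(local_sensitivity(f, [3.0, 5.0], 0))  # ≈ 2.0
print(local_sensitivity(f, [3.0, 5.0], 1))  # ≈ 1.0
```

Each index costs only two extra model runs, which is why local indices remain attractive for expensive simulations despite the limitations noted in the abstract.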

  11. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  12. Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    Jianbin Guo

    2014-07-01

    Global sensitivity is used to quantify the influence of uncertain model inputs on the output variability of static models in general. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned. The reason is that the static sensitivity may not reflect the complete sensitivity during the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol’ indices are employed as the time-dependent global sensitivity measure since they provide accurate information on the selected uncertain inputs. In order to compute Sobol’ indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Sobol’ indices can then be calculated analytically as a postprocessing of the time-dependent PCE coefficients at almost no additional cost. A test case is used to show how to conduct the proposed method; the approach is then applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.
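The convenience this record exploits is that, for an orthonormal PCE, Sobol' indices follow analytically from the expansion coefficients: the output variance is the sum of squared non-constant coefficients, and a first-order index collects the terms involving only one variable. A minimal sketch (the multi-index bookkeeping and toy coefficients are illustrative):

```python
def sobol_from_pce(coeffs):
    """First-order Sobol' indices from an orthonormal PCE. `coeffs` maps
    multi-indices (tuples of per-variable polynomial degrees) to expansion
    coefficients; the variance is the sum of squared non-constant
    coefficients, so each index is a simple ratio of such sums."""
    dim = len(next(iter(coeffs)))
    var = sum(c * c for idx, c in coeffs.items() if any(idx))
    indices = []
    for i in range(dim):
        v_i = sum(c * c for idx, c in coeffs.items()
                  if idx[i] > 0 and all(d == 0 for j, d in enumerate(idx) if j != i))
        indices.append(v_i / var)
    return indices

# toy 2-variable expansion: y = 1 + 2*P1(x1) + 1*P1(x2) + 0.5*P1(x1)*P1(x2)
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 0.5}
print(sobol_from_pce(coeffs))  # [4/5.25, 1/5.25] ≈ [0.762, 0.190]
```

In the time-dependent setting of the paper, the same postprocessing is simply repeated for the coefficient set obtained at each time instant, which is why it adds almost no cost.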

  13. Sensitivity to plant modelling uncertainties in optimal feedback control of sound radiation from a panel

    DEFF Research Database (Denmark)

    Mørkholt, Jakob

    1997-01-01

    Optimal feedback control of broadband sound radiation from a rectangular baffled panel has been investigated through computer simulations. Special emphasis has been put on the sensitivity of the optimal feedback control to uncertainties in the modelling of the system under control. A model ... in terms of a set of radiation filters modelling the radiation dynamics. Linear quadratic feedback control applied to the panel in order to minimise the radiated sound power has then been simulated. The sensitivity of the model-based controller to modelling uncertainties when using feedback from actual...

  14. The Psychological Essence of the Child Prodigy Phenomenon: Sensitive Periods and Cognitive Experience.

    Science.gov (United States)

    Shavinina, Larisa V.

    1999-01-01

    Examination of the child prodigy phenomenon suggests it is a result of extremely accelerated mental development during sensitive periods that leads to the rapid growth of a child's cognitive resources and their construction into specific exceptional achievements. (Author/DB)

  15. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    International Nuclear Information System (INIS)

    Alhossen, I; Bugarin, F; Segonds, S; Villeneuve-Faure, C; Baudoin, F

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC. (paper)

  16. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    Science.gov (United States)

    Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.

  17. Photo-induced charge transfer at heterogeneous interfaces: Dye-sensitized tin disulfide, the theory and the experiment

    International Nuclear Information System (INIS)

    Lanzafame, J.M.

    1993-01-01

    The study of photo-induced charge transfer is an endeavor that spans the entire industrial period of man's history. Its great importance demands an ever greater understanding of its underlying principles. The work discussed here attempts to probe elementary aspects of the charge transfer process. Investigations into the theory of charge transfer reactions are made in an attempt to isolate the relevant parameters. An analytical discussion is made of a simple Golden Rule type rate equation to describe the transfer kinetics. Then a quantum simulation is carried out to follow the wavefunction propagation as a test of the applicability of the assumptions made in deriving the simpler rate equation. Investigation of charge transfer at surfaces is best served by the application of ultrafast optical spectroscopies to probe carrier dynamics. A discussion of the properties of the short pulse laser systems employed is included along with a discussion of the different optical spectroscopies available. These tools are then brought to bear upon dye-sensitized SnS₂, a model system for the study of charge injection processes. The unique properties of the semiconductor are discussed with respect to the charge transfer process. The optical experiments performed on the dye/SnS₂ systems elucidate the fundamental carrier dynamics, and these dynamics are discussed within the theoretical framework to provide a complete picture of the charge transfer kinetics.

  18. Application of pressure-sensitive paint in shock-boundary layer interaction experiments

    OpenAIRE

    Seivwright, Douglas L.

    1996-01-01

    Approved for public release; distribution is unlimited. A new type of pressure transducer, pressure-sensitive paint, was used to obtain pressure distributions associated with shock-boundary layer interaction. Based on the principle of photoluminescence and the process of oxygen quenching, pressure-sensitive paint provides a continuous mapping of a pressure field over a surface of interest. The data measurement and acquisition system developed for use with the photoluminescence sensor was eva...

  19. A pain in the bud? Implications of cross-modal sensitivity for pain experience.

    Science.gov (United States)

    Perkins, Monica; de Bruyne, Marien; Giummarra, Melita J

    2016-11-01

    There is growing evidence that enhanced sensitivity to painful clinical procedures and chronic pain are related to greater sensitivity to other sensory inputs, such as bitter taste. We examined cross-modal sensitivities in two studies. Study 1 assessed associations between bitter taste sensitivity, pain tolerance, and fear of pain in 48 healthy young adults. Participants were classified as non-tasters, tasters and super-tasters using a bitter taste test (6-n-propylthiouracil; PROP). The latter group had significantly higher fear of pain (Fear of Pain Questionnaire) than tasters (p=.036, effect size r = .48). There was only a trend for an association between bitter taste intensity ratings and intensity of pain at the point of pain tolerance in a cold pressor test (p=.04). In Study 2, 40 healthy young adults completed the Adolescent/Adult Sensory Profile before rating intensity and unpleasantness of innocuous (33 °C), moderate (41 °C), and high intensity (44 °C) thermal pain stimulations. The sensory-sensitivity subscale was positively correlated with both intensity and unpleasantness ratings. Canonical correlation showed that only sensitivity to audition and touch (not taste/smell) were associated with intensity of moderate and high (not innocuous) thermal stimuli. Together these findings suggest that there are cross-modal associations predominantly between sensitivity to exteroceptive inputs (i.e., taste, touch, sound) and the affective dimensions of pain, including noxious heat and intolerable cold pain, in healthy adults. These cross-modal sensitivities may arise due to greater psychological aversion to salient sensations, or from shared neural circuitry for processing disparate sensory modalities.

  20. Experiments and modeling of single plastic particle conversion in suspension

    DEFF Research Database (Denmark)

    Nakhaei, Mohammadhadi; Wu, Hao; Grévain, Damien

    2018-01-01

    Conversion of single high density polyethylene (PE) particles has been studied by experiments and modeling. The experiments were carried out in a single particle combustor for five different shapes and masses of particles at temperature conditions of 900 and 1100°C. Each experiment was recorded...... against the experiments as well as literature data. Furthermore, a simplified isothermal model appropriate for CFD applications was developed, in order to model the combustion of plastic particles in cement calciners. By comparing predictions with the isothermal and the non–isothermal models under typical...

  1. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja; Navarro, Marí a; Merks, Roeland; Blom, Joke

    2015-01-01

    ) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided

  2. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO 2 and mixed-oxide (MOX) powder systems. The study examined three PuO 2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO 2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or TSUNAMI three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO 2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected.
The TSUNAMI analysis
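    TSUNAMI-IP summarizes the similarity between an application and a benchmark as a correlation coefficient computed from their energy-dependent sensitivity profiles together with cross-section covariance data. A heavily simplified sketch, assuming a diagonal relative-covariance vector rather than SCALE's full nuclide-reaction covariance matrices (all numeric values are illustrative):

```python
import math

def ck(sens_app, sens_exp, rel_var):
    """Uncertainty-weighted correlation of two energy-dependent sensitivity
    profiles. Simplification: rel_var is treated as a diagonal relative
    covariance; the real TSUNAMI-IP c_k uses full covariance matrices
    spanning all nuclide-reaction pairs."""
    num = sum(a * v * e for a, v, e in zip(sens_app, rel_var, sens_exp))
    den_a = math.sqrt(sum(a * a * v for a, v in zip(sens_app, rel_var)))
    den_e = math.sqrt(sum(e * e * v for e, v in zip(sens_exp, rel_var)))
    return num / (den_a * den_e)

cov = [0.1, 0.1, 0.1]            # assumed relative variances per energy group
application = [0.2, 0.5, 0.1]    # illustrative k-eff sensitivities
benchmark = [0.2, 0.5, 0.1]      # an experiment with a matching profile
similarity = ck(application, benchmark, cov)
```

    A coefficient near 1 flags a benchmark as within the area of applicability; a dissimilar profile drives the coefficient toward 0.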

  3. Twentieth century Walker Circulation change: data analysis and model experiments

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Qingjia [Leibniz-Institut fuer Meereswissenschaften, Kiel (Germany); Chinese Research Academy of Environmental Sciences, River and Coastal Environment Research Center, Beijing (China); Chinese Academy of Sciences, Key Laboratory of Ocean Circulation and Waves, Institute of Oceanology, Qingdao (China); Latif, Mojib; Park, Wonsun; Keenlyside, Noel S.; Martin, Thomas [Leibniz-Institut fuer Meereswissenschaften, Kiel (Germany); Semenov, Vladimir A. [Leibniz-Institut fuer Meereswissenschaften, Kiel (Germany); A.M. Obukhov Institute of Atmospheric Physics, Russian Academy of Sciences, Moscow (Russian Federation)

    2012-05-15

    Recent studies indicate a weakening of the Walker Circulation during the twentieth century. Here, we present evidence from an atmospheric general circulation model (AGCM) forced by the history of observed sea surface temperature (SST) that the Walker Circulation may have intensified rather than weakened. Observed Equatorial Indo-Pacific Sector SST since 1870 exhibited a zonally asymmetric evolution: While the eastern part of the Equatorial Pacific showed only a weak warming, or even cooling in one SST dataset, the western part and the Equatorial Indian Ocean exhibited a rather strong warming. This has resulted in an increase of the SST gradient between the Maritime Continent and the eastern part of the Equatorial Pacific, one driving force of the Walker Circulation. The ensemble experiments with the AGCM, with and without time-varying external forcing, suggest that the enhancement of the SST gradient drove an anomalous atmospheric circulation, with an enhancement of both Walker and Hadley Circulation. Anomalously strong precipitation is simulated over the Indian Ocean and anomalously weak precipitation over the western Pacific, with corresponding changes in the surface wind pattern. Some sensitivity to the forcing SST, however, is noticed. The analysis of twentieth century integrations with global climate models driven with observed radiative forcing obtained from the Coupled Model Intercomparison Project (CMIP) database supports the link between the SST gradient and Walker Circulation strength. Furthermore, control integrations with the CMIP models indicate the existence of strong internal variability on centennial timescales. The results suggest that a radiatively forced signal in the Walker Circulation during the twentieth century may have been too weak to be detectable. (orig.)

  4. Experiments and Modelling of Coal Devolatilization

    Institute of Scientific and Technical Information of China (English)

    Qiu Kuanrong; Liu Qianxin

    1994-01-01

    The coal devolatilization process of different coals was studied by means of the thermogravimetric analysis method. The experimental results and the kinetic parameters of devolatilization, K and E, have been obtained. A mathematical model for coal devolatilization has been proposed; the model is simple and practical. The predictions of the model are shown to be in agreement with experimental results.

  5. Operational experience with model-based steering in the SLC linac

    International Nuclear Information System (INIS)

    Thompson, K.A.; Himel, T.; Moore, S.; Sanchez-Chopitea, L.; Shoaee, H.

    1989-03-01

    Operational experience with model-driven steering in the linac of the Stanford Linear Collider is discussed. Important issues include two-beam steering, sensitivity of algorithms to faulty components, sources of disagreement with the model, and the effects of the finite resolution of beam position monitors. Methods developed to make the steering algorithms more robust in the presence of such complications are also presented. 5 refs., 1 fig

  6. Stability and Sensitive Analysis of a Model with Delay Quorum Sensing

    Directory of Open Access Journals (Sweden)

    Zhonghua Zhang

    2015-01-01

    Full Text Available This paper formulates a delay model characterizing the competition between bacteria and immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the growth rate of bacteria is the most sensitive parameter of the threshold parameter R0 and should be targeted in the controlling strategies.
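    The sensitivity of the threshold parameter R0 to a model parameter is typically reported as a normalized (elasticity-type) index, (dR0/dp)(p/R0). A sketch of that index with an assumed generic R0 expression, a stand-in for the paper's delay model:

```python
def elasticity(r0_fn, params, name, h=1e-6):
    """Normalized sensitivity (elasticity) of R0 to one parameter:
    (dR0/dp) * (p / R0), estimated by a forward finite difference."""
    base = r0_fn(params)
    bumped = dict(params)
    bumped[name] *= (1.0 + h)
    dr0 = (r0_fn(bumped) - base) / (params[name] * h)
    return dr0 * params[name] / base

# Assumed generic threshold parameter, not the paper's delay model:
# R0 rises linearly with the bacterial growth rate, so its elasticity is 1.
r0 = lambda p: p["growth"] / p["clearance"]
e_growth = elasticity(r0, {"growth": 2.0, "clearance": 0.5}, "growth")
```

    A parameter whose elasticity has the largest magnitude, here the growth rate, is the natural target for control strategies.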

  7. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization

    OpenAIRE

    Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2016-01-01

    Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin ...

  8. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
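    The sensitivity time series described here come from perturbing a forcing input and measuring the change in model response per unit perturbation. A minimal sketch: the moving-window-average feature follows the record's input construction, but the linear response standing in for a trained MWA-ANN is an assumption made for illustration.

```python
def mwa(series, window):
    """Trailing moving-window average, the input-feature construction
    described in the record."""
    return [sum(series[max(0, i - window + 1): i + 1]) / min(i + 1, window)
            for i in range(len(series))]

def sensitivity(model, forcing, delta=1.0):
    """Finite-difference sensitivity: change in response per unit change
    in the forcing, computed time step by time step."""
    base = model(forcing)
    pert = model([x + delta for x in forcing])
    return [(p - b) / delta for p, b in zip(pert, base)]

# Assumed linear stand-in for a trained MWA-ANN: response driven by a
# 3-step rainfall moving average (a real ANN would be nonlinear).
model = lambda rain: [2.0 * v for v in mwa(rain, 3)]
sens = sensitivity(model, [1.0, 4.0, 2.0, 5.0, 3.0])
```

    For a nonlinear trained network the same procedure yields a sensitivity that varies over time, which is exactly the drought-dependent behavior the study reports.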

  9. Some Experiences with Numerical Modelling of Overflows

    DEFF Research Database (Denmark)

    Larsen, Torben; Nielsen, L.; Jensen, B.

    2007-01-01

    across the edge of the overflow. To ensure critical flow across the edge, the upstream flow must be subcritical whereas the downstream flow is either supercritical or a free jet. Experimentally overflows are well studied. Based on laboratory experiments and Froude number scaling, numerous accurate...

  10. Quality of experience models for multimedia streaming

    NARCIS (Netherlands)

    Menkovski, V.; Exarchakos, G.; Liotta, A.; Cuadra Sánchez, A.

    2010-01-01

    Understanding how quality is perceived by viewers of multimedia streaming services is essential for efficient management of those services. Quality of Experience (QoE) is a subjective metric that quantifies the perceived quality, which is crucial in the process of optimizing tradeoff between quality

  11. A Cross-Discipline Modeling Capstone Experience

    Science.gov (United States)

    Frazier, Marian L.; LoFaro, Thomas; Pillers Dobler, Carolyn

    2018-01-01

    The Mathematical Association of America (MAA) and the American Statistical Association (ASA) have both updated and revised their curriculum guidelines. The guidelines of both associations recommend that students engage in a "capstone" experience, be exposed to applications, and have opportunities to communicate mathematical and…

  12. INFLUENCE OF MODIFIED BIOFLAVONOIDS UPON EFFECTOR LYMPHOCYTES IN MURINE MODEL OF CONTACT SENSITIVITY

    Directory of Open Access Journals (Sweden)

    D. Z. Albegova

    2015-01-01

    Full Text Available Contact sensitivity reaction (CSR) to 2,4-dinitrofluorobenzene (DNFB) in mice is a model of in vivo immune response, an experimental analogue of contact dermatitis in humans. The CSR sensitization phase begins after primary contact with antigen, lasting for 10-15 days in humans and 5-7 days in mice. Repeated skin exposure to the sensitizing substance leads to its recognition and triggers immune inflammatory mechanisms involving DNFB-specific effector T lymphocytes. The CSR reaches its maximum 18-48 hours after re-exposure to the hapten. There is only scarce information in the literature about the effects of flavonoids on CSR, covering both stimulatory and inhibitory effects; flavonoids have shown predominantly suppressive effects on CSR development. In our laboratory, a model of contact sensitivity was reproduced in CBA mice by means of cutaneous sensitization with 2,4-dinitrofluorobenzene. The aim of the study was to identify the mechanisms of the immunomodulatory action of quercetin dihydrate and modified bioflavonoids, using the method of adoptive transfer of contact sensitivity by splenocytes and T-lymphocytes. As shown in our studies, a 30-min pretreatment of splenocytes and T-lymphocytes from sensitized mice with modified bioflavonoids before cell transfer completely prevented the contact sensitivity reaction in syngeneic recipient mice. Meanwhile, this effect was not associated with cell death induction via apoptosis or cytotoxicity. Quercetin dihydrate only partially suppressed the activity of the adoptively transferred T-lymphocytes, the contact sensitivity effectors. It was shown that the modified bioflavonoid suppresses adoptive transfer of contact sensitivity more strongly than quercetin dihydrate, without inducing apoptosis of effector cells.
Thus, the modified bioflavonoid is a promising compound for further studies in a model of contact sensitivity, due to its higher ability to suppress transfer of CSR with

  13. Modelling the impact of implementing Water Sensitive Urban Design on a catchment scale

    DEFF Research Database (Denmark)

    Locatelli, Luca; Gabriel, S.; Bockhorn, Britta

    Stormwater management using Water Sensitive Urban Design (WSUD) is expected to be part of future drainage systems. This project aimed to develop a set of hydraulic models of the Harrestrup Å catchment (close to Copenhagen) in order to demonstrate the importance of modeling WSUDs at different scales......, ranging from models of an individual soakaway up to models of a large urban catchment. The models were developed in Mike Urban with a new integrated soakaway model. A small-scale individual soakaway model was used to determine appropriate initial conditions for soakaway models. This model was applied...

  14. Analytical Modeling Tool for Design of Hydrocarbon Sensitive Optical Fibers

    Directory of Open Access Journals (Sweden)

    Khalil Al Handawi

    2017-09-01

    Full Text Available Pipelines are the main transportation means for oil and gas products across large distances. Due to the severe conditions they operate in, they are regularly inspected using conventional Pipeline Inspection Gages (PIGs) for corrosion damage. The motivation for researching a real-time distributed monitoring solution arose to mitigate costs and provide a proactive indication of potential failures. Fiber optic sensors with polymer claddings provide a means of detecting contact with hydrocarbons. By coating the fibers with a layer of metal similar in composition to that of the parent pipeline, corrosion of this coating may be detected when the polymer cladding underneath is exposed to the surrounding hydrocarbons contained within the pipeline. A Refractive Index (RI) change occurs in the polymer cladding causing a loss in intensity of a traveling light pulse due to a reduction in the fiber’s modal capacity. Intensity losses may be detected using Optical Time Domain Reflectometry (OTDR) while pinpointing the spatial location of the contact via time delay calculations of the back-scattered pulses. This work presents a theoretical model for the above sensing solution to provide a design tool for the fiber optic cable in the context of hydrocarbon sensing following corrosion of an external metal coating. Results are verified against the experimental data published in the literature.

  15. Analytical Modeling Tool for Design of Hydrocarbon Sensitive Optical Fibers.

    Science.gov (United States)

    Al Handawi, Khalil; Vahdati, Nader; Shiryayev, Oleg; Lawand, Lydia

    2017-09-28

    Pipelines are the main transportation means for oil and gas products across large distances. Due to the severe conditions they operate in, they are regularly inspected using conventional Pipeline Inspection Gages (PIGs) for corrosion damage. The motivation for researching a real-time distributed monitoring solution arose to mitigate costs and provide a proactive indication of potential failures. Fiber optic sensors with polymer claddings provide a means of detecting contact with hydrocarbons. By coating the fibers with a layer of metal similar in composition to that of the parent pipeline, corrosion of this coating may be detected when the polymer cladding underneath is exposed to the surrounding hydrocarbons contained within the pipeline. A Refractive Index (RI) change occurs in the polymer cladding causing a loss in intensity of a traveling light pulse due to a reduction in the fiber's modal capacity. Intensity losses may be detected using Optical Time Domain Reflectometry (OTDR) while pinpointing the spatial location of the contact via time delay calculations of the back-scattered pulses. This work presents a theoretical model for the above sensing solution to provide a design tool for the fiber optic cable in the context of hydrocarbon sensing following corrosion of an external metal coating. Results are verified against the experimental data published in the literature.
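    Pinpointing the contact location from back-scattered pulse delays, as both records describe, is a time-of-flight calculation: the event distance equals half the round-trip delay times the group velocity of light in the fiber. A sketch, where the group index of 1.468 is an assumed typical value for silica fiber, not a figure from the paper:

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def event_location(round_trip_s, group_index=1.468):
    """Distance along the fiber to the sensed event, computed from the
    OTDR round-trip delay of the back-scattered pulse."""
    return C_VACUUM / group_index * round_trip_s / 2.0

d = event_location(10e-6)  # a 10 microsecond round trip: event near 1 km
```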

  16. Mathematical Modeling: Are Prior Experiences Important?

    Science.gov (United States)

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

  17. Towards Generic Models of Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Shaker, Mohammad; Abou-Zleikha, Mohamed

    2015-01-01

    Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. Most of the models used are context-dependent and their applicab...

  18. Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.

    Science.gov (United States)

    Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J

    2014-01-01

    A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight into the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.

  19. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
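    DELSA evaluates, at many points spread across parameter space, the fraction of local response variance attributable to each parameter. A sketch of the local index at a single parameter set; the toy linear model is illustrative, not one of the paper's hydrologic models:

```python
def delsa_local(model, theta, prior_var, h=1e-6):
    """DELSA-style local first-order indices at one parameter set:
    S_j = g_j^2 * var_j / sum_k (g_k^2 * var_k), where g_j is the local
    finite-difference gradient of the output w.r.t. parameter j."""
    grads = []
    for j in range(len(theta)):
        up = list(theta)
        up[j] += h
        grads.append((model(up) - model(theta)) / h)
    contrib = [g * g * v for g, v in zip(grads, prior_var)]
    total = sum(contrib)
    return [c / total for c in contrib]

# DELSA repeats this at many sample points; here a single point of an
# assumed toy model in which the second parameter dominates.
S = delsa_local(lambda t: t[0] + 10.0 * t[1], theta=[0.5, 0.5], prior_var=[1.0, 1.0])
```

    Mapping how these local indices vary from point to point is what distinguishes DELSA from a single global Sobol' summary, at far fewer model runs.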

  20. Cough reflex sensitivity is increased in the guinea pig model of allergic rhinitis.

    Science.gov (United States)

    Brozmanova, M; Plevkova, J; Tatar, M; Kollarik, M

    2008-12-01

    Increased cough reflex sensitivity is found in patients with allergic rhinitis and may contribute to cough caused by rhinitis. We have reported that cough to citric acid is enhanced in the guinea pig model of allergic rhinitis. Here we address the hypothesis that the cough reflex sensitivity is increased in this model. The data from our previous studies were analyzed for the cough reflex sensitivity. The allergic inflammation in the nose was induced by repeated intranasal instillations of ovalbumin in the ovalbumin-sensitized guinea pigs. Cough was induced by inhalation of doubling concentrations of citric acid (0.05-1.6 M). Cough threshold was defined as the lowest concentration of citric acid causing two coughs (C2, expressed as geometric mean [95% confidence interval]). We found that the cough threshold was reduced in animals with allergic rhinitis. C2 was 0.5 M [0.36-0.71 M] and 0.15 M [0.1-0.23 M] prior and after repeated intranasal instillations of ovalbumin, respectively, Preflex sensitivity. C2 was reduced in animals with allergic rhinitis treated orally with vehicle (0.57 M [0.28-1.1] vs. 0.09 M [0.04-0.2 M], Preflex sensitivity is increased in the guinea pig model of allergic rhinitis. Our results suggest that guinea pig is a suitable model for mechanistic studies of increased cough reflex sensitivity in rhinitis.
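    Because the citric acid concentrations double at each step, thresholds are analyzed on a log scale, which is why C2 is reported as a geometric mean with a 95% confidence interval. A sketch of that summary statistic; the example thresholds are illustrative, not the study's data:

```python
import math

def geometric_mean_ci(values, z=1.96):
    """Geometric mean with an approximate 95% confidence interval
    (normal theory applied on the log scale)."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    m = sum(logs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in logs) / (n - 1))
    half = z * sd / math.sqrt(n)
    return math.exp(m), (math.exp(m - half), math.exp(m + half))

# Illustrative C2 thresholds (M) from a doubling concentration series.
gm, (lo, hi) = geometric_mean_ci([0.1, 0.2, 0.4, 0.8])
```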

  1. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Directory of Open Access Journals (Sweden)

    Hayden J. R. Woodley

    2016-01-01

    Full Text Available The construct of equity sensitivity describes an individual’s preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called Benevolents. Individuals low on equity sensitivity are more outcome oriented, and are described as Entitleds. Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model’s dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  2. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
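    The approach described pairs a meta-model sensitivity-index estimate with a bootstrap confidence interval to expose the estimation error that simpler workflows ignore. A simplified sketch: a plain R-squared stands in for the index (the paper's indices come from nonparametric meta-models), with a percentile bootstrap for the interval.

```python
import random

def r_squared(x, y):
    """Squared Pearson correlation, standing in for a sensitivity index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval on the index estimate."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(r_squared([x[i] for i in idx], [y[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Synthetic input/output sample with a strong, slightly noisy relationship.
rng = random.Random(0)
xs = [rng.random() for _ in range(200)]
ys = [3.0 * v + 0.1 * rng.random() for v in xs]
lo, hi = bootstrap_ci(xs, ys)
```

    Reporting the interval (lo, hi) rather than a single point estimate is the key practice the record advocates when each model run is expensive.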

  3. Illustrating sensitivity in environmental fate models using partitioning maps - application to selected contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, T.; Wania, F. [Univ. of Toronto at Scarborough - DPES, Toronto (Canada)

    2004-09-15

    Generic environmental multimedia fate models are important tools in the assessment of the impact of organic pollutants. Because of limited possibilities to evaluate generic models by comparison with measured data, and the increasing regulatory use of such models, uncertainties of model input and output are of considerable concern. This has led to a demand for sensitivity and uncertainty analyses of the outputs of environmental fate models. Usually, variations of model predictions of the environmental fate of organic contaminants are analyzed for only one or at most a few selected chemicals, even though parameter sensitivity and contribution to uncertainty differ widely between chemicals. We recently presented a graphical method that allows for the comprehensive investigation of model sensitivity and uncertainty for all neutral organic chemicals simultaneously. This is achieved by defining a two-dimensional hypothetical "chemical space" as a function of the equilibrium partition coefficients between air, water, and octanol (K_OW, K_AW, K_OA), and plotting the sensitivity and/or uncertainty of a specific model result to each input parameter as a function of this chemical space. Here we show how such sensitivity maps can be used to quickly identify the variables with the highest influence on the environmental fate of selected chlorobenzenes, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), hexachlorocyclohexanes (HCHs) and brominated flame retardants (BFRs).
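    The chemical space in these maps is spanned by the equilibrium partition coefficients, which control how a chemical distributes among air, water and organic phases. A sketch of that underlying partitioning calculation; the compartment volumes are illustrative assumptions, not the generic model's evaluative environment:

```python
def phase_fractions(k_aw, k_ow, v_air=1.0, v_water=1.0, v_oct=1e-4):
    """Equilibrium mass fractions of a neutral organic chemical among air,
    water and octanol compartments. Phase capacities relative to water are
    K_AW (air) and K_OW (octanol)."""
    capacity = [v_air * k_aw, v_water, v_oct * k_ow]
    total = sum(capacity)
    return [c / total for c in capacity]

# A hydrophobic, involatile chemical (low K_AW, high K_OW) ends up almost
# entirely in the octanol (organic) phase.
f_air, f_water, f_oct = phase_fractions(k_aw=1e-4, k_ow=1e6)
```

    Sweeping (K_AW, K_OW) over a log-spaced grid and coloring each cell by a sensitivity measure reproduces the map construction the record describes.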

  4. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    International Nuclear Information System (INIS)

    Brown, Nicholas R.; Mueller, Donald E.; Patton, Bruce W.; Powers, Jeffrey J.

    2016-01-01

    Experiments are being planned at Research Centre Rež (RC Rež) to use the FLiBe (2 7LiF-BeF2) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform design of FLiBe-based experiments using the salt from MSRE.

  5. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nicholas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Mueller, Donald E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Powers, Jeffrey J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-08-31

    Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2 ⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform design of FLiBe-based experiments using the salt from MSRE.

  6. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  7. Hybrid rocket engine, theoretical model and experiment

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: the scalability, the stability/controllability of the operating parameters and the increasing of the solid fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next, the paper focuses on the tribrid rocket motor concept, which can improve thrust controllability through supplementary liquid fuel injection. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper thoroughly presents the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.
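The solid-fuel regression behavior at the heart of the scalability problem is commonly described by the power law r = a·G_ox^n. The sketch below integrates it for a cylindrical port; the coefficients and flow values are illustrative assumptions of typical order of magnitude, not the paper's data:

```python
import math

# Illustrative regression-rate law r = a * Gox**n, with Gox in kg/(m^2 s)
# and r in m/s; a and n are assumptions, not fitted values from the paper.
a, n = 1.3e-4, 0.62
mdot_ox = 0.3          # oxidizer mass flow, kg/s
radius = 0.02          # initial port radius, m
dt, t_end = 0.1, 10.0  # Euler time step and burn duration, s

rates, radii = [], []
t = 0.0
while t < t_end:
    gox = mdot_ox / (math.pi * radius ** 2)  # oxidizer mass flux in the port
    rdot = a * gox ** n                      # regression rate
    rates.append(rdot)
    radii.append(radius)
    radius += rdot * dt                      # port opens as fuel regresses
    t += dt
```

Because the port opens as fuel burns, G_ox and hence the regression rate fall over the burn, which is one reason regression-rate behavior does not scale trivially between motor sizes.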

  8. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
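Local sensitivity results of the kind the emulator-based indices were compared against can be sketched with normalized finite-difference coefficients. The two-parameter toy function below is an illustrative stand-in, since the AGR control rod model itself is not public:

```python
def local_sensitivity(f, x0, rel_step=1e-4):
    """Normalized local sensitivities S_i = (dy/dx_i) * (x_i / y) at x0,
    estimated by central finite differences."""
    y0 = f(x0)
    sens = []
    for i, xi in enumerate(x0):
        h = rel_step * xi
        hi = list(x0); hi[i] = xi + h
        lo = list(x0); lo[i] = xi - h
        dy_dx = (f(hi) - f(lo)) / (2.0 * h)
        sens.append(dy_dx * xi / y0)
    return sens

# Toy stand-in model: y = k * m**0.5 / c (purely illustrative)
toy = lambda x: 3.0 * x[0] ** 0.5 / x[1]
S = local_sensitivity(toy, [4.0, 2.0])
```

A normalized coefficient of 0.5 or -1.0 says a 1% parameter change moves the output by 0.5% or -1%; unlike the global Gaussian-process indices in the record, these values are only valid near the chosen operating point.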

  9. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Full Text Available Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
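The Morris method used for screening averages absolute "elementary effects" per parameter. A minimal sketch on a toy three-parameter function (the function, step size and trajectory count are illustrative assumptions, not the Common Land Model):

```python
import random

def morris_mu_star(f, dim, r=50, delta=0.5, seed=1):
    """Mean absolute elementary effect (mu*) per input for f on [0, 1]^dim.
    Base points are drawn in [0, 1-delta] so the stepped point stays in range."""
    rng = random.Random(seed)
    totals = [0.0] * dim
    for _ in range(r):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y0 = f(base)
        for i in range(dim):
            stepped = list(base)
            stepped[i] += delta
            totals[i] += abs((f(stepped) - y0) / delta)
    return [tot / r for tot in totals]

# Toy model: strong linear input, weaker nonlinear input, one inert input
toy = lambda x: 2.0 * x[0] + x[1] ** 2 + 0.0 * x[2]
mu_star = morris_mu_star(toy, dim=3)
```

Ranking parameters by mu* and discarding those near zero is exactly the screening step the abstract describes; a few hundred model runs usually suffice, consistent with the ~400 samples reported.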

  10. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...

  11. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

    This document presents the differences between two HLW glass formulation models (GFM): The 1996 GFM and 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.

  12. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations according to variations of rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating a remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's LPτ sequences, an adaptive Monte Carlo algorithm, the plain Monte Carlo algorithm, as well as eFAST and Sobol's sensitivity approaches, both implemented in the SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by variability of three chemical reactions rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
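The plain Monte Carlo route to Sobol's first-order indices that the paper benchmarks against can be sketched as follows; the additive toy integrand, sample size and seed are illustrative assumptions, not the UNI-DEM chemistry:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=7):
    """Plain Monte Carlo (Saltelli-style) estimate of first-order Sobol
    indices S_i = V_i / V for f on the unit hypercube."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # A with column i swapped in from B
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        vi = sum(fb * (fabi - fa)
                 for fa, fb, fabi in zip(fA, (f(b) for b in B),
                                         (f(x) for x in ABi))) / n
        indices.append(vi / var)
    return indices

# Toy additive model with true indices S = [0.8, 0.2]
S = sobol_first_order(lambda x: x[0] + 0.5 * x[1], dim=2)
```

The slow O(1/sqrt(n)) convergence of this plain estimator, especially for small indices, is precisely the difficulty that motivates quasi-random schemes such as MCA-MSS in the record above.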

  13. Indian Ocean experiments with a coupled model

    Energy Technology Data Exchange (ETDEWEB)

    Wainer, I. [Sao Paulo, Univ. (Brazil). Dept. of Oceanography

    1997-03-01

    A coupled ocean-atmosphere model is used to investigate the equatorial Indian Ocean response to the seasonally varying monsoon winds. Special attention is given to the oceanic response to the spatial distribution and changes in direction of the zonal winds. The Indian Ocean is bounded by an Asian land mass to the north and an African land mass to the west. The model extends latitudinally between 41 N and 41 S. The asymmetric atmospheric model is driven by a mass source/sink term that is proportional to the sea surface temperature (SST) over the oceans and the heat balance over the land. The ocean is modeled using the Anderson and McCreary reduced-gravity transport model, which includes a prognostic equation for the SST. The coupled system is driven by the annual cycle as manifested by zonally symmetric and asymmetric land and ocean heating. The authors explored the different nature of the equatorial ocean response to various patterns of zonal wind stress forcing in order to isolate the impact of the remote response on the Somali current. The major conclusions are: (i) the equatorial response is fundamentally different for easterlies and westerlies; (ii) the impact of the remote forcing on the Somali current is a function of the annual cycle; (iii) the size of the basin sets the phase of the interference of the remote forcing on the Somali current relative to the local forcing.

  14. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow… Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment…

  15. Silicon Carbide Derived Carbons: Experiments and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kertesz, Miklos [Georgetown University, Washington DC 20057

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond, starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but larger clusters tend to show graphitized regions; clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and the physical characteristics that can be used to spectroscopically identify them. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  16. Deformation of wrought uranium: Experiments and modeling

    Energy Technology Data Exchange (ETDEWEB)

    McCabe, R.J., E-mail: rmccabe@lanl.gov [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Capolungo, L. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)] [UMI 2958 Georgia Tech - CNRS, 57070 Metz (France); Marshall, P.E.; Cady, C.M.; Tome, C.N. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2010-09-15

    The room temperature deformation behavior of wrought polycrystalline uranium is studied using a combination of experimental techniques and polycrystal modeling. Electron backscatter diffraction is used to analyze the primary deformation twinning modes for wrought alpha-uranium. The {1 3 0}<3 1 0> twinning mode is found to be the most prominent twinning mode, with minor contributions from the {1 7 2}<3 1 2> and {1 1 2}<3 7 2> twin modes. Because of the large number of deformation modes, each with limited deformation systems, a polycrystalline model is employed to identify and quantify the activity of each mode. Model predictions of the deformation behavior and texture development agree reasonably well with experimental measures and provide reliable information about deformation systems.

  17. Uniformity and stability of LiF sensitivity - a review of fourteen years' monitoring experience

    International Nuclear Information System (INIS)

    Grogan, D.; Bradley, R.P.; Mattioli, A.

    1990-01-01

    The Personnel Dosimetry Services of the Bureau of Radiation and Medical Devices have utilised an increasingly large pool of thermoluminescence dosimetry (TLD) plaques since 1976. Between 1975 and 1981, the volume increased from 60,000 to 180,000 to implement a change from films to TLDs. As of 1989, 317,000 plaques are in service. Most, but not all, of the thick and thin LiF-100 chips contained on each dosemeter plaque have been individually calibrated. Because the TLDs have been obtained over a period of fourteen years, it is possible to quantify the variation of response sensitivities for each batch purchased. A continued increase in sensitivity has been noted. Furthermore, because detailed records of use have been maintained, it is also possible to quantify information relating the purchased batch sensitivities to other parameters such as cumulative exposure, time of use, and number of readings. Some selected examples are given. Of most interest is the variation of sensitivity factors for individual LiF crystals in relation to the above parameters. Results for periods of one to fourteen years are presented. (author)

  18. Experiences and perspectives in using telematic prevention on sensitive health issues.

    Science.gov (United States)

    Peltoniemi, Teuvo

    2004-01-01

    The new information and communication technologies, telematics - such as the Internet, telephone services and videoconferencing - are simultaneously both an instrument and a symbol - a sign of progress - but also a potential addiction problem. Sensitive topics - like substances or mental health - bring out all these characteristics of telematics. Therefore the computer world, substances and addictions are closely connected.

  19. A Sensitive and Robust Enzyme Kinetic Experiment Using Microplates and Fluorogenic Ester Substrates

    Science.gov (United States)

    Johnson, R. Jeremy; Hoops, Geoffrey C.; Savas, Christopher J.; Kartje, Zachary; Lavis, Luke D.

    2015-01-01

    Enzyme kinetics measurements are a standard component of undergraduate biochemistry laboratories. The combination of serine hydrolases and fluorogenic enzyme substrates provides a rapid, sensitive, and general method for measuring enzyme kinetics in an undergraduate biochemistry laboratory. In this method, the kinetic activity of multiple protein…

  20. Age-Sensitive Effect of Adolescent Dating Experience on Delinquency and Substance Use

    Science.gov (United States)

    Kim, Ryang Hui

    2013-01-01

    This study uses a developmental perspective and focuses on examining whether the impact of adolescent dating is age-sensitive. Dating at earlier ages is hypothesized to have a stronger effect on adolescent criminal behavior or substance use, but the effect would be weaker as one ages. The data obtained from the National Longitudinal Survey of…

  1. Balancing sensitivity and specificity: sixteen years of experience from the mammography screening programme in Copenhagen, Denmark

    DEFF Research Database (Denmark)

    Utzon-Frank, Nicolai; Vejborg, Ilse; von Euler-Chelpin, My Catarina

    2011-01-01

    To report on sensitivity and specificity from 7 invitation rounds of the organised, population-based mammography screening programme started in Copenhagen, Denmark, in 1991, and offered biennially to women aged 50-69. Changes over time were related to organisation and technology....

  2. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools

  3. Position sensitive detection coupled to high-resolution time-of-flight mass spectrometry: Imaging for molecular beam deflection experiments

    International Nuclear Information System (INIS)

    Abd El Rahim, M.; Antoine, R.; Arnaud, L.; Barbaire, M.; Broyer, M.; Clavier, Ch.; Compagnon, I.; Dugourd, Ph.; Maurelli, J.; Rayane, D.

    2004-01-01

    We have developed and tested a high-resolution time-of-flight mass spectrometer coupled to a position sensitive detector for molecular beam deflection experiments. The major achievement of this new spectrometer is to provide a three-dimensional imaging (X and Y positions and time-of-flight) of the ion packet on the detector, with a high acquisition rate and a high resolution on both the mass and the position. The calibration of the experimental setup and its application to molecular beam deflection experiments are discussed

  4. Development of the method of sensitivity improvement of photographic film applicable in high-energy physics experiments

    International Nuclear Information System (INIS)

    Gokieli, V.D.

    1986-01-01

    Sensitivity improvement of photographic films applicable in high-energy physics experiments is discussed. To establish optimal operating conditions for the PT-6 photographic film and to check its physical properties in an electron beam and in cosmic rays, a setup for exposing film samples to visible light and to X-rays was constructed. The setup includes a start-up device, a high-voltage pulse oscillator, shapers, a chamber for sample exposure, a voltage divider and an electron oscillograph.

  5. Modelling small scale infiltration experiments into bore cores of crystalline rock and break-through curves

    International Nuclear Information System (INIS)

    Hadermann, J.; Jakob, A.

    1987-04-01

    Uranium infiltration experiments for small samples of crystalline rock have been used to model radionuclide transport. The theory, taking into account advection and dispersion in water conducting zones, matrix diffusion out of these, and sorption, contains four independent parameters. It turns out that the physical variables extracted from the best-fit parameters are consistent with values from the literature and independent measurements. Moreover, the model results seem to differentiate between various geometries for the water conducting zones. Alpha-autoradiographies corroborate this result. A sensitivity analysis allows for a judgement on parameter dependences. Finally, some proposals for further experiments are made. (author)
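In the 1-D limit without matrix diffusion, an advection-dispersion-sorption model of this kind has a classical closed-form breakthrough curve (the Ogata-Banks solution). The sketch below uses illustrative parameter values, not the fitted bore-core values:

```python
import math

def breakthrough(x, t, v, D, R=1.0):
    """Ogata-Banks solution C/C0 for continuous injection into a 1-D
    advection-dispersion system; R is a linear-sorption retardation factor."""
    vr, Dr = v / R, D / R               # retarded velocity and dispersion
    s = 2.0 * math.sqrt(Dr * t)
    term1 = math.erfc((x - vr * t) / s)
    term2 = math.exp(vr * x / Dr) * math.erfc((x + vr * t) / s)
    return 0.5 * (term1 + term2)

# Illustrative infiltration-scale parameters (assumptions, not fitted values)
x, v, D = 0.1, 1.0e-5, 1.0e-7           # m, m/s, m^2/s
curve = [breakthrough(x, t, v, D) for t in (2000, 5000, 10000, 20000, 40000)]
```

Sorption (R > 1) delays the breakthrough front, and dispersion smears it; fitting the measured curve's arrival time and spread is what constrains the four transport parameters in the record.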

  6. Evaporation experiments and modelling for glass melts

    NARCIS (Netherlands)

    Limpt, J.A.C. van; Beerkens, R.G.C.

    2007-01-01

    A laboratory test facility has been developed to measure evaporation rates of different volatile components from commercial and model glass compositions. In the set-up the furnace atmosphere, temperature level, gas velocity and batch composition are controlled. Evaporation rates have been measured

  7. Models for Risk Aggregation and Sensitivity Analysis: An Application to Bank Economic Capital

    Directory of Open Access Journals (Sweden)

    Hulusi Inanoglu

    2009-12-01

    Full Text Available A challenge in enterprise risk measurement for diversified financial institutions is developing a coherent approach to aggregating different risk types. This has been motivated by rapid financial innovation, developments in supervisory standards (Basel 2) and recent financial turmoil. The main risks faced - market, credit and operational - have distinct distributional properties, and historically have been modeled in differing frameworks. We contribute to the modeling effort by providing tools and insights to practitioners and regulators. First, we extend the scope of the analysis to liquidity and interest rate risk, which have Basel Pillar II implications. Second, we utilize data on major banking institutions' loss experience from supervisory call reports, which allows us to explore the impact of business mix and inter-risk correlations on total risk. Third, we estimate and compare alternative established frameworks for risk aggregation (including copula models) on the same data-sets across banks, comparing absolute total risk measures (Value-at-Risk - VaR) and proportional diversification benefits (PDB), goodness-of-fit (GOF) of the models to the data, as well as the variability of the VaR estimate with respect to sampling error in the parameters. This benchmarking and sensitivity analysis suggests that practitioners consider implementing a simple non-parametric methodology (empirical copula simulation - ECS) in order to quantify integrated risk, in that it is found to be more conservative and stable than the other models. We observe that ECS produces 20% to 30% higher VaR relative to the standard Gaussian copula simulation (GCS), while the variance-covariance approximation (VCA) is much lower. ECS yields the highest PDBs of the methodologies (127% to 243%), while Archimedean Gumbel copula simulation (AGCS) is the lowest (10-21%). Across the five largest banks we fail to find the effect of business mix to exert a directionally consistent impact on
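The empirical copula simulation (ECS) favored above amounts to resampling whole rows of the joint loss history, so the observed dependence between risk types is preserved without fitting a parametric copula. A sketch on synthetic data (the common-shock loss generator, sample sizes and seed are assumptions, not the call-report data):

```python
import math
import random

rng = random.Random(42)

def var_quantile(losses, q=0.99):
    """Empirical Value-at-Risk: the q-th quantile of a loss sample."""
    return sorted(losses)[int(q * len(losses)) - 1]

# Synthetic "historical" losses for three risk types driven by a common
# systematic shock (an illustrative stand-in for real loss data)
history = [[math.exp(0.8 * z + 0.6 * rng.gauss(0.0, 1.0)) for _ in range(3)]
           for z in (rng.gauss(0.0, 1.0) for _ in range(20000))]

# ECS: resample whole rows, preserving the empirical dependence structure
ecs = [sum(rng.choice(history)) for _ in range(20000)]

# Independence benchmark: sample each risk type's marginal separately
cols = list(zip(*history))
indep = [sum(rng.choice(col) for col in cols) for _ in range(20000)]

var_ecs, var_indep = var_quantile(ecs), var_quantile(indep)
```

Because the rows carry the tail co-movement of the risk types, the ECS aggregate VaR exceeds the independence benchmark, illustrating why the abstract finds ECS more conservative than models that understate inter-risk dependence.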

  8. An Animal Model of Trichloroethylene-Induced Skin Sensitization in BALB/c Mice.

    Science.gov (United States)

    Wang, Hui; Zhang, Jia-xiang; Li, Shu-long; Wang, Feng; Zha, Wan-sheng; Shen, Tong; Wu, Changhao; Zhu, Qi-xing

    2015-01-01

    Trichloroethylene (TCE) is a major occupational hazard and environmental contaminant that can cause multisystem disorders in the form of occupational medicamentosa-like dermatitis. Development of dermatitis involves several proinflammatory cytokines, but their role in TCE-mediated dermatitis has not been examined in a well-defined experimental model. In addition, few animal models of TCE sensitization are available, and the current guinea pig model has apparent limitations. This study aimed to establish a model of TCE-induced skin sensitization in BALB/c mice and to examine the role of several key inflammatory cytokines in TCE sensitization. The sensitization rate of the dorsal-painted group was 38.3%. Skin edema and erythema occurred in the TCE-sensitized groups, as in the 2,4-dinitrochlorobenzene (DNCB) positive control. The TCE sensitization-positive (dermatitis [+]) group exhibited increased epidermal thickness, inflammatory cell infiltration, swelling, and necrosis in the dermis and around hair follicles, but the ear-painted group did not show these histological changes. The concentrations of serum proinflammatory cytokines including tumor necrosis factor (TNF)-α, interferon (IFN)-γ, and interleukin (IL)-2 were significantly increased in the 24-, 48-, and 72-hour dermatitis [+] groups treated with TCE and peaked at 72 hours. Deposition of TNF-α, IFN-γ, and IL-2 in the skin tissue was also revealed by immunohistochemistry. We have established a new animal model of skin sensitization induced by repeated TCE stimulations, and we provide the first evidence that key proinflammatory cytokines including TNF-α, IFN-γ, and IL-2 play an important role in the process of TCE sensitization. © The Author(s) 2015.

  9. Portfolio Sensitivity Model for Analyzing Credit Risk Caused by Structural and Macroeconomic Changes

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2008-12-01

    Full Text Available This paper proposes a new model for portfolio sensitivity analysis. The model is suitable for decision support in financial institutions, specifically for portfolio planning and portfolio management. The basic advantage of the model is the ability to create simulations for credit risk predictions in cases when we virtually change portfolio structure and/or macroeconomic factors. The model takes a holistic approach to portfolio management consolidating all organizational segments in the process such as marketing, retail and risk.

  10. How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?

    Science.gov (United States)

    Wen, Jessica; Koo, Soh Myoung; Lape, Nancy

    2018-02-01

    While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
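In the homogeneous-membrane limit, the strong parameter dependence the abstract reports can be made concrete with the standard relations P = K·D/h and t_lag = h²/(6D); the stratum corneum values below are illustrative assumptions, not the paper's fitted data:

```python
def permeability(K, D, h):
    """Steady-state permeability of a homogeneous membrane, P = K*D/h (cm/s)."""
    return K * D / h

def lag_time(D, h):
    """Diffusion lag time of a homogeneous membrane, t_lag = h**2/(6*D) (s)."""
    return h ** 2 / (6.0 * D)

# Illustrative stratum corneum values (assumptions only)
h = 15e-4                  # membrane thickness, cm (~15 micrometres)
K = 10.0                   # lipid/vehicle partition coefficient
results = {D: (permeability(K, D, h), lag_time(D, h))
           for D in (1e-10, 1e-9)}   # one literature decade of diffusivity, cm^2/s
```

A single decade of spread in the assumed lipid diffusivity shifts both the lag time and the permeability by a full order of magnitude, mirroring the order-of-magnitude output differences the finite element study reports for distinct parameter combinations.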

  11. [Application of Fourier amplitude sensitivity test in Chinese healthy volunteer population pharmacokinetic model of tacrolimus].

    Science.gov (United States)

    Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei

    2010-07-01

    In this study, we evaluated the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). We also estimated the sensitivity index over the whole course of blood sampling, designed different sampling schedules, and evaluated the quality of the parameter estimates and the efficiency of prediction. It was observed that, apart from CL1/F, the sensitivity indices of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model were relatively high and changed rapidly with time. As the variance of k(a) increased, its sensitivity index increased markedly, accompanied by a significant decrease in the sensitivity indices of the other parameters and an obvious change in the peak time. According to the NONMEM simulations and the comparison of the fitting results, the sampling time points designed according to FAST outperformed the alternatives. This suggests that FAST can assess the sensitivities of model parameters effectively and assist the design of clinical sampling times and the construction of PopPK models.
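FAST reads first-order variance shares from the Fourier spectrum of the model output sampled along a single periodic search curve. The sketch below is purely illustrative: the toy two-parameter model, the frequencies and the harmonic count are assumptions, not the tacrolimus PopPK model:

```python
import math

def fast_first_order(f, freqs, n=2048, harmonics=4):
    """Minimal FAST sketch: sample all inputs along one periodic search
    curve and read each input's first-order variance share from the
    Fourier power at its assigned frequency and its harmonics."""
    s_vals = [2.0 * math.pi * k / n for k in range(n)]
    # Triangle-wave transform keeps each x_i uniform on [0, 1]
    xs = [[0.5 + math.asin(math.sin(w * s)) / math.pi for w in freqs]
          for s in s_vals]
    ys = [f(x) for x in xs]
    mean = sum(ys) / n
    total_var = sum((y - mean) ** 2 for y in ys) / n
    indices = []
    for w in freqs:
        power = 0.0
        for p in range(1, harmonics + 1):
            a = sum(y * math.cos(p * w * s) for y, s in zip(ys, s_vals)) * 2.0 / n
            b = sum(y * math.sin(p * w * s) for y, s in zip(ys, s_vals)) * 2.0 / n
            power += (a * a + b * b) / 2.0
        indices.append(power / total_var)
    return indices

# Toy two-parameter model; interference-free frequencies are an assumption
S = fast_first_order(lambda x: x[0] + 0.3 * x[1], freqs=[11, 35])
```

Each input's index is its share of the output variance, so a parameter whose index grows (as reported for k(a) when its variance is inflated) necessarily squeezes the shares of the others.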

  12. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
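A one-at-a-time sensitivity index of the kind used as method (iii) can be sketched as SI = (y_max - y_min)/y_max, sweeping one input over its range while the others stay at baseline. The toy corrosion-rate model and parameter ranges below are illustrative assumptions, not one of the nine reviewed models:

```python
def sensitivity_index(model, baseline, ranges, steps=21):
    """SI_i = (y_max - y_min) / y_max while input i sweeps its range and
    the other inputs stay at their baseline values."""
    out = {}
    for name, (lo, hi) in ranges.items():
        ys = []
        for k in range(steps):
            x = dict(baseline)
            x[name] = lo + (hi - lo) * k / (steps - 1)
            ys.append(model(**x))
        out[name] = (max(ys) - min(ys)) / max(ys)
    return out

def toy_rate(chloride, resistivity, time):
    """Toy corrosion-rate model (illustrative): rate rises with chloride
    content and falls with concrete resistivity and corrosion duration."""
    return 1e4 * chloride / resistivity * time ** -0.3

baseline = {"chloride": 1.0, "resistivity": 1e4, "time": 10.0}
ranges = {"chloride": (0.1, 3.0), "resistivity": (5e3, 2e4), "time": (1.0, 50.0)}
SI = sensitivity_index(toy_rate, baseline, ranges)
```

Ranking the resulting indices identifies which inputs dominate the predicted rate; with these illustrative ranges the chloride content dominates, echoing the paper's finding that chloride content, resistivity and corrosion duration are the key inputs.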

  13. Tuning the climate sensitivity of a global model to match 20th Century warming

    Science.gov (United States)

    Mauritsen, T.; Roeckner, E.

    2015-12-01

    A climate model's ability to reproduce observed historical warming is sometimes viewed as a measure of quality. Yet, for practical reasons, historical warming cannot be considered a purely empirical result of the modelling efforts, because the desired result is known in advance and is therefore a potential target of tuning. Here we explain how the latest edition of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1.2) atmospheric model (ECHAM6.3), the MPI model to be used during CMIP6, had its climate sensitivity systematically tuned to about 3 K. This was done deliberately to improve the match to observed 20th Century warming over the previous model generation (MPI-ESM, ECHAM6.1), which warmed too much and had a sensitivity of 3.5 K. In the process we identified several controls on model cloud feedback that confirm recently proposed hypotheses concerning trade-wind cumulus and high-latitude mixed-phase clouds. We then evaluate the model's fidelity with centennial global warming and discuss the relative importance of climate sensitivity, forcing and ocean heat uptake efficiency in determining the response, as well as possible systematic biases. The practice of targeting historical warming during model development is polarizing the modeling community: 35 percent of modelers rated 20th Century warming as very important to decisive, whereas 30 percent would not consider it at all. Likewise, opinions diverge as to which measures are legitimate means of improving the model's match to observed warming. These results come from a survey conducted in conjunction with the first WCRP Workshop on Model Tuning in fall 2014, answered by 23 modelers. We argue that tuning or constructing models to match observed warming is, to some extent, practically unavoidable, and as such might in many cases as well be done explicitly. For modeling groups that have the capability to tune both their aerosol forcing and climate sensitivity there is now a unique

  14. The developments and verifications of TRACE model for IIST LOCA experiments

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, W. X. [Inst. of Nuclear Engineering and Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China); Wang, J. R.; Lin, H. T. [Inst. of Nuclear Energy Research, Taiwan, No. 1000, Wenhua Rd., Longtan Township, Taoyuan County 32546, Taiwan (China); Shih, C.; Huang, K. C. [Inst. of Nuclear Engineering and Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China); Dept. of Engineering and System Science, National Tsing-Hua Univ., Taiwan, No. 101, Kuang-Fu Road, Hsinchu 30013, Taiwan (China)

    2012-07-01

    The test facility IIST (INER Integral System Test) is a Reduced-Height and Reduced-Pressure (RHRP) integral test loop constructed for thermal-hydraulic and safety analysis of Westinghouse three-loop PWR nuclear power plants. The main purpose of this study is to develop and verify TRACE models of IIST through the IIST small break loss of coolant accident (SBLOCA) experiments. First, two different IIST TRACE models were built: a pipe-vessel model and a 3-D vessel component model. The steady-state and transient calculation results show that both TRACE models can simulate the related IIST experiments. Compared with the IIST SBLOCA experiment data, the 3-D vessel component model showed better simulation capabilities, so it was chosen for all further thermal-hydraulic studies. The second step was a set of sensitivity studies of the two-phase multiplier and the subcooled-liquid multiplier in the choked-flow model, and of two correlation constants in the CCFL model. As a result, an appropriate set of multipliers and constants could be determined. In summary, a verified IIST TRACE model with a 3-D vessel component and fine-tuned choked-flow and CCFL models has been established for further studies of IIST experiments. (authors)

  15. Micro- and nanoflows modeling and experiments

    CERN Document Server

    Rudyak, Valery Ya; Maslov, Anatoly A; Minakov, Andrey V; Mironov, Sergey G

    2018-01-01

    This book describes physical, mathematical and experimental methods for modeling flows in micro- and nanofluidic devices. It considers flows in channels with characteristic sizes from several hundred micrometers down to several nanometers. Methods based on solving kinetic equations, coupled kinetic-hydrodynamic descriptions, and the molecular dynamics method are used. Based on detailed measurements of pressure distributions along straight and bent microchannels, the hydraulic resistance coefficients are refined. Flows of disperse fluids (including disperse nanofluids) are considered in detail. Results of hydrodynamic modeling of the simplest micromixers are reported, and mixing of fluids in Y-type and T-type micromixers is considered. The authors present a systematic study of jet flows, jet structure and the laminar-turbulent transition. The influence of sound on the microjet structure is considered. New phenomena associated with turbulization and relaminarization of the mixing layer of microjets are di...

  16. Previous Experience as a Model of Practice at UNAE

    OpenAIRE

    Ormary Barberi Ruiz; María Dolores Pesántez Palacios

    2017-01-01

    The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador. An urgent institutional necessity is revealed in the descriptive analyses conducted from technical-administrative support (reports, interviews, testimonials) and the pedagogical foundations of UNAE (curricular directionality, transverse axes in practice, career plan, approach and diagnostic examination as subj...

  17. Pyroelectric Energy Harvesting: Model and Experiments

    Science.gov (United States)

    2016-05-01

    consisting of a current source for the pyroelectric current, a dielectric capacitor for the adiabatic charging and discharging, and optionally a resistor to...polarization) in a piezoelectric material. To extract work from the pyroelectric effect, the material acts as the dielectric in a capacitor that is...amplifier was chosen for the setup. The pyroelectric element is commonly modeled as a dielectric capacitor and a current source in parallel, as seen in
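    The snippets above describe the standard equivalent circuit: a pyroelectric current source i_p = p*A*dT/dt in parallel with the element's capacitance, optionally discharging into a load resistor. A minimal time-domain sketch of that circuit, with assumed (PZT-like) material and circuit values that are not taken from the report, might look like:

    ```python
    import numpy as np

    # Assumed element and circuit values (illustrative, PZT-like)
    p = 3.8e-4   # pyroelectric coefficient (C/m^2/K)
    A = 1e-4     # electrode area (m^2)
    C = 2e-9     # element capacitance (F)
    R = 1e7      # load resistance (ohm)

    dt = 1e-4
    t = np.arange(0.0, 2.0, dt)
    T = 25.0 + 5.0 * np.sin(2 * np.pi * 1.0 * t)  # 1 Hz thermal cycling (K)
    i_p = p * A * np.gradient(T, dt)              # pyroelectric current source

    # Forward-Euler integration of the node equation  C dv/dt = i_p - v/R
    v = np.zeros_like(t)
    for k in range(1, len(t)):
        v[k] = v[k - 1] + dt * (i_p[k - 1] - v[k - 1] / R) / C

    power = v**2 / R  # instantaneous power dissipated in the load
    ```

    With these values the capacitive impedance at 1 Hz is comparable to the load resistance, so the load voltage reaches a few volts; sweeping R then shows the usual impedance-matching maximum in harvested power.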

  18. Sensitivity analysis on the model to the DO and BODc of the Almendares river

    International Nuclear Information System (INIS)

    Dominguez, J.; Borroto, J.; Hernandez, A.

    2004-01-01

    In the present work, a sensitivity analysis of the model was performed to compare and evaluate the influence of the kinetic coefficients and other parameters on the DO and BODc. The effects of the BODc and DO with which the river arrives at the studied zone, and the influence of the BOD of the discharges and of the flow rate on the DO, were modeled. The sensitivity analysis is the basis for developing a calibration-optimization procedure for the Streeter-Phelps model, in order to simplify the process and increase the precision of the predictions. It will also contribute to defining strategies for improving river water quality.
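    The Streeter-Phelps model named above has a closed-form oxygen-sag solution, which makes a one-at-a-time sensitivity check on the kinetic coefficients straightforward. The coefficient values below are illustrative assumptions, not the Almendares calibration:

    ```python
    import numpy as np

    def streeter_phelps(t, L0, D0, kd, ka, do_sat=8.0):
        """Classic Streeter-Phelps oxygen-sag solution.
        t: travel time (d), L0: initial ultimate BOD (mg/L),
        D0: initial DO deficit (mg/L), kd: deoxygenation rate (1/d),
        ka: reaeration rate (1/d). Returns dissolved oxygen (mg/L)."""
        deficit = (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
                  + D0 * np.exp(-ka * t)
        return do_sat - deficit

    # Normalized one-at-a-time sensitivity of DO at t = 2 d to kd and ka
    base = dict(t=2.0, L0=20.0, D0=1.0, kd=0.3, ka=0.6)
    do_base = streeter_phelps(**base)
    sens = {}
    for name in ("kd", "ka"):
        bumped = dict(base, **{name: base[name] * 1.10})  # +10% perturbation
        sens[name] = (streeter_phelps(**bumped) - do_base) / do_base / 0.10
    ```

    The signs alone are informative: increasing the deoxygenation rate kd deepens the sag (negative sensitivity), while increasing the reaeration rate ka raises the DO (positive sensitivity), which is the kind of contrast the calibration-optimization procedure exploits.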

  19. Sensitivity analysis of hydraulic fracturing Using an extended finite element method for the PKN model

    NARCIS (Netherlands)

    Garikapati, Hasini; Verhoosel, Clemens V.; van Brummelen, Harald; Diez, Pedro; Papadrakakis, M.; Papadopoulos, V.; Stefanou, G.; Plevris, V.

    2016-01-01

    Hydraulic fracturing is a process surrounded by uncertainty, as available data on e.g. rock formations are scant and available models are still rudimentary. In this contribution, sensitivity analysis is carried out as a first step in studying the uncertainties in the model. This is done to

  20. An equivalent circuit approach to the modelling of the dynamics of dye sensitized solar cells

    DEFF Research Database (Denmark)

    Bay, L.; West, K.

    2005-01-01

    A model that can be used to interpret the response of a dye-sensitized photo electrode to intensity-modulated light (intensity modulated voltage spectroscopy, IMVS and intensity modulated photo-current spectroscopy, IMPS) is presented. The model is based on an equivalent circuit approach involvin...

  1. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    Science.gov (United States)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  2. Modeling and sensitivity analysis of consensus algorithm based distributed hierarchical control for dc microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2015-01-01

    of dynamic study. The aim of this paper is to model the complete DC microgrid system in z-domain and perform sensitivity analysis for the complete system. A generalized modeling method is proposed and the system dynamics under different control parameters, communication topologies and communication speed...

  3. Modeling of Yb3+-sensitized Er3+-doped silica waveguide amplifiers

    DEFF Research Database (Denmark)

    Lester, Christian; Bjarklev, Anders Overgaard; Rasmussen, Thomas

    1995-01-01

    A model for Yb3+-sensitized Er3+-doped silica waveguide amplifiers is described and numerically investigated in the small-signal regime. The amplified spontaneous emission in the ytterbium-band and the quenching process between excited erbium ions are included in the model. For pump wavelengths...

  4. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    NARCIS (Netherlands)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco; Asseng, Senthold; Baranowski, Piotr; Basso, Bruno; Bodin, Per; Buis, Samuel; Cammarano, Davide; Deligios, Paola; Destain, Marie France; Dumont, Benjamin; Ewert, Frank; Ferrise, Roberto; François, Louis; Gaiser, Thomas; Hlavinka, Petr; Jacquemin, Ingrid; Kersebaum, Kurt Christian; Kollas, Chris; Krzyszczak, Jaromir; Lorite, Ignacio J.; Minet, Julien; Minguez, M.I.; Montesino, Manuel; Moriondo, Marco; Müller, Christoph; Nendel, Claas; Öztürk, Isik; Perego, Alessia; Rodríguez, Alfredo; Ruane, Alex C.; Ruget, Françoise; Sanna, Mattia; Semenov, Mikhail A.; Slawinski, Cezary; Stratonovitch, Pierre; Supit, Iwan; Waha, Katharina; Wang, Enli; Wu, Lianhai; Zhao, Zhigan; Rötter, Reimund P.

    2018-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in

  5. Quantification of remodeling parameter sensitivity - assessed by a computer simulation model

    DEFF Research Database (Denmark)

    Thomsen, J.S.; Mosekilde, Li.; Mosekilde, Erik

    1996-01-01

    We have used a computer simulation model to evaluate the effect of several bone remodeling parameters on vertebral cancellus bone. The menopause was chosen as the base case scenario, and the sensitivity of the model to the following parameters was investigated: activation frequency, formation bal....... However, the formation balance was responsible for the greater part of total mass loss....

  6. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change