WorldWideScience

Sample records for model sensitivity tests

  1. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    1994-01-01

    The work done on this project focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μν_e] ∼ 10^-13, will be over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μν_e to test the predictions of the V-A theory of weak interactions. In this experiment the uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the present reported value. The detectors are operational, and data taking has begun.

  2. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.

    1993-01-01

    The work done on this project was focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μν_e] ∼ 10^-13, is over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μν_e to test the V-A theory of weak interactions. The uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the present reported value.

  3. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task-oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibration efforts, which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications which are in conformity with its purposes.

  4. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.

    1992-01-01

    The work done on this project was focused mainly on LAMPF experiment E969, known as the MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ]/[μ → eν_μν_e] ∼ 10^-13, is over two orders of magnitude better than previously reported values. The work done on MEGA during this period was divided between that done at Valparaiso University and that done at LAMPF. In addition, some contributions were made to a proposal to the LAMPF PAC to perform a precision measurement of the Michel ρ parameter, described below.

  5. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization

    Science.gov (United States)

    Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2016-01-01

    Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin sensitization responses for a set of 135 unique chemicals was low (R = 28-43%), although several chemical classes had high concordance. We succeeded in developing predictive QSAR models of all available human data with an external correct classification rate (CCR) of 71%. A consensus model integrating concordant QSAR predictions and LLNA results afforded a higher CCR of 82%, but at the expense of reduced external dataset coverage (52%). We used the developed QSAR models for virtual screening of the CosIng database and identified 1061 putative skin sensitizers; for seventeen of these compounds, we found published evidence of their skin sensitization effects. The models reported herein provide a more accurate alternative to LLNA testing for human skin sensitization assessment across diverse chemical data. In addition, they can also be used to guide the structural optimization of toxic compounds to reduce their skin sensitization potential. PMID:28630595
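
    As a rough illustration of the two headline metrics in this record, the sketch below computes a correct classification rate (CCR, the mean of sensitivity and specificity) and applies a simple consensus rule that only classifies chemicals where the QSAR prediction and the LLNA call agree, which raises CCR at the cost of coverage. The data and agreement rates are synthetic stand-ins, not the study's chemicals or models.

      import numpy as np

      def ccr(y_true, y_pred):
          # Correct classification rate = mean of sensitivity and specificity
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          sens = np.mean(y_pred[y_true == 1] == 1)
          spec = np.mean(y_pred[y_true == 0] == 0)
          return 0.5 * (sens + spec)

      # Synthetic labels (1 = sensitizer, 0 = non-sensitizer); illustrative only.
      rng = np.random.default_rng(7)
      human = rng.integers(0, 2, 200)
      qsar = np.where(rng.random(200) < 0.75, human, 1 - human)   # QSAR agrees with human ~75% of the time
      llna = np.where(rng.random(200) < 0.60, human, 1 - human)   # LLNA, lower concordance

      # Consensus: keep only chemicals where QSAR and LLNA agree (reduced coverage).
      mask = qsar == llna
      print("QSAR-only CCR:", round(ccr(human, qsar), 2))
      print("consensus CCR:", round(ccr(human[mask], qsar[mask]), 2), "coverage:", round(mask.mean(), 2))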

  6. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  7. [Application of Fourier amplitude sensitivity test in Chinese healthy volunteer population pharmacokinetic model of tacrolimus].

    Science.gov (United States)

    Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei

    2010-07-01

    In this study, we evaluated the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). In addition, we estimated the sensitivity index over the whole blood-sampling period, designed different sampling schedules, and evaluated the quality of the parameter estimates and the efficiency of prediction. It was observed that, besides CL1/F, the sensitivity indices of all of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model were relatively high and changed rapidly over time. With an increase in the variance of k(a), its sensitivity index increased markedly, accompanied by a significant decrease in the sensitivity indices of the other parameters and an obvious change in the peak time as well. According to the simulation with NONMEM and the comparison among different fitting results, we found that the sampling time points designed according to FAST surpassed the other time points. This suggests that FAST can assess the sensitivities of model parameters effectively, and assist the design of clinical sampling times and the construction of PopPK models.
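
    The following sketch illustrates the kind of computation behind a classic first-order FAST index: each parameter is driven along a periodic search curve at its own frequency, and the share of output variance at that frequency (and its harmonics) gives the sensitivity index. The one-compartment oral-absorption model, the parameter bounds and the frequency set are illustrative assumptions, not the tacrolimus PopPK model or settings used in the paper.

      import numpy as np

      def fast_first_order(model, bounds, freqs, n=2049, harmonics=4):
          # Classic FAST: sample all parameters along one search curve and read
          # each first-order index off the Fourier spectrum of the model output.
          bounds = np.asarray(bounds, dtype=float)
          s = np.linspace(-np.pi, np.pi, n, endpoint=False)
          u = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi      # (n, d) samples in [0, 1]
          x = bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
          y = model(x)
          k = np.arange(1, (n - 1) // 2 + 1)
          A = np.array([np.mean(y * np.cos(ki * s)) for ki in k])
          B = np.array([np.mean(y * np.sin(ki * s)) for ki in k])
          total_var = 2.0 * np.sum(A**2 + B**2)
          indices = []
          for w in freqs:
              harm = np.array([p * w - 1 for p in range(1, harmonics + 1)])  # rows of A/B for w, 2w, ...
              indices.append(2.0 * np.sum(A[harm]**2 + B[harm]**2) / total_var)
          return np.array(indices)

      # Toy one-compartment model with first-order absorption standing in for the PopPK model;
      # the parameter names (CL/F, V/F, ka), dose and bounds are assumptions for illustration.
      def conc_at_2h(theta, dose=5.0, t=2.0):
          cl, v, ka = theta[:, 0], theta[:, 1], theta[:, 2]
          ke = cl / v
          return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      bounds = [[10.0, 40.0], [50.0, 200.0], [0.5, 3.0]]
      print(fast_first_order(conc_at_2h, bounds, freqs=[11, 21, 27]))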

  8. Sensitivity testing practice on pre-processing parameters in hard and soft coupled modeling

    Directory of Open Access Journals (Sweden)

    Z. Ignaszak

    2010-01-01

    Full Text Available This paper pays attention to the problem of practical applicability of coupled modeling with the use of hard and soft models types and necessity of adapted to that models data base possession. The data base tests results for cylindrical 30 mm diameter casting made of AlSi7Mg alloy were presented. In simulation tests that were applied the Calcosoft system with CAFE (Cellular Automaton Finite Element module. This module which belongs to „multiphysics” models enables structure prediction of complete casting with division of columnar and equiaxed crystals zones of -phase. Sensitivity tests of coupled model on the particular values parameters changing were made. On these basis it was determined the relations of CET (columnar-to-equaiaxed transition zone position influence. The example of virtual structure validation based on real structure with CET zone location and grain size was shown.

  9. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
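
    A minimal sketch of the re-initialization rule described above, with generic function and variable names (the paper's shallow-water model and analysis data are not reproduced): at the end of each subinterval the reference state is reset to the analysis, and the perturbed state is reset to the analysis plus the current perturbed-minus-reference difference.

      import numpy as np

      def piecewise_run(step, x_ref0, x_pert0, analysis, n_sub, sub_len):
          # step(x, length) advances a model state over one subinterval;
          # analysis(k) returns the analysis state valid at the end of subinterval k.
          x_ref, x_pert = np.array(x_ref0, float), np.array(x_pert0, float)
          sensitivity = []
          for k in range(n_sub):
              x_ref = step(x_ref, sub_len)            # "reference" simulation
              x_pert = step(x_pert, sub_len)          # "perturbed" simulation
              sensitivity.append(x_pert - x_ref)      # climate sensitivity = difference of the two states
              xa = analysis(k)
              x_pert = xa + (x_pert - x_ref)          # keep the perturbation, discard accumulated model error
              x_ref = xa.copy()                       # re-initialize the reference with analysis data
          return sensitivity

      # Tiny demo with a scalar toy "model" and synthetic analysis states (illustrative only).
      step = lambda x, dt: 0.98 * x + 0.1 * dt
      analysis = lambda k: np.array([1.0 + 0.05 * k])
      print(piecewise_run(step, [1.0], [1.2], analysis, n_sub=5, sub_len=1.0))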

  10. Local Sensitivity and Diagnostic Tests

    NARCIS (Netherlands)

    Magnus, J.R.; Vasnev, A.L.

    2004-01-01

    In this paper we confront sensitivity analysis with diagnostic testing. Every model is misspecified, but a model is useful if the parameters of interest (the focus) are not sensitive to small perturbations in the underlying assumptions. The study of the effect of these violations on the focus is

  11. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of an analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation on system level assessed via Monte Carlo simulation. ► Uncertainty impact is growing with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity-study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters as well as the uncertainties associated with most of the reliability data collections are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessing the propagation of component ageing parameter uncertainty to the system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
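
    A bare-bones sketch of the kind of Monte Carlo propagation described above: an uncertain ageing rate is sampled, pushed through a simple time-averaged unavailability expression for a periodically tested stand-by component, and combined into a system-level figure. The lognormal distribution, the lambda*T/2 approximation and the two-train system logic are illustrative assumptions, not the paper's full age-dependent model with test and maintenance contributions.

      import numpy as np

      rng = np.random.default_rng(0)

      def mean_unavailability(lam0, aging_rate, test_interval, t_life):
          # Time-averaged unavailability of a periodically tested stand-by component
          # with a linearly ageing failure rate lambda(t) = lam0 + a*t (rough approximation).
          t = np.arange(0.0, t_life, test_interval)
          lam = lam0 + aging_rate * t                   # failure rate at the start of each test interval
          return np.mean(lam * test_interval / 2.0)     # classic lambda*T/2 contribution per interval

      # Sample the uncertain ageing rate and propagate it to a two-train system (illustrative values).
      n = 10_000
      aging = rng.lognormal(mean=np.log(1e-10), sigma=1.0, size=n)           # ageing rate in 1/h^2
      q_train = np.array([mean_unavailability(1e-5, a, test_interval=720.0,  # monthly test
                                              t_life=40 * 8760.0)            # 40-year lifetime
                          for a in aging])
      q_system = q_train ** 2                                                # both redundant trains unavailable
      print("5th/50th/95th percentile system unavailability:", np.percentile(q_system, [5, 50, 95]))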

  12. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the source soils, and fingerprint properties were selected using different procedures (Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of
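
    The sketch below shows the core of a multivariate mixing model of the kind tested here: source proportions are found by a constrained least-squares fit of the mixture's tracer concentrations to a weighted combination of source concentrations, with proportions non-negative and summing to one. The tracer values are made-up illustrative numbers, not the study's soils.

      import numpy as np
      from scipy.optimize import minimize

      def unmix(sources, mixture):
          # sources: (n_sources, n_tracers); mixture: (n_tracers,).
          # Tracers are assumed conservative, as in the dry-mixing experiments above.
          n = sources.shape[0]

          def objective(p):
              predicted = p @ sources
              return np.sum(((mixture - predicted) / mixture) ** 2)   # relative squared error

          res = minimize(objective, x0=np.full(n, 1.0 / n), method="SLSQP",
                         bounds=[(0.0, 1.0)] * n,
                         constraints=[{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}])
          return res.x

      # Three hypothetical sources and four tracer concentrations (illustrative only).
      sources = np.array([[120.0, 30.0, 2.1, 900.0],
                          [ 80.0, 55.0, 1.4, 400.0],
                          [200.0, 10.0, 3.5, 650.0]])
      true_p = np.array([0.5, 0.3, 0.2])
      mixture = true_p @ sources
      print(unmix(sources, mixture))    # should recover approximately [0.5, 0.3, 0.2]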

  13. Anxiety sensitivity and suicide risk among firefighters: A test of the depression-distress amplification model.

    Science.gov (United States)

    Stanley, Ian H; Smith, Lia J; Boffa, Joseph W; Tran, Jana K; Schmidt, N Brad; Joiner, Thomas E; Vujanovic, Anka A

    2018-04-07

    Firefighters represent an occupational group at increased suicide risk. How suicidality develops among firefighters is poorly understood. The depression-distress amplification model posits that the effects of depression symptoms on suicide risk will be intensified in the context of anxiety sensitivity (AS) cognitive concerns. The current study tested this model among firefighters. Overall, 831 firefighters participated (mean [SD] age = 38.37 y [8.53 y]; 94.5% male; 75.2% White). The Center for Epidemiologic Studies Depression Scale (CES-D), Anxiety Sensitivity Index-3 (ASI-3), and Suicidal Behaviors Questionnaire-Revised (SBQ-R) were utilized to assess for depression symptoms, AS concerns (cognitive, physical, social), and suicide risk, respectively. Linear regression interaction models were tested. The effects of elevated depression symptoms on increased suicide risk were augmented when AS cognitive concerns were also elevated. Unexpectedly, depression symptoms also interacted with AS social concerns; however, consistent with expectations, depression symptoms did not interact with AS physical concerns in the prediction of suicide risk. In the context of elevated depression symptoms, suicide risk is potentiated among firefighters reporting elevated AS cognitive and AS social concerns. Findings support and extend the depression-distress amplification model of suicide risk within a sample of firefighters. Interventions that successfully impact AS concerns may, in turn, mitigate suicide risk among this at-risk population. Copyright © 2018 Elsevier Inc. All rights reserved.
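
    A minimal sketch of the analysis strategy named in the abstract (a linear regression with a depression x AS-cognitive interaction term), using synthetic data; the variable names echo the CES-D, ASI-3 and SBQ-R measures, but the values and coefficients are invented for illustration.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 831   # same sample size as the study; data entirely synthetic
      df = pd.DataFrame({
          "cesd": rng.normal(0, 1, n),        # depression symptoms (standardized)
          "asi_cog": rng.normal(0, 1, n),     # anxiety sensitivity cognitive concerns (standardized)
      })
      # Build a synthetic outcome that contains a depression x AS-cognitive interaction.
      df["sbqr"] = 2.0 + 0.4 * df.cesd + 0.2 * df.asi_cog + 0.3 * df.cesd * df.asi_cog + rng.normal(0, 1, n)

      # "cesd * asi_cog" expands to both main effects plus their interaction term.
      fit = smf.ols("sbqr ~ cesd * asi_cog", data=df).fit()
      print(fit.summary().tables[1])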

  14. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization

    OpenAIRE

    Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2016-01-01

    Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin ...

  15. In vitro evaluation of matrix metalloproteinases as predictive testing for nickel, a model sensitizing agent

    International Nuclear Information System (INIS)

    Lamberti, Monica; Perfetto, Brunella; Costabile, Teresa; Canozo, Nunzia; Baroni, Adone; Liotti, Francesco; Sannolo, Nicola; Giuliano, Mariateresa

    2004-01-01

    The identification of potential damage due to chemical exposure in the workplace is a major health and regulatory concern. Traditional tests that measure both sensitization and elicitation responses require the use of animals. An alternative to this widespread use of experimental animals could have a crucial impact on risk assessment, especially for the preliminary screening of new molecules. We developed an in vitro model for the screening of potential toxic compounds. Human keratinocytes (HaCaT) were used as target cells, while matrix metalloproteinases (MMP) were selected as responders because they are key enzymes involved in extracellular matrix (ECM) degradation in physiological and pathological conditions. Chemical exposure was performed using nickel sulphate as a positive reference compound. Nickel contact induced upregulation of MMP-2 and IL-8 mRNA production. Molecular activation occurred even at very low nickel concentrations, even though no phenotypic changes were observed. MMP-9 accumulation was found in the medium of treated cells with respect to controls. These observations led to the hypothesis that even minimal exposure can accumulate transcriptional activity, resulting in long-term clinical signs after contact. Our simple in vitro model can be applied as a useful preliminary complement to animal studies to screen the effects of new potentially toxic compounds.

  16. Culture and Youth Psychopathology: Testing the Syndromal Sensitivity Model in Thai and American Adolescents

    Science.gov (United States)

    Weisz, John R.; Weiss, Bahr; Suwanlert, Somsong; Chaiyasit, Wanchai

    2006-01-01

    Current widespread use of the same youth assessment measures and scales across different nations assumes that youth psychopathology syndromes do not differ meaningfully across nations. By contrast, the authors' syndromal sensitivity model posits 3 processes through which cultural differences can lead to cross-national differences in…

  17. Design and clinical pilot testing of the model-based dynamic insulin sensitivity and secretion test (DISST).

    Science.gov (United States)

    Lotz, Thomas F; Chase, J Geoffrey; McAuley, Kirsten A; Shaw, Geoffrey M; Docherty, Paul D; Berkeley, Juliet E; Williams, Sheila M; Hann, Christopher E; Mann, Jim I

    2010-11-01

    Insulin resistance is a significant risk factor in the pathogenesis of type 2 diabetes. This article presents pilot study results of the dynamic insulin sensitivity and secretion test (DISST), a high-resolution, low-intensity test to diagnose insulin sensitivity (IS) and characterize pancreatic insulin secretion in response to a (small) glucose challenge. This pilot study examines the effect of glucose and insulin dose on the DISST, and tests its repeatability. DISST tests were performed on 16 subjects randomly allocated to low (5 g glucose, 0.5 U insulin), medium (10 g glucose, 1 U insulin) and high dose (20 g glucose, 2 U insulin) protocols. Two or three tests were performed on each subject a few days apart. Average variability in IS between low and medium dose was 10.3% (p=.50) and between medium and high dose 6.0% (p=.87). Geometric mean variability between tests was 6.0% (multiplicative standard deviation (MSD) 4.9%). Geometric mean variability in first phase endogenous insulin response was 6.8% (MSD 2.2%). Results were most consistent in subjects with low IS. These findings suggest that DISST may be an easily performed dynamic test to quantify IS with high resolution, especially among those with reduced IS. © 2010 Diabetes Technology Society.

  18. Testing the Nanoparticle-Allostatic Cross Adaptation-Sensitization Model for Homeopathic Remedy Effects

    Science.gov (United States)

    Bell, Iris R.; Koithan, Mary; Brooks, Audrey J.

    2012-01-01

    Key concepts of the Nanoparticle-Allostatic Cross-Adaptation-Sensitization (NPCAS) Model for the action of homeopathic remedies in living systems include source nanoparticles as low level environmental stressors, heterotypic hormesis, cross-adaptation, allostasis (stress response network), time-dependent sensitization with endogenous amplification and bidirectional change, and self-organizing complex adaptive systems. The model accommodates the requirement for measurable physical agents in the remedy (source nanoparticles and/or source adsorbed to silica nanoparticles). Hormetic adaptive responses in the organism, triggered by nanoparticles; bipolar, metaplastic change, dependent on the history of the organism. Clinical matching of the patient’s symptom picture, including modalities, to the symptom pattern that the source material can cause (cross-adaptation and cross-sensitization). Evidence for nanoparticle-related quantum macro-entanglement in homeopathic pathogenetic trials. This paper examines research implications of the model, discussing the following hypotheses: Variability in nanoparticle size, morphology, and aggregation affects remedy properties and reproducibility of findings. Homeopathic remedies modulate adaptive allostatic responses, with multiple dynamic short- and long-term effects. Simillimum remedy nanoparticles, as novel mild stressors corresponding to the organism’s dysfunction initiate time-dependent cross-sensitization, reversing the direction of dysfunctional reactivity to environmental stressors. The NPCAS model suggests a way forward for systematic research on homeopathy. The central proposition is that homeopathic treatment is a form of nanomedicine acting by modulation of endogenous adaptation and metaplastic amplification processes in the organism to enhance long-term systemic resilience and health. PMID:23290882

  19. Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates

    Science.gov (United States)

    Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.

    2010-12-01

    There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we now can test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, turn-over algorithms or parameters.

  20. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    International Nuclear Information System (INIS)

    Cassardo, C.; Carena, E.; Longhetto, A.

    1998-01-01

    The Land Surface Process Model (LSPM) has been improved with respect to the first version of 1994. The modifications have involved the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed, in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that this new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments were the initial soil moisture content and the rainfall intensity. In each experiment, the model was forced using the observations carried out at the synoptic stations of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) were used as initial and boundary conditions. The results of the simulation show that LSPM can reproduce well the energy, heat and water budgets and their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type.

  1. Tests of methods and software for set-valued model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Janssen PHM; Sanders R; CWM

    1995-01-01

    Tests are discussed that were performed on methods and software for calibration by means of 'rotated random scanning', and for sensitivity analysis based on 'dominant direction analysis' and 'generalized sensitivity analysis'. These techniques were

  2. Effects of snow grain shape on climate simulations: sensitivity tests with the Norwegian Earth System Model

    Directory of Open Access Journals (Sweden)

    P. Räisänen

    2017-12-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in radiative transfer calculations, snow grains are often treated as spherical. This also applies to the computation of snow albedo in the Snow, Ice, and Aerosol Radiation (SNICAR) model and in the Los Alamos sea ice model, version 4 (CICE4), both of which are employed in the Community Earth System Model and in the Norwegian Earth System Model (NorESM). In this study, we evaluate the effect of snow grain shape on climate simulated by NorESM in a slab ocean configuration of the model. An experiment with spherical snow grains (SPH) is compared with another (NONSPH) in which the snow shortwave single-scattering properties are based on a combination of three non-spherical snow grain shapes optimized using measurements of angular scattering by blowing snow. The key difference between these treatments is that the asymmetry parameter is smaller in the non-spherical case (0.77–0.78 in the visible region) than in the spherical case (≈ 0.89). Therefore, for the same effective snow grain size (or equivalently, the same specific projected area), the snow broadband albedo is higher when assuming non-spherical rather than spherical snow grains, typically by 0.02–0.03. Considering the spherical case as the baseline, this results in an instantaneous negative change in net shortwave radiation with a global-mean top-of-the-model value of ca. −0.22 W m−2. Although this global-mean radiative effect is rather modest, the impacts on the climate simulated by NorESM are substantial. The global annual-mean 2 m air temperature in NONSPH is 1.17 K lower than in SPH, with substantially larger differences at high latitudes. The climatic response is amplified by strong snow and sea ice feedbacks. It is further demonstrated that the effect of snow grain shape could be largely offset by adjusting the snow grain size. When assuming non-spherical snow grains with the parameterized grain

  3. Effects of snow grain shape on climate simulations: sensitivity tests with the Norwegian Earth System Model

    Science.gov (United States)

    Räisänen, Petri; Makkonen, Risto; Kirkevåg, Alf; Debernard, Jens B.

    2017-12-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in radiative transfer calculations, snow grains are often treated as spherical. This also applies to the computation of snow albedo in the Snow, Ice, and Aerosol Radiation (SNICAR) model and in the Los Alamos sea ice model, version 4 (CICE4), both of which are employed in the Community Earth System Model and in the Norwegian Earth System Model (NorESM). In this study, we evaluate the effect of snow grain shape on climate simulated by NorESM in a slab ocean configuration of the model. An experiment with spherical snow grains (SPH) is compared with another (NONSPH) in which the snow shortwave single-scattering properties are based on a combination of three non-spherical snow grain shapes optimized using measurements of angular scattering by blowing snow. The key difference between these treatments is that the asymmetry parameter is smaller in the non-spherical case (0.77-0.78 in the visible region) than in the spherical case ( ≈ 0.89). Therefore, for the same effective snow grain size (or equivalently, the same specific projected area), the snow broadband albedo is higher when assuming non-spherical rather than spherical snow grains, typically by 0.02-0.03. Considering the spherical case as the baseline, this results in an instantaneous negative change in net shortwave radiation with a global-mean top-of-the-model value of ca. -0.22 W m-2. Although this global-mean radiative effect is rather modest, the impacts on the climate simulated by NorESM are substantial. The global annual-mean 2 m air temperature in NONSPH is 1.17 K lower than in SPH, with substantially larger differences at high latitudes. The climatic response is amplified by strong snow and sea ice feedbacks. It is further demonstrated that the effect of snow grain shape could be largely offset by adjusting the snow grain size. When assuming non-spherical snow grains with the parameterized grain size increased by ca. 70 %, the

  4. Acute sensitivity of freshwater mollusks and commonly tested invertebrates to select chemicals with different toxic modes of action

    Science.gov (United States)

    Previous studies indicate that freshwater mollusks are more sensitive than commonly tested organisms to some chemicals, such as copper and ammonia. Nevertheless, mollusks are generally under-represented in toxicity databases. Studies are needed to generate data with which to comp...

  5. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  6. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

    observation of atmospheric CO2 concentrations at fixed monitoring stations. The LPDM model, which can be used to derive the sensitivity matrix used in an inversion, was run for each potential site for the months of July (representative of the Southern...

  7. Context Sensitive Modeling of Cancer Drug Sensitivity.

    Directory of Open Access Journals (Sweden)

    Bo-Juen Chen

    Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource towards developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer type is hampered by small sample size. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should (and should not) be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.

  8. An in vitro method for detecting chemical sensitization using human reconstructed skin models and its applicability to cosmetic, pharmaceutical, and medical device safety testing.

    Science.gov (United States)

    McKim, James M; Keller, Donald J; Gorski, Joel R

    2012-12-01

    Chemical sensitization is a serious condition caused by small reactive molecules and is characterized by a delayed-type hypersensitivity known as allergic contact dermatitis (ACD). Contact with these molecules via dermal exposure represents a significant concern for chemical manufacturers. Recent legislation in the EU has created the need to develop non-animal alternative methods for many routine safety studies, including sensitization. Although most of the alternative research has focused on pure chemicals that possess reasonable solubility properties, it is important for any successful in vitro method to have the ability to test compounds with low aqueous solubility. This is especially true for the medical device industry, where device extracts must be prepared in both polar and non-polar vehicles in order to evaluate chemical sensitization. The aim of this research was to demonstrate the functionality and applicability of the human reconstituted skin models (MatTek EpiDerm® and SkinEthic RHE) as a test system for the evaluation of chemical sensitization and its potential use for medical device testing. In addition, the development of the human 3D skin model should allow the in vitro sensitization assay to be used for finished product testing in the personal care, cosmetics, and pharmaceutical industries. This approach combines solubility, chemical reactivity, cytotoxicity, and activation of the Nrf2/ARE expression pathway to identify and categorize chemical sensitizers. Known chemical sensitizers representing extreme/strong-, moderate-, weak-, and non-sensitizing potency categories were first evaluated in the skin models at six exposure concentrations ranging from 0.1 to 2500 µM for 24 h. The expression of eight Nrf2/ARE-, one AhR/XRE- and two Nrf1/MRE-controlled genes was measured by qRT-PCR. The fold-induction at each exposure concentration was combined with reactivity and cytotoxicity data to determine the sensitization potential. The results demonstrated that

  9. Using High-Resolution Data to Test Parameter Sensitivity of the Distributed Hydrological Model HydroGeoSphere

    Directory of Open Access Journals (Sweden)

    Thomas Cornelissen

    2016-05-01

    Parameterization of physically based and distributed hydrological models for mesoscale catchments remains challenging because the commonly available database is insufficient for calibration. In this paper, we parameterize a mesoscale catchment for the distributed model HydroGeoSphere by transferring evapotranspiration parameters calibrated at a highly equipped headwater catchment, in addition to literature data. Based on this parameterization, an analysis of the sensitivity of the mesoscale catchment to spatial variability in land use, potential evapotranspiration and precipitation, and of the headwater catchment to mesoscale soil and land use data, was conducted. Simulations of the mesoscale catchment with transferred parameters reproduced daily discharge dynamics and monthly evapotranspiration of grassland, deciduous and coniferous vegetation in a satisfactory manner. Precipitation was the most sensitive input data with respect to total runoff and peak flow rates, while simulated evapotranspiration components and patterns were most sensitive to spatially distributed land use parameterization. At the headwater catchment, coarse soil data resulted in a change in runoff-generating processes based on the interplay between higher wetness prior to a rainfall event, enhanced groundwater level rise and, accordingly, lower transpiration rates. Our results indicate that the direct transfer of parameters is a promising method to benefit highly equipped simulations of the headwater catchments.

  10. Testing a river basin model with sensitivity analysis and autocalibration for an agricultural catchment in SW Finland

    Directory of Open Access Journals (Sweden)

    S. TATTARI

    2008-12-01

    Modeling tools are needed to assess (i) the amounts of loading from agricultural sources to water bodies as well as (ii) the alternative management options in varying climatic conditions. These days, the implementation of the Water Framework Directive (WFD) has put totally new requirements on modeling approaches. The physically based models are commonly not operational and thus the usability of these models is restricted to a few selected catchments. But the rewarding feature of these process-based models is the option to study the effect of protection measures on a catchment scale and, up to a certain point, the possibility to upscale the results. In this study, the parameterization of the SWAT model was developed in terms of discharge dynamics and nutrient loads, and a sensitivity analysis regarding discharge and sediment concentration was made. The SWAT modeling exercise was carried out for a 2nd order catchment (Yläneenjoki, 233 km2) of the Eurajoki river basin in southwestern Finland. The Yläneenjoki catchment has been intensively monitored during the last 14 years. Hence, there was enough background information available for both parameter setup and calibration. In addition to load estimates, SWAT also offers the possibility to assess the effects of various agricultural management actions like fertilization, tillage practices, choice of cultivated plants, buffer strips, sedimentation ponds and constructed wetlands (CWs) on loading. Moreover, information on local agricultural practices and the implemented and planned protective measures was readily available thanks to aware farmers and active authorities. Here, we studied how CWs can reduce the nutrient load at the outlet of the Yläneenjoki river basin. The results suggested that the sensitivity analysis and autocalibration tools incorporated in the model are useful in pointing out the most influential parameters, and that flow dynamics and annual loading values can be modeled with reasonable

  11. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    storage systems, where the actual values of the data are not relevant for the behavior of the system. For many systems the values are important. For instance, the control flow of the system can be dependent on the input values. We call this type of system data sensitive, as the execution is sensitive to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing efficient model-checking and model-based testing. In the second we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of data-sensitive systems and to allow learning of larger systems. In the third we develop an approach for modeling and model-based...

  12. Sensitivity testing of the model set-up used for calculation of photochemical ozone creation potentials (POCP) under European conditions

    Energy Technology Data Exchange (ETDEWEB)

    Altenstedt, J.; Pleijel, K.

    1998-02-01

    Photochemical Ozone Creation Potential (POCP) is a method to rank VOC, relative to other VOC, according to their ability to produce ground-level ozone. To obtain POCP values valid under European conditions, a critical analysis of the POCP concept has been performed using the IVL photochemical trajectory model. The critical analysis has concentrated on three VOC (ethene, n-butane and o-xylene) and has analysed the effect on their POCP values when different model parameters were varied. The three species were chosen because of their different degradation mechanisms in the atmosphere and thus their different abilities to produce ozone. The model parameters which have been tested include background emissions, initial concentrations, dry deposition velocities, the features of the added point source and meteorological parameters. The critical analysis shows that the background emissions of NOx and VOC have a critical impact on the POCP values. The hour of the day for the point source emission also shows a large influence on the POCP values. Other model parameters which have been studied have not shown such a large influence on the POCP values. Based on the critical analysis, a model set-up for calculation of POCP is defined. The variations in POCP values due to changes in the background emissions of NOx and VOC are so large that they cannot be disregarded in the calculation of POCP. It is recommended to calculate POCP ranges based on the extremes in POCP values instead of calculating site-specific POCP values. Four individual emission scenarios which produced the extremes in POCP values in the analysis have been selected for future calculation of POCP ranges. The scenarios are constructed based on the emissions in Europe and the resulting POCP ranges are thus intended to be applicable within Europe. (67 refs, 61 figs, 16 tabs)

  13. Effects of snow grain non-sphericity on climate simulations: Sensitivity tests with the NorESM model

    Science.gov (United States)

    Räisänen, Petri; Makkonen, Risto; Kirkevåg, Alf

    2017-04-01

    optically thick snowpack with a given snow grain effective size, the absorbing aerosol RE is smaller for non-spherical than for spherical snow grains. The reason for this is that due to the lower asymmetry parameter of the non-spherical snow grains, solar radiation does not penetrate as deep in snow as in the case of spherical snow grains. However, in a climate model simulation, the RE is sensitive to patterns of aerosol deposition and simulated snow cover. In fact, the global land-area mean absorbing aerosol RE is larger in the NONSPH than SPH experiment (0.193 vs. 0.168 W m-2), owing to later snowmelt in spring.

  14. APR1400 Fluidic Device Sensitivity Test

    International Nuclear Information System (INIS)

    Choi, Nam Hyun; Chu, In Cheol; Min, Kyong Ho; Song, Chul Hwa

    2005-12-01

    The safety injection tank in the emergency core cooling system of the APR1400 is equipped with a new safety design feature, a passive fluidic device, which includes no active driving system. It is essential to evaluate the new design feature with various experiments. For this reason, three categories of sensitivity tests have been performed in the present study. As the first sensitivity experiment, the effect of the height of the stand pipe was investigated. The second sensitivity test was conducted by removing the insert plate gasket to examine its effect. The effect of the expansion of the control nozzle width was ascertained from the third sensitivity test. The results of each test showed that the passive fluidic device which will be equipped in the SIT of the APR1400 has good integrity and repeatability.

  15. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  16. Forming limit curves of DP600 determined in high-speed Nakajima tests and predicted by two different strain-rate-sensitive models

    Science.gov (United States)

    Weiß-Borkowski, Nathalie; Lian, Junhe; Camberg, Alan; Tröster, Thomas; Münstermann, Sebastian; Bleck, Wolfgang; Gese, Helmut; Richter, Helmut

    2018-05-01

    Determination of forming limit curves (FLC) to describe the multi-axial forming behaviour is possible via either experimental measurements or theoretical calculations. In case of theoretical determination, different models are available and some of them consider the influence of strain rate in the quasi-static and dynamic strain rate regime. Consideration of the strain rate effect is necessary as many material characteristics such as yield strength and failure strain are affected by loading speed. In addition, the start of instability and necking depends not only on the strain hardening coefficient but also on the strain rate sensitivity parameter. Therefore, the strain rate dependency of materials for both plasticity and the failure behaviour is taken into account in crash simulations for strain rates up to 1000 s-1 and FLC can be used for the description of the material's instability behaviour at multi-axial loading. In this context, due to the strain rate dependency of the material behaviour, an extrapolation of the quasi-static FLC to dynamic loading condition is not reliable. Therefore, experimental high-speed Nakajima tests or theoretical models shall be used to determine the FLC at high strain rates. In this study, two theoretical models for determination of FLC at high strain rates and results of experimental high-speed Nakajima tests for a DP600 are presented. One of the theoretical models is the numerical algorithm CRACH as part of the modular material and failure model MF GenYld+CrachFEM 4.2, which is based on an initial imperfection. Furthermore, the extended modified maximum force criterion considering the strain rate effect is also used to predict the FLC. These two models are calibrated by the quasi-static and dynamic uniaxial tensile tests and bulge tests. The predictions for the quasi-static and dynamic FLC by both models are presented and compared with the experimental results.

  17. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

    The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
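
    A small sketch of a first-order hierarchical (cut-HDMR style) function expansion of the kind referred to above: each component function is tabulated by varying one input along a grid while the others are held at a reference point, and the resulting surrogate can stand in for the original model. The toy three-input model and grids are assumptions for illustration, not MRC's FEOM code.

      import numpy as np

      def cut_hdmr_first_order(model, x_ref, grids):
          # f(x) ~ f0 + sum_i f_i(x_i), with f_i(x_i) = f(x_i, x_ref_{-i}) - f0.
          x_ref = np.asarray(x_ref, dtype=float)
          f0 = model(x_ref)
          components = []
          for i, grid in enumerate(grids):
              vals = []
              for g in grid:
                  x = x_ref.copy()
                  x[i] = g
                  vals.append(model(x) - f0)            # tabulate the i-th component function
              components.append((np.asarray(grid), np.asarray(vals)))
          return f0, components

      def surrogate(f0, components, x):
          # Evaluate the first-order expansion by interpolating each tabulated component.
          return f0 + sum(np.interp(x[i], grid, vals) for i, (grid, vals) in enumerate(components))

      # Toy stand-in model with three inputs (illustrative only).
      model = lambda x: x[0] * np.exp(-x[1]) + 0.5 * x[2] ** 2
      x_ref = np.array([1.0, 0.5, 0.2])
      grids = [np.linspace(0.5, 2.0, 21), np.linspace(0.1, 1.0, 21), np.linspace(0.0, 1.0, 21)]
      f0, comps = cut_hdmr_first_order(model, x_ref, grids)
      x_new = np.array([1.5, 0.3, 0.8])
      print(surrogate(f0, comps, x_new), "vs exact", model(x_new))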

  18. Sensitive visual test for concave diffraction gratings.

    Science.gov (United States)

    Bruner, E. C., Jr.

    1972-01-01

    A simple visual test for the evaluation of concave diffraction gratings is described. It is twice as sensitive as the Foucault knife-edge test, from which it is derived, and has the advantage that the images are straight and free of astigmatism. It is particularly useful for gratings with high ruling frequency, where the above image faults limit the utility of the Foucault test. The test can be interpreted quantitatively and can detect zonal grating spacing errors of as little as 0.1 Å.

  19. Parametric Sensitivity Tests- European PEM Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack's performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen: relative humidity, temperature, pressure, and stoichiometry at varying current density. Furthermore, procedures for polarization curve recording were also tested both in ascending and descending current directions.

  20. Modelling nitrous oxide emissions from mown-grass and grain-cropping systems: Testing and sensitivity analysis of DailyDayCent using high frequency measurements.

    Science.gov (United States)

    Senapati, Nimai; Chabbi, Abad; Giostri, André Faé; Yeluripati, Jagadeesh B; Smith, Pete

    2016-12-01

    The DailyDayCent biogeochemical model was used to simulate nitrous oxide (N2O) emissions from two contrasting agro-ecosystems, viz. a mown-grassland and a grain-cropping system in France. Model performance was tested using high-frequency measurements over three years; additionally, a local sensitivity analysis was performed. Annual N2O emissions of 1.97 and 1.24 kg N ha-1 yr-1 were simulated from mown-grassland and grain-cropland, respectively. Measured and simulated water-filled pore space (r=0.86, ME=-2.5%) and soil temperature (r=0.96, ME=-0.63°C) at 10 cm soil depth matched well in mown-grassland. The model predicted cumulative hay and crop production effectively. The model simulated soil mineral nitrogen (N) concentrations, particularly ammonium (NH4+), reasonably, but it significantly underestimated soil nitrate (NO3-) concentration under both systems. In general, the model effectively simulated the dynamics and the magnitude of daily N2O flux over the whole experimental period in grain-cropland (r=0.16, ME=-0.81 g N ha-1 day-1), with reasonable agreement between measured and modelled N2O fluxes for the mown-grassland (r=0.63, ME=-0.65 g N ha-1 day-1). Our results indicate that DailyDayCent has potential for use as a tool for predicting overall N2O emissions in the study region. However, in-depth analysis shows some systematic discrepancies between measured and simulated N2O fluxes on a daily basis. The current exercise suggests that DailyDayCent may need improvement, particularly in the sub-module responsible for N transformations, for better simulating soil mineral N, especially soil NO3- concentration, and N2O flux on a daily basis. The sensitivity analysis shows that many factors such as climate change, N-fertilizer use, input uncertainty and parameter values could influence the simulation of N2O emissions. Sensitivity estimation also helped to identify critical parameters, which need careful estimation or site

  1. The mobilisation model and parameter sensitivity

    International Nuclear Information System (INIS)

    Blok, B.M.

    1993-12-01

    In the PRObabilistic Safety Assessment (PROSA) of radioactive waste in a salt repository, one of the nuclide release scenarios is the subrosion scenario. A new subrosion model, SUBRECN, has been developed. In this model the combined effect of depth-dependent subrosion, glass dissolution, and salt rise has been taken into account. The subrosion model SUBRECN and the implementation of this model in the German computer program EMOS4 are presented. A new computer program, PANTER, is derived from EMOS4. PANTER models releases of radionuclides via subrosion from a disposal site in a salt pillar into the biosphere. For the uncertainty and sensitivity analyses of the new subrosion model, Latin Hypercube Sampling has been used to determine the different values for the uncertain parameters. The influence of the uncertainty in the parameters on the dose calculations has been investigated with the following sensitivity techniques: Spearman Rank Correlation Coefficients, Partial Rank Correlation Coefficients, Standardised Rank Regression Coefficients, and the Smirnov Test. (orig./HP)
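
    The sketch below pairs the two techniques named in this record that are easiest to show compactly: a Latin Hypercube sample of uncertain parameters and a Spearman rank correlation of each parameter with the output. The parameter names, ranges and the stand-in "dose" function are illustrative assumptions; the SUBRECN/PANTER models are not reproduced.

      import numpy as np
      from scipy.stats import qmc, spearmanr

      # Latin Hypercube sample of three uncertain parameters (names and ranges are illustrative).
      sampler = qmc.LatinHypercube(d=3, seed=42)
      lower = [1e-6, 0.1, 10.0]     # e.g. subrosion rate, glass dissolution rate, salt rise rate
      upper = [1e-4, 1.0, 100.0]
      x = qmc.scale(sampler.random(n=200), lower, upper)

      # Stand-in "dose" model: a simple monotone function of the sampled parameters.
      dose = np.sqrt(x[:, 0]) * x[:, 1] / (1.0 + x[:, 2])

      # Rank parameter importance by Spearman rank correlation with the dose.
      for j, name in enumerate(["subrosion", "dissolution", "salt_rise"]):
          rho, p = spearmanr(x[:, j], dose)
          print(f"{name:12s} rho = {rho:+.2f} (p = {p:.3g})")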

  2. Variation of a test's sensitivity and specificity with disease prevalence.

    Science.gov (United States)

    Leeflang, Mariska M G; Rutjes, Anne W S; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M M

    2013-08-06

    Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses, each of which included 10-39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant. Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
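
    The final pooling step mentioned in the methods (inverse variance pooling of the per-review effects of prevalence) is simple enough to show directly; the sketch below uses made-up per-review slopes and standard errors, not the 23 meta-analyses analysed in the paper, and it does not reproduce the bivariate random-effects models fitted within each review.

      import numpy as np

      def inverse_variance_pool(effects, std_errors):
          # Fixed-effect inverse-variance pooling: weight each effect by 1/SE^2.
          effects, std_errors = np.asarray(effects, float), np.asarray(std_errors, float)
          w = 1.0 / std_errors ** 2
          pooled = np.sum(w * effects) / np.sum(w)
          se_pooled = np.sqrt(1.0 / np.sum(w))
          return pooled, se_pooled

      # Hypothetical per-review effects of prevalence on (logit) specificity, with standard errors.
      effects = [-0.8, -1.2, -0.3, -0.6]
      ses = [0.4, 0.5, 0.3, 0.6]
      est, se = inverse_variance_pool(effects, ses)
      print(f"pooled effect = {est:.2f} (95% CI {est - 1.96 * se:.2f} to {est + 1.96 * se:.2f})")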

  3. On the use of sensitivity tests in seismic tomography

    NARCIS (Netherlands)

    Rawlinson, N.; Spakman, W.

    2016-01-01

    Sensitivity analysis with synthetic models is widely used in seismic tomography as a means for assessing the spatial resolution of solutions produced by, in most cases, linear or iterative nonlinear inversion schemes. The most common type of synthetic reconstruction test is the so-called checkerboard resolution test.

  4. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased, relative to the background methane chemistry, by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  5. Validation of Clinical Testing for Warfarin Sensitivity

    Science.gov (United States)

    Langley, Michael R.; Booker, Jessica K.; Evans, James P.; McLeod, Howard L.; Weck, Karen E.

    2009-01-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 −1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses. PMID:19324988

  6. Re-test reliability of gustatory testing and introduction of the sensitive Taste-Drop-Test

    DEFF Research Database (Denmark)

    Fjaeldstad, A; Niklassen, A; Fernandes, H

    2018-01-01

    Testing gustatory function can be important for diagnostics and assessment of treatment effects. However, the gustatory tests applied are required to be both sensitive and reliable. In this study, we investigate the re-test validity of the popular Taste Strips gustatory test for gustatory screening. Furthermore, we introduce a new sensitive Taste-Drop-Test, which was found to be superior in providing an accurate measure of tastant sensitivity.

  7. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 2: Sensitivity tests and results

    Science.gov (United States)

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational–Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by better honoring inversion structures in the background state.

  8. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 2: Sensitivity Tests and Results

    Science.gov (United States)

    Norris, Peter M.; da Silva, Arlindo M.

    2016-01-01

    Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by better honoring inversion structures in the background state.

  9. Testing of a Model with Latino Patients That Explains the Links Among Patient-Perceived Provider Cultural Sensitivity, Language Preference, and Patient Treatment Adherence.

    Science.gov (United States)

    Nielsen, Jessica D Jones; Wall, Whitney; Tucker, Carolyn M

    2016-03-01

    Disparities in treatment adherence based on race and ethnicity are well documented but poorly understood. Specifically, the causes of treatment nonadherence among Latino patients living in the USA are complex and include cultural and language barriers. The purpose of this study was to examine whether patients' perceptions in patient-provider interactions (i.e., trust in provider, patient satisfaction, and patient sense of interpersonal control in patient-provider interactions) mediate any found association between patient-perceived provider cultural sensitivity (PCS) and treatment adherence among English-preferred Latino (EPL) and Spanish-preferred Latino (SPL) patients. Data from 194 EPL patients and 361 SPL patients were obtained using questionnaires. A series of language-specific structural equation models were conducted to test the relationship between patient-perceived PCS and patient treatment adherence and the examined mediators of this relationship among the Latino patients. No significant direct effects of patient-perceived PCS on general treatment adherence were found. However, as hypothesized, several significant indirect effects emerged. Preferred language appeared to have moderating effects on the relationships between patient-perceived PCS and general treatment adherence. These results suggest that interventions to promote treatment adherence among Latino patients should likely include provider training to foster patient-defined PCS, trust in provider, and patient satisfaction with care. Furthermore, this training needs to be customized to be suitable for providing care to Latino patients who prefer speaking Spanish and Latino patients who prefer speaking English.

  10. Testing the sensitivity of Staphylococcus aureus antibiotics

    Directory of Open Access Journals (Sweden)

    Marioara Nicoleta FILIMON

    2009-11-01

    This study aimed to establish and test the sensitivity of Staphylococcus aureus to antibiotics. Superficial skin infections cause a range of injuries, from simple pimples to life-threatening infections such as abscesses, furuncles, septicemia, meningitis, toxin-mediated food poisoning, and urinary tract infections in sexually active young women. Samples were taken from 30 people with staphylococcal infections: nineteen women and eleven men, between the ages of 2 and 79. During the study, antibiograms were performed based on pharyngeal exudates, acne secretions and urine cultures. The most effective of the recommended antibiotics were found to be oxacillin, erythromycin, rifampicin and ciprofloxacin. Penicillin turned out to be less effective at eliminating Staphylococcus aureus.

  11. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations . Part II; Sensitivity Tests and Results

    Science.gov (United States)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.
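
    As a purely conceptual illustration of why a sampling approach can succeed where an infinitesimal, gradient-based increment cannot, the toy Metropolis sampler below starts from a clear background state and still visits cloudy states that are consistent with a cloudy observation. The prior, likelihood and all numbers are invented for the sketch and have no connection to the assimilation system described in the abstract.

```python
# Toy illustration (not the MODIS/OMI assimilation code): a Metropolis sampler
# can jump from a clear-sky background (cloud fraction ~ 0) into cloudy states
# that match a cloudy observation, whereas an infinitesimal gradient update
# around a subsaturated background cannot create cloud.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(cloud_fraction, obs=0.6, obs_sigma=0.1, clear_weight=0.7):
    if not 0.0 <= cloud_fraction <= 1.0:
        return -np.inf
    # Assumed mixture prior: mostly clear, but finite probability of cloud
    prior = clear_weight * np.exp(-cloud_fraction / 0.05) + (1.0 - clear_weight)
    like = np.exp(-0.5 * ((cloud_fraction - obs) / obs_sigma) ** 2)
    return np.log(prior * like + 1e-300)

state, samples = 0.0, []            # start from a clear background state
for _ in range(5000):
    proposal = state + rng.normal(0.0, 0.2)      # finite jump, not infinitesimal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(state):
        state = proposal
    samples.append(state)

print(f"posterior mean cloud fraction = {np.mean(samples[1000:]):.2f}")
```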

  12. Non-animal sensitization testing: state-of-the-art.

    Science.gov (United States)

    Vandebriel, Rob J; van Loveren, Henk

    2010-05-01

    Predictive tests to identify the sensitizing properties of chemicals are carried out using animals. In the European Union, timelines for phasing out many standard animal tests were established for cosmetics. Following this policy, the new European Chemicals Legislation (REACH) favors alternative methods, if validated and appropriate. In this review the authors aim to provide a state-of-the-art overview of alternative methods (in silico, in chemico, and in vitro) to identify contact and respiratory sensitizing capacity and in some occasions give a measure of potency. The past few years have seen major advances in QSAR (quantitative structure-activity relationship) models where especially mechanism-based models have great potential, peptide reactivity assays where multiple parameters can be measured simultaneously, providing a more complete reactivity profile, and cell-based assays. Several cell-based assays are in development, not only using different cell types, but also several specifically developed assays such as three-dimensionally (3D) reconstituted skin models, an antioxidant response reporter assay, determination of signaling pathways, and gene profiling. Some of these assays show relatively high sensitivity and specificity for a large number of sensitizers and should enter validation (or are indeed entering this process). Integrating multiple assays in a decision tree or integrated testing system is a next step, but has yet to be developed. Adequate risk assessment, however, is likely to require significantly more time and efforts.

  13. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO₂) on the chemical evolution of water in the drift.

  14. Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis

    International Nuclear Information System (INIS)

    Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.

    1991-01-01

    The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs

  15. Effects of oxygen on intrinsic radiation sensitivity: A test of the relationship between aerobic and hypoxic linear-quadratic (LQ) model parameters

    International Nuclear Information System (INIS)

    Carlson, David J.; Stewart, Robert D.; Semenenko, Vladimir A.

    2006-01-01

    The poor treatment prognosis for tumors with high levels of hypoxia is usually attributed to the decreased sensitivity of hypoxic cells to ionizing radiation. Mechanistic considerations suggest that linear-quadratic (LQ) survival model radiosensitivity parameters for hypoxic (H) and aerobic (A) cells are related by α_H = α_A/OER and (α/β)_H = OER·(α/β)_A, where OER is the oxygen enhancement ratio. The OER parameter may be interpreted as the ratio of the dose to the hypoxic cells to the dose to the aerobic cells required to produce the same number of DSBs per cell. The validity of these expressions is tested against survival data for mammalian cells irradiated in vitro with low- and high-LET radiation. Estimates of hypoxic and aerobic radiosensitivity parameters are derived from independent and simultaneous least-squares fits to the survival data. An external bootstrap procedure is used to test whether independent fits to the survival data give significantly better predictions than simultaneous fits to the aerobic and hypoxic data. For low-LET radiation, estimates of the OER derived from the in vitro data are between 2.3 and 3.3 for extreme levels of hypoxia. The estimated range for the OER is similar to the oxygen enhancement ratios reported in the literature for the initial yield of DSBs. The half-time for sublethal damage repair was found to be independent of oxygen concentration. Analysis of patient survival data for cervix cancer suggests an average OER less than or equal to 1.5, which corresponds to a pO₂ of 5 mm Hg (0.66%) in the in vitro experiments. Because the OER derived from the cervix cancer data is averaged over cells at all oxygen levels, cells irradiated in vivo under extreme levels of hypoxia (<0.5 mm Hg) may have an OER substantially higher than 1.5. The reported analyses of in vitro data, as well as mechanistic considerations, provide strong support for the expressions relating hypoxic and aerobic radiosensitivity parameters. The formulas are also useful
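
    A small sketch can make the quoted relations concrete: given assumed aerobic LQ parameters and an OER within the range reported above, the hypoxic parameters follow from α_H = α_A/OER and (α/β)_H = OER·(α/β)_A (which implies β_H = β_A/OER²), and the surviving fraction can then be compared at a few doses. The parameter values below are illustrative, not fitted values from the paper.

```python
# Sketch of the hypoxic/aerobic LQ relations quoted above:
#   alpha_H = alpha_A / OER,  (alpha/beta)_H = OER * (alpha/beta)_A
# which implies beta_H = beta_A / OER**2. Parameter values are illustrative only.
import numpy as np

def lq_survival(dose, alpha, beta):
    """Linear-quadratic surviving fraction S = exp(-alpha*D - beta*D^2)."""
    return np.exp(-alpha * dose - beta * dose**2)

alpha_A, beta_A = 0.3, 0.03      # assumed aerobic parameters (Gy^-1, Gy^-2)
oer = 2.8                        # within the 2.3-3.3 range reported above

alpha_H = alpha_A / oer
beta_H = beta_A / oer**2

for dose in (2.0, 6.0, 10.0):
    print(f"D = {dose:4.1f} Gy  S_aerobic = {lq_survival(dose, alpha_A, beta_A):.3e}"
          f"  S_hypoxic = {lq_survival(dose, alpha_H, beta_H):.3e}")
```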

  16. Highly sensitive silicon microreactor for catalyst testing

    DEFF Research Database (Denmark)

    Henriksen, Toke Riishøj; Olsen, Jakob Lind; Vesborg, Peter Christian Kjærgaard

    2009-01-01

    High sensitivity is achieved by directing the entire gas flow through the catalyst bed to a mass spectrometer, thus ensuring that nearly all reaction products are present in the analyzed gas flow. Although the device can be employed for testing a wide range of catalysts, the primary aim of the design is to allow characterization of model catalysts which can only be obtained in small quantities. Such measurements are of significant fundamental interest but are challenging because of the low surface areas involved. The relationship between the reaction zone gas flow and the pressure in the reaction zone is investigated experimentally; it is found that platinum catalysts with areas as small as 15 μm² are conveniently characterized with the device. (C) 2009 American Institute of Physics. [doi:10.1063/1.3270191]

  17. Development of an artificial neural network model for risk assessment of skin sensitization using human cell line activation test, direct peptide reactivity assay, KeratinoSens™ and in silico structure alert parameter.

    Science.gov (United States)

    Hirota, Morihiko; Ashikaga, Takao; Kouzuki, Hirokazu

    2018-04-01

    It is important to predict the potential of cosmetic ingredients to cause skin sensitization, and in accordance with the European Union cosmetic directive for the replacement of animal tests, several in vitro tests based on the adverse outcome pathway have been developed for hazard identification, such as the direct peptide reactivity assay, KeratinoSens™ and the human cell line activation test. Here, we describe the development of an artificial neural network (ANN) prediction model for skin sensitization risk assessment based on the integrated testing strategy concept, using direct peptide reactivity assay, KeratinoSens™, human cell line activation test and an in silico or structure alert parameter. We first investigated the relationship between published murine local lymph node assay EC3 values, which represent skin sensitization potency, and in vitro test results using a panel of about 134 chemicals for which all the required data were available. Predictions based on ANN analysis using combinations of parameters from all three in vitro tests showed a good correlation with local lymph node assay EC3 values. However, when the ANN model was applied to a testing set of 28 chemicals that had not been included in the training set, predicted EC3s were overestimated for some chemicals. Incorporation of an additional in silico or structure alert descriptor (obtained with TIMES-M or Toxtree software) in the ANN model improved the results. Our findings suggest that the ANN model based on the integrated testing strategy concept could be useful for evaluating the skin sensitization potential. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Sensitivity Test of Parameters Influencing Flood Hydrograph Routing with a Diffusion-Wave Distributed using Distributed Hydrological Model, Wet Spa, in Ziarat Watershed

    Directory of Open Access Journals (Sweden)

    narges javidan

    2017-02-01

    Meteorological and hydrological data, including evapotranspiration, temperature, and discharge, are used as inputs. Additionally, three main maps, namely the digital elevation model, the soil map (texture), and land use, are applied and converted to digital formats. The result of the simulation shows good agreement between the simulated hydrograph and the observed one. The routing of overland flow and channel flow is implemented by the method of the diffusive wave approximation. A sensitivity test shows that the flood frequency parameter and the channel roughness coefficient have a large influence on the outflow hydrograph and the calculated watershed unit hydrograph, while the threshold of minimum slope and the threshold of drainage area used in delineating channel networks have a marginal effect.

  19. Lamb waves increase sensitivity in nondestructive testing

    Science.gov (United States)

    Di Novi, R.

    1967-01-01

    Lamb waves improve sensitivity and resolution in the detection of small defects in thin plates and small diameter, thin-walled tubing. This improvement over shear waves applies to both longitudinal and transverse flaws in the specimens.

  20. Sensitivity tests on the rates of the excited states of positron decays during the rapid proton capture process of the one-zone X-ray burst model

    Science.gov (United States)

    Lau, Rita

    2018-02-01

    In this paper, we investigate the sensitivities of positron decays on a one-zone model of type-I X-ray bursts. Most existing studies have multiplied or divided entire beta decay rates (electron captures and beta decay rates) by 10. Instead of using the standard Fuller & Fowler (FFNU) rates, we used the most recently developed weak library rates [1], which include rates from Langanke et al.'s table (the LMP table) (2000) [2], Langanke et al.'s table (the LMSH table) (2003) [3], and Oda et al.'s table (1994) [4] (all shell model rates). We then compared these table rates with the old FFNU rates [5] to study differences within the final abundances. Both positron decays and electron capture rates were included in the tables. We also used pn-QRPA rates [6,7] to study the differences within the final abundances. Many of the positron rates from the nuclei's ground states and initial excited energy states along the rapid proton capture (rp) process have been measured in existing studies. However, because temperature affects the rates of excited states, these studies should have also acknowledged the half-lives of the nuclei's excited states. Thus, instead of multiplying or dividing entire rates by 10, we studied how the half-lives of sensitive nuclei in excited states affected the abundances by dividing the half-lives of the ground states by 10, which allowed us to set the half-lives of the excited states. Interestingly, we found that the peak of the final abundance shifted when we modified the rates from the excited states of the 105Sn positron decay rates. Furthermore, the abundance of 80Zr also changed due to usage of pn-QRPA rates instead of weak library rates (the shell model rates).

  1. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
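
    For readers unfamiliar with variance-based indices, the sketch below estimates first-order Sobol' indices for the standard independent-input case with a Saltelli-style pick-and-freeze estimator; the paper's extension to dependent inputs via input orthogonalisation is not reproduced here. The test function and sample size are arbitrary choices made for illustration.

```python
# Minimal sketch of first-order variance-based (Sobol') indices for the
# independent-input case; the dependent-input extension is not shown.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Ishigami-style test function, chosen only for illustration
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # "freeze" input i from the B sample
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y    # Saltelli-style first-order estimator
    print(f"S_{i + 1} = {S_i:.2f}")
```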

  2. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a

  3. Precipitates/Salts Model Sensitivity Calculation

    Energy Technology Data Exchange (ETDEWEB)

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO₂) on the chemical evolution of water in the drift.

  4. Sensitivity analysis using DECOMP and METOXA subroutines of the MAAP code in modelling core concrete interaction phenomena and post test calculations for ACE-MCCI experiment L-5

    International Nuclear Information System (INIS)

    Passalacqua, R.A.

    1991-01-01

    A parametric analysis approach was chosen in order to study core-concrete interaction phenomena. The analysis was performed using a stand-alone version of the MAAP-DECOMP model (DOE version). This analysis covered only those parameters known to have the largest effect on thermohydraulics and fission product aerosol release. Even though the main purpose of the effort was model validation, it eventually resulted in a better understanding of the core-concrete interaction physics and in a more correct interpretation of the ACE-MCCI L5 experimental data. Unusually low heat transfer fluxes from the debris pool to the cavity (the volume surrounding the corium) were modeled in order to obtain a good benchmark against the experimental data. Therefore, higher debris pool temperatures were predicted. In the case of water flooding, as a consequence of the critical heat flux through the upper crust and the increase of the crust thickness, the resulting high debris pool temperatures cause an increase in the concrete ablation rate in the short term. The DECOMP model predicts a quick increase of the crust thickness and, as a result, causes the quenching of the molten mass. However, especially for fast transients, crust bridge formation can occur. Thus, the upward directed heat flux is minimized and the concrete erosion rate remains considerable in the long term as well. The model validation is based, in these calculations, on post-test predictions using the MCCI L5 test data: these data are derived from results of the 'Molten Core Concrete Interaction' (MCCI) experiments, which in turn are part of the larger Advanced Containment Experiment (ACE) program. Other calculations were also performed for the newly proposed MACE (Melt Debris Attack and Coolability) experiments simulating the water flooding of the cavity. Those calculations are preliminarily compared with the recent MACE scoping test results. (author) 4 tabs., 59 figs., 5 refs

  5. Differential neuropsychological test sensitivity to left temporal lobe epilepsy.

    Science.gov (United States)

    Loring, David W; Strauss, Esther; Hermann, Bruce P; Barr, William B; Perrine, Kenneth; Trenerry, Max R; Chelune, Gordon; Westerveld, Michael; Lee, Gregory P; Meador, Kimford J; Bowden, Stephen C

    2008-05-01

    We examined the sensitivity of the Rey Auditory Verbal Learning Test (AVLT), California Verbal Learning Test (CVLT), Boston Naming Test (BNT), and Multilingual Aphasia Examination Visual Naming subtest (MAE VN) to lateralized temporal lobe epilepsy (TLE) in patients who subsequently underwent anterior temporal lobectomy. For the AVLT (n = 189), left TLE patients performed more poorly than their right TLE counterparts [left TLE = 42.9 (10.6), right TLE = 47.7 (9.9)]. A similar pattern was seen for the CVLT [left TLE = 40.7 (11.1), right TLE = 43.8 (9.9)] and for measures of confrontation naming ability [BNT: left TLE = 43.1 (8.9), right TLE = 48.1 (8.9); p < .001 (Cohen's d = .56); MAE VN: left TLE = 42.2, right TLE = 45.6, p = .02 (Cohen's d = .36)]. When these data were modeled in independent logistic regression analyses, the AVLT and BNT both significantly predicted side of seizure focus, although the positive likelihood ratios were modest. In the subset of 108 patients receiving both the BNT and AVLT, the AVLT was the only significant predictor of seizure laterality, suggesting individual patient variability regarding whether naming or memory testing may be more sensitive to lateralized TLE.

  6. An in vitro human skin test for assessing sensitization potential.

    Science.gov (United States)

    Ahmed, S S; Wang, X N; Fielding, M; Kerry, A; Dickinson, I; Munuswamy, R; Kimber, I; Dickinson, A M

    2016-05-01

    Sensitization to chemicals resulting in an allergy is an important health issue. The current gold-standard method for identification and characterization of skin-sensitizing chemicals was the mouse local lymph node assay (LLNA). However, for a number of reasons there has been an increasing imperative to develop alternative approaches to hazard identification that do not require the use of animals. Here we describe a human in-vitro skin explant test for identification of sensitization hazards and the assessment of relative skin sensitizing potency. This method measures histological damage in human skin as a readout of the immune response induced by the test material. Using this approach we have measured responses to 44 chemicals including skin sensitizers, pre/pro-haptens, respiratory sensitizers, non-sensitizing chemicals (including skin-irritants) and previously misclassified compounds. Based on comparisons with the LLNA, the skin explant test gave 95% specificity, 95% sensitivity, 95% concordance with a correlation coefficient of 0.9. The same specificity and sensitivity were achieved for comparison of results with published human sensitization data with a correlation coefficient of 0.91. The test also successfully identified nickel sulphate as a human skin sensitizer, which was misclassified as negative in the LLNA. In addition, sensitizers and non-sensitizers identified as positive or negative by the skin explant test have induced high/low T cell proliferation and IFNγ production, respectively. Collectively, the data suggests the human in-vitro skin explant test could provide the basis for a novel approach for characterization of the sensitizing activity as a first step in the risk assessment process. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Variation of a test's sensitivity and specificity with disease prevalence

    NARCIS (Netherlands)

    Leeflang, Mariska M. G.; Rutjes, Anne W. S.; Reitsma, Johannes B.; Hooft, Lotty; Bossuyt, Patrick M. M.

    2013-01-01

    Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses of diagnostic accuracy studies.

  8. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling, though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
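
    The idea of checking ranking convergence separately from index-value convergence can be illustrated with a small, self-contained sketch using a toy model and a simple rank-correlation sensitivity measure, not the Hymod/HBV/SWAT setups or the methods used in the study: bootstrap resamples of the same model runs are used to see how often the parameter ranking stays the same.

```python
# Illustrative sketch (not the study's code): bootstrap resampling of a
# sampling-based sensitivity measure to check whether the parameter *ranking*
# has converged, without requiring fully converged index values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n, d = 500, 4
X = rng.uniform(0, 1, (n, d))
# Toy model: two influential inputs, one weak input, one inactive input
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

def indices(Xs, ys):
    """Absolute rank correlation of each input with the output."""
    return np.array([abs(spearmanr(Xs[:, j], ys)[0]) for j in range(Xs.shape[1])])

reference_ranking = np.argsort(-indices(X, y))

same_ranking = 0
for _ in range(200):                           # bootstrap replicates
    idx = rng.integers(0, n, n)
    ranking = np.argsort(-indices(X[idx], y[idx]))
    same_ranking += np.array_equal(ranking, reference_ranking)

print(f"fraction of replicates reproducing the ranking: {same_ranking / 200:.2f}")
```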

  9. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens TM assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay.

  10. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.

  11. Sensitivity and Specificity of Clinical and Laboratory Otolith Function Tests.

    Science.gov (United States)

    Kumar, Lokesh; Thakar, Alok; Thakur, Bhaskar; Sikka, Kapil

    2017-10-01

    To evaluate clinic-based and laboratory tests of otolith function for their sensitivity and specificity in demarcating unilateral compensated complete vestibular deficit from normal. Prospective cross-sectional study. Tertiary care hospital vestibular physiology laboratory. Control group: 30 healthy adults, aged 20-45 years; case group: 15 subjects post vestibular schwannoma excision or post-labyrinthectomy with compensated unilateral complete audio-vestibular loss. Otolith function was evaluated by precise clinical testing (head tilt test, HTT; subjective visual vertical, SVV) and laboratory testing (head roll-eye counterroll, HR-ECR; vestibular evoked myogenic potentials, cVEMP). Sensitivity and specificity of clinical and laboratory tests in differentiating case and control subjects were assessed. Measurable test results were universally obtained with clinical otolith tests (SVV, HTT) but not with laboratory tests. The HR-ECR test did not yield any definitive wave forms in 10% of controls and 26% of cases. cVEMP responses were absent in 10% of controls. The HTT, with a normative cutoff at 2 degrees of deviation from vertical, was 93.33% sensitive and 100% specific. The SVV, with a normative cutoff at 1.3 degrees, was 100% sensitive and 100% specific. Laboratory tests demonstrated poorer specificities, owing primarily to significant unresponsiveness in normal controls. Clinical otolith function tests, if conducted with precision, demonstrate greater ability than laboratory testing in discriminating normal controls from cases with unilateral complete compensated vestibular dysfunction.

  12. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  13. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  14. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach for performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a meta-model. An application is also presented to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)

  15. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Five evapotranspiration (ET) models - the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models - were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...

  16. Validity, Reliability, and Sensitivity of a Volleyball Intermittent Endurance Test.

    Science.gov (United States)

    Rodríguez-Marroyo, Jose A; Medina-Carrillo, Javier; García-López, Juan; Morante, Juan C; Villa, José G; Foster, Carl

    2017-03-01

    To analyze the concurrent and construct validity of a volleyball intermittent endurance test (VIET). The VIET's test-retest reliability and sensitivity to assess seasonal changes were also studied. During the preseason, 71 volleyball players of different competitive levels took part in this study. All performed the VIET and a graded treadmill test with gas-exchange measurement (GXT). Thirty-one of the players performed an additional VIET to analyze the test-retest reliability. To test the VIET's sensitivity, 28 players repeated the VIET and GXT at the end of their season. Significant relationships and seasonal changes supported the validity, reliability, and sensitivity of the VIET in volleyball players.

  17. The Development and Validation of the Vocalic Sensitivity Test.

    Science.gov (United States)

    Villaume, William A.; Brown, Mary Helen

    1999-01-01

    Notes that presbycusis, hearing loss associated with aging, may be marked by a second dimension of hearing loss, a loss in vocalic sensitivity. Reports on the development of the Vocalic Sensitivity Test, which controls for the verbal elements in speech while also allowing for the vocalics to exercise their normal metacommunicative function of…

  18. Evaluating the Instructional Sensitivity of Four States' Student Achievement Tests

    Science.gov (United States)

    Polikoff, Morgan S.

    2016-01-01

    As state tests of student achievement are used for an increasingly wide array of high- and low-stakes purposes, evaluating their instructional sensitivity is essential. This article uses data from the Bill and Melinda Gates Foundation's Measures of Effective Teaching (MET) project to examine the instructional sensitivity of 4 states' mathematics and English…

  19. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
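
    A brief sketch can show the shape of the modelling step described above, using synthetic data in place of the ICCVAM training set; the feature columns, label rule and sample sizes are all assumptions made for illustration, so the reported accuracy has no relation to the 92% figure in the abstract.

```python
# Hedged sketch of the described approach (toy data, not the ICCVAM data set):
# combine assay readouts and descriptors into features and fit logistic
# regression and support vector machine classifiers for hazard prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 96
# Hypothetical feature columns: DPRA depletion, h-CLAT response, KeratinoSens
# fold induction, log P, and a binary read-across call
X = np.column_stack([
    rng.uniform(0, 100, n),      # DPRA % peptide depletion (assumed scale)
    rng.uniform(0, 10, n),       # h-CLAT response (assumed scale)
    rng.uniform(0, 20, n),       # KeratinoSens fold induction (assumed scale)
    rng.uniform(-2, 6, n),       # log P
    rng.integers(0, 2, n),       # read-across prediction (0/1)
])
# Synthetic hazard label, invented only so the classifiers have signal to learn
y = (0.02 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + X[:, 4]
     + rng.normal(0, 0.5, n) > 4.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=24, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("support vector machine", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy on held-out set = {acc:.2f}")
```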

  20. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems

  1. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
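
    The derivative-propagation idea that GRESS automates for FORTRAN codes can be conveyed with a minimal forward-mode automatic differentiation sketch using dual numbers; this is a conceptual illustration only and is unrelated to the actual GRESS implementation or the direct/adjoint machinery it generates.

```python
# Conceptual sketch of forward-mode automatic differentiation with dual numbers,
# illustrating the kind of derivative propagation that tools like GRESS automate
# (this is not the GRESS implementation).
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def exp(x):
    """Exponential with derivative propagation for Dual arguments."""
    return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

# Toy model result R(k) = k * exp(k); sensitivity dR/dk evaluated at k = 0.5
k = Dual(0.5, 1.0)                 # seed the input derivative with 1.0
R = k * exp(k)
print(f"R = {R.value:.4f}, dR/dk = {R.deriv:.4f}")   # analytic: exp(k) * (1 + k)
```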

  2. Vendor Testing of Sensitive Compounds in Simulated Dry Sludge

    International Nuclear Information System (INIS)

    Dworjanyn, L.O.

    1999-01-01

    This assessment covers thermal screening, differential scanning calorimetry, and impact sensitivity testing on Mercury Fulminate, and mixtures of the fulminate in dry inorganic sludge, which is present in large quantities in a number of storage tanks at Westinghouse Savannah River

  3. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  4. Testing the standard model

    International Nuclear Information System (INIS)

    Gordon, H.; Marciano, W.; Williams, H.H.

    1982-01-01

    We summarize here the results of the standard model group which has studied the ways in which different facilities may be used to test in detail what we now call the standard model, that is SU_c(3) x SU(2) x U(1). The topics considered are: W±, Z⁰ mass and width; sin²θ_W and neutral current couplings; W⁺W⁻, Wγ; Higgs; QCD; toponium and naked quarks; glueballs; mixing angles; and heavy ions.

  5. Testing the Perturbation Sensitivity of Abortion-Crime Regressions

    Directory of Open Access Journals (Sweden)

    Michał Brzeziński

    2012-06-01

    Full Text Available The hypothesis that the legalisation of abortion contributed significantly to the reduction of crime in the United States in the 1990s is one of the most prominent ideas from the recent “economics-made-fun” movement sparked by the book Freakonomics. This paper expands on the existing literature about the computational stability of abortion-crime regressions by testing the sensitivity of coefficients’ estimates to small amounts of data perturbation. In contrast to previous studies, we use a new data set on crime correlates for each of the US states, the original model specification and estimation methodology, and an improved data perturbation algorithm. We find that the coefficients’ estimates in abortion-crime regressions are not computationally stable and, therefore, are unreliable.
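
    A hedged sketch of the kind of perturbation-stability check the paper describes, on synthetic data rather than the actual state-level crime correlates: small random noise is repeatedly added to the regressors, the OLS fit is repeated, and the spread of the coefficient of interest is compared with its baseline estimate.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for a state-level data set: one regressor of interest
        # plus controls (the actual abortion-crime data are not reproduced here).
        n, p = 200, 4
        X = rng.normal(size=(n, p))
        beta_true = np.array([0.8, -0.3, 0.1, 0.0])
        y = X @ beta_true + rng.normal(scale=0.5, size=n)

        def ols(X, y):
            # Ordinary least squares with an intercept, via numpy's least-squares solver.
            Xd = np.column_stack([np.ones(len(X)), X])
            return np.linalg.lstsq(Xd, y, rcond=None)[0]

        base = ols(X, y)

        # Perturbation experiment: repeatedly add noise that is small relative to each
        # column's scale, refit, and record the coefficient of interest.
        scale = 0.01 * X.std(axis=0)            # 1% perturbation, an arbitrary choice
        estimates = []
        for _ in range(500):
            Xp = X + rng.normal(size=X.shape) * scale
            estimates.append(ols(Xp, y)[1])     # coefficient on the first regressor

        estimates = np.array(estimates)
        print(f"baseline estimate: {base[1]:.4f}")
        print(f"perturbed mean +/- sd: {estimates.mean():.4f} +/- {estimates.std():.4f}")
        # A spread that is large relative to the baseline estimate would indicate the
        # kind of computational instability the paper reports.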

  6. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was first of all to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  7. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron (LEP) collider near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  8. ATLAS MDT neutron sensitivity measurement and modeling

    International Nuclear Information System (INIS)

    Ahlen, S.; Hu, G.; Osborne, D.; Schulz, A.; Shank, J.; Xu, Q.; Zhou, B.

    2003-01-01

    The sensitivity of the ATLAS precision muon detector element, the Monitored Drift Tube (MDT), to fast neutrons has been measured using a 5.5 MeV Van de Graaff accelerator. The major mechanism of neutron-induced signals in the drift tubes is elastic collision between the neutrons and the gas nuclei. The recoil nuclei lose kinetic energy in the gas and produce the signals. By measuring the ATLAS drift tube neutron-induced signal rate and the total neutron flux, the MDT neutron signal sensitivities were determined for different drift gas mixtures and for different neutron beam energies. We also developed a sophisticated simulation model to calculate the neutron-induced signal rate and signal spectrum for ATLAS MDT operation configurations. The calculations agree with the measurements very well. This model can be used to calculate the neutron sensitivities for different gaseous detectors and for neutron energies above those available to this experiment

  9. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
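
    A small illustration of the two procedures described above, using an invented three-parameter stand-in for the food-chain model rather than PATHWAY itself: parameter uncertainty is propagated by Monte Carlo sampling, and the same random sample is reused to compute partial correlation coefficients between each parameter and the model output.

        import numpy as np

        rng = np.random.default_rng(1)

        def toy_transfer_model(deposition, transfer, intake):
            # Invented stand-in for a food-chain calculation: concentration in milk
            # ~ deposition rate * soil-to-forage transfer factor * feed intake.
            return deposition * transfer * intake

        # Monte Carlo propagation of parameter uncertainty (lognormal guesses).
        n = 5000
        params = {
            "deposition": rng.lognormal(mean=0.0, sigma=0.4, size=n),
            "transfer": rng.lognormal(mean=-1.0, sigma=0.6, size=n),
            "intake": rng.lognormal(mean=2.0, sigma=0.2, size=n),
        }
        X = np.column_stack(list(params.values()))
        y = toy_transfer_model(*X.T)
        print(f"output 5th-95th percentile: {np.percentile(y, 5):.2f} to {np.percentile(y, 95):.2f}")

        def partial_correlation(X, y, i):
            # Correlate the parts of X[:, i] and y not explained by the other inputs.
            others = np.column_stack([np.ones(len(y)), np.delete(X, i, axis=1)])
            rx = X[:, i] - others @ np.linalg.lstsq(others, X[:, i], rcond=None)[0]
            ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
            return np.corrcoef(rx, ry)[0, 1]

        # Reuse the same random sample for the sensitivity (correlation) analysis.
        for i, name in enumerate(params):
            print(f"partial correlation, {name}: {partial_correlation(X, y, i):.3f}")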

  10. Retrospective evaluation of the consequence of alleged patch test sensitization

    DEFF Research Database (Denmark)

    Jensen, Charlotte D; Paulsen, Evy; Andersen, Klaus E

    2006-01-01

    The risk of actively sensitizing a patient in connection with diagnostic patch tests exists. This risk, however, is extremely low, especially from standard allergens, and if the test is carried out according to internationally accepted guidelines. This retrospective study investigates the clinical...... or available for the follow-up investigation and 3 patients were not traceable. Among the 14 remaining patients 1 had a reaction to gold sodium thiosulphate, which was assessed to be a persistent reaction and not a late reaction, and in 2 patients a clear relevance for the late reacting allergen was found....... For the remaining 11 patients we could not rule out that they were patch test sensitized, and they were investigated further. 1 was deceased and 10 were interviewed regarding the possible consequences of the alleged patch test sensitization. 9 had not experienced any dermatitis problems, and 1 could not exclude...

  11. The sensitivity of the ESA DELTA model

    Science.gov (United States)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  12. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model as compared to a conventional time series model indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced - for coal it ranges from -0.25% to +0.15%, for oil -0.6% to +0.6% and for electricity it ranges from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years and also because the confidence level decreases as the projection is made far into the future. (author)

  13. Sensitivity study on hydraulic well testing inversion using simulated annealing

    International Nuclear Information System (INIS)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, which are chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that the cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results of the Raymond test site data are also presented and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion

  14. Sensitivity study on hydraulic well testing inversion using simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, which are chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that the cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results of the Raymond test site data are also presented and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.
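
    A compact sketch of the cluster-variable-aperture annealing idea under strong simplifying assumptions: the transient pressure solver is replaced by a crude placeholder "forward model", and a Metropolis rule reassigns the aperture of randomly chosen clusters on a two-level lattice (one order of magnitude contrast) to reduce the misfit to synthetic observations.

        import numpy as np

        rng = np.random.default_rng(2)

        # Discrete aperture (transmissivity) levels with a one-order-of-magnitude
        # contrast, and a synthetic "true" fracture-element lattice.
        levels = np.array([1e-6, 1e-5])
        true_field = rng.choice(levels, size=(16, 16))

        def forward(field):
            # Placeholder forward model standing in for the transient pressure solver:
            # a few scalar "responses" built from row and column harmonic means.
            row = 1.0 / np.mean(1.0 / field, axis=1)
            col = 1.0 / np.mean(1.0 / field, axis=0)
            return np.concatenate([row, col])

        observed = forward(true_field)

        def misfit(field):
            return np.sum((np.log(forward(field)) - np.log(observed)) ** 2)

        # Cluster-variable-aperture annealing: assign a new aperture to a randomly
        # chosen square cluster and accept or reject with the Metropolis rule.
        field = rng.choice(levels, size=true_field.shape)
        cluster = 4                                # fixed cluster size, as in CVA
        temperature = 1.0
        current = misfit(field)
        print(f"initial misfit: {current:.3f}")
        for step in range(20000):
            i = rng.integers(0, field.shape[0] - cluster + 1)
            j = rng.integers(0, field.shape[1] - cluster + 1)
            trial = field.copy()
            trial[i:i + cluster, j:j + cluster] = rng.choice(levels)
            new = misfit(trial)
            if new < current or rng.random() < np.exp((current - new) / temperature):
                field, current = trial, new
            temperature *= 0.9997                  # simple geometric cooling schedule
        print(f"final misfit after annealing: {current:.3f}")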

  15. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    Full Text Available Model evaluation is often performed at a few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently from ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on different input factors such as topography or different soil types. The analysis shows that model evaluation performed at single locations may not be representative for the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and the ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to shorter duration of the snow cover. The sensitivity in the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time have an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several

  16. Sensitivity and specificity of the nickel spot (dimethylglyoxime) test.

    Science.gov (United States)

    Thyssen, Jacob P; Skare, Lizbet; Lundgren, Lennart; Menné, Torkil; Johansen, Jeanne D; Maibach, Howard I; Lidén, Carola

    2010-05-01

    The accuracy of the dimethylglyoxime (DMG) nickel spot test has been questioned because of false negative and positive test reactions. The EN 1811, a European standard reference method developed by the European Committee for Standardization (CEN), is fine-tuned to estimate nickel release around the limit value of the EU Nickel Directive from products intended to come into direct and prolonged skin contact. Because assessments according to EN 1811 are expensive to perform, time consuming, and may destroy the test item, it should be of great value to know the accuracy of the DMG screening test. To evaluate the sensitivity and specificity of the DMG test, DMG spot testing, chemical analysis according to the EN 1811 reference method, and X-ray fluorescence spectroscopy (XRF) were performed concomitantly on 96 metallic components from earrings recently purchased in San Francisco. The sensitivity of the DMG test was 59.3% and the specificity was 97.5% based on DMG-test results and nickel release concentrations determined by the EN 1811 reference method. The DMG test has a high specificity but a modest sensitivity. It may serve well for screening purposes. Past exposure studies may have underestimated nickel release from consumer items.
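
    The sensitivity and specificity figures above follow from simple confusion-matrix arithmetic; the counts in the snippet below are hypothetical, chosen only so the resulting percentages are of the same order as those reported.

        # Illustrative confusion-matrix arithmetic for a screening test judged against
        # a reference method; the counts below are hypothetical, not the study's data.
        tp, fn = 16, 11   # reference-positive items: spot test positive / negative
        tn, fp = 39, 1    # reference-negative items: spot test negative / positive

        sensitivity = tp / (tp + fn)   # probability a nickel-releasing item is detected
        specificity = tn / (tn + fp)   # probability a compliant item is cleared
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value

        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
        print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")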

  17. Low sensitivity of glucagon provocative testing for diagnosis of pheochromocytoma.

    Science.gov (United States)

    Lenders, Jacques W M; Pacak, Karel; Huynh, Thanh-Truc; Sharabi, Yehonatan; Mannelli, Massimo; Bratslavsky, Gennady; Goldstein, David S; Bornstein, Stefan R; Eisenhofer, Graeme

    2010-01-01

    Pheochromocytomas can usually be confirmed or excluded using currently available biochemical tests of catecholamine excess. Follow-up tests are, nevertheless, often required to distinguish false-positive from true-positive results. The glucagon stimulation test represents one such test; its diagnostic utility is, however, unclear. The aim of the study was to determine the diagnostic power of the glucagon test to exclude or confirm pheochromocytoma. Glucagon stimulation tests were carried out at three specialist referral centers in 64 patients with pheochromocytoma, 38 patients in whom the tumor was excluded, and in a reference group of 36 healthy volunteers. Plasma concentrations of norepinephrine and epinephrine were measured before and after glucagon administration. Several absolute and relative test criteria were used for calculating diagnostic sensitivity and specificity. Expression of the glucagon receptor was examined in pheochromocytoma tumor tissue from a subset of patients. Larger than 3-fold increases in plasma norepinephrine after glucagon strongly predicted the presence of a pheochromocytoma (100% specificity and positive predictive value). However, irrespective of the various criteria examined, glucagon-provoked increases in plasma catecholamines revealed the presence of the tumor in less than 50% of affected patients. Diagnostic sensitivity was particularly low in patients with pheochromocytomas due to von Hippel-Lindau syndrome. Tumors from these patients showed no significant expression of the glucagon receptor. The glucagon stimulation test offers insufficient diagnostic sensitivity for reliable exclusion or confirmation of pheochromocytoma. Because of this and the risk of hypertensive complications, the test should be abandoned in routine clinical practice.

  18. Pressure-Sensitive Paints Advance Rotorcraft Design Testing

    Science.gov (United States)

    2013-01-01

    The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center's Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. With PSP, each pixel recorded by the camera becomes an optical pressure
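
    The intensity-to-pressure conversion for PSP usually relies on a Stern-Volmer-type calibration, I_ref/I = A + B * (P/P_ref); the sketch below applies it to a tiny synthetic intensity-ratio image, with invented calibration coefficients A and B, and is illustrative only.

        import numpy as np

        # Stern-Volmer-type calibration commonly used for pressure-sensitive paint:
        # I_ref / I = A + B * (P / P_ref). Brighter paint (larger I) means lower
        # pressure because oxygen quenches the luminescence. A and B depend on the
        # paint and on temperature; the values here are invented for illustration.
        A, B = 0.2, 0.8
        P_ref = 101.3   # kPa, reference (wind-off) pressure

        def pressure_from_intensity(ratio_ref_over_run):
            # Convert a wind-off / wind-on intensity ratio image to pressure in kPa.
            return (ratio_ref_over_run - A) / B * P_ref

        # Example: a small synthetic "image" of intensity ratios.
        ratio = np.array([[1.00, 1.05], [0.95, 1.20]])
        print(pressure_from_intensity(ratio))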

  19. The sensitivity of patch test in patients with psoriasis

    Directory of Open Access Journals (Sweden)

    Yavuz Yeşilova

    2010-09-01

    Full Text Available Objectives: Allergic diseases play an important role in the natural course of psoriasis. Atopic sensitization and contact dermatitis are common in patients with psoriasis. Since the symptoms are prolonged in patients who are resistant to therapy, and exposure to itching and external factors is common among these patients, the effects of contact allergens on triggering psoriasis are investigated. Contact allergens have an important role in activation and remission of psoriasis. We aimed to investigate contact sensitization rates in patients with psoriasis in this study. Material and Methods: Contact sensitization was investigated with the application of the European standard series in twenty patients with psoriasis, twenty patients with contact dermatitis, and twenty healthy persons. Results: Among all study cases, the rate of at least one positive patch test reaction was 25%. The positivity rate of the patch test was 25% in patients with psoriasis, 35% in patients with contact dermatitis, and 15% in healthy persons. There were no significant differences between the groups according to sensitization to one or more allergens (p>0.05). There was no significant difference between clinical subgroups of psoriatic patients according to contact sensitization (p>0.05). The allergens detected in patients with psoriasis on patch testing were the following: phenyldiamine, potassium dichromate, nickel, and cobalt. Conclusion: We think that the patch test has a major role in the diagnosis and elimination of allergens in patients with chronic and resistant disease and with palmoplantar and flexural psoriasis.

  20. Radiation Belt Test Model

    Science.gov (United States)

    Freeman, John W.

    2000-10-01

    Rice University has developed a dynamic model of the Earth's radiation belts based on real-time data driven boundary conditions and full adiabaticity. The Radiation Belt Test Model (RBTM) successfully replicates the major features of storm-time behavior of energetic electrons: sudden commencement induced main phase dropout and recovery phase enhancement. It is the only known model to accomplish the latter. The RBTM shows the extent to which new energetic electrons introduced to the magnetosphere near the geostationary orbit drift inward due to relaxation of the magnetic field. It also shows the effects of substorm related rapid motion of magnetotail field lines for which the 3rd adiabatic invariant is violated. The radial extent of this violation is seen to be sharply delineated to a region outside of 5Re, although this distance is determined by the Hilmer-Voigt magnetic field model used by the RBTM. The RBTM appears to provide an excellent platform on which to build parameterized refinements to compensate for unknown acceleration processes inside 5Re where adiabaticity is seen to hold. Moreover, built within the framework of the MSFM, it offers the prospect of an operational forecast model for MeV electrons.

  1. Drug sensitivity testing platforms for gastric cancer diagnostics.

    Science.gov (United States)

    Lau, Vianne; Wong, Andrea Li-Ann; Ng, Christopher; Mok, Yingting; Lakshmanan, Manikandan; Yan, Benedict

    2016-02-01

    Gastric cancer diagnostics has traditionally been histomorphological and primarily the domain of surgical pathologists. Although there is an increasing usage of molecular and genomic techniques for clinical diagnostics, there is an emerging field of personalised drug sensitivity testing. In this review, we describe the various personalised drug sensitivity testing platforms and discuss the challenges facing clinical adoption of these assays for gastric cancer.

  2. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.

  3. Experimental-based Modelling and Simulation of Water Hydraulic Mechatronics Test Facilities for Motion Control and Operation in Environmental Sensitive Applications` Areas

    DEFF Research Database (Denmark)

    Conrad, Finn; Pobedza, J.; Sobczyk, A.

    2003-01-01

    The paper presents experimental-based modelling, simulation, analysis and design of water hydraulic actuators for motion control of machines, lifts, cranes and robots. The contributions include results from on-going research projects on fluid power and mechatronics based on tap water hydraulic...

  4. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high and intermediate level nuclear waste are close to realization. A number of studies demonstrate the potential barrier function of the geosphere, but also that there are many unresolved issues. A key to these problems is the possibility of gaining knowledge by model testing with experiments and of increasing confidence in models used for prediction. The sessions cover conclusions from the INTRAVAL project, experiences from integrated experimental programs and underground research laboratories as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media are addressed. (J.S.)

  5. Reliable and sensitive physical testing of elite trapeze sailors

    DEFF Research Database (Denmark)

    Bay, Jonathan; Bojsen-Møller, Jens; Nordsborg, Nikolai Baastrup

    2018-01-01

    It was investigated whether a newly developed discipline-specific test for elite-level trapeze sailors is reliable and sensitive. Furthermore, the physical demands of trapeze sailing were examined. In part 1, nine national team athletes were accustomed to a simulated sailing test, which subsequently..... 265 ± 45 W, Psailing was 54.5 ± 7.2% VO2max, 75.1 ± 3.1% HRmax and 5.8 ± 2.7 mM, respectively. However, VO2 and HR were substantially higher for periods of the race...... as peak values were 83.5 ± 11.4% and 89.9 ± 1.7% of max, respectively. In conclusion, the present test is reliable and sensitive, thus providing a sailing-specific alternative to traditional physical testing of elite trapeze sailors. Additionally, on-water racing requires moderate aerobic energy...

  6. The local lymph node assay and skin sensitization testing.

    Science.gov (United States)

    Kimber, Ian; Dearman, Rebecca J

    2010-01-01

    The mouse local lymph node assay (LLNA) is a method for the identification and characterization of skin sensitization hazards. In this context the method can be used both to identify contact allergens, and also determine the relative skin sensitizing potency as a basis for derivation of effective risk assessments.The assay is based on measurement of proliferative responses by draining lymph node cells induced following topical exposure of mice to test chemicals. Such responses are known to be causally and quantitatively associated with the acquisition of skin sensitization and therefore provide a relevant marker for characterization of contact allergic potential.The LLNA has been the subject of exhaustive evaluation and validation exercises and has been assigned Organization for Economic Cooperation and Development (OECD) test guideline 429. Herein we describe the conduct and interpretation of the LLNA.

  7. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the value of these. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user easily can load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked

  8. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

    Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the value of these. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user easily can load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several
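
    A short illustration of two of the listed measures, standardized regression coefficients (SRC) and Spearman rank correlation coefficients (RCC), applied to an invented nonlinear test function; this is a sketch of the statistics themselves, not of the EIKOS code.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def toy_model(x):
            # Nonlinear, monotone test function with three inputs of unequal influence.
            return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * np.exp(x[:, 2])

        n = 2000
        X = rng.uniform(0.0, 1.0, size=(n, 3))
        y = toy_model(X)

        # Standardized regression coefficients (SRC): regress the standardized output
        # on the standardized inputs; meaningful mainly when the model is near-linear.
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        src = np.linalg.lstsq(Xs, ys, rcond=None)[0]

        # Spearman rank correlation coefficients (RCC): robust to monotone nonlinearity.
        rcc = [stats.spearmanr(X[:, i], y)[0] for i in range(X.shape[1])]

        for i in range(3):
            print(f"x{i}: SRC = {src[i]:+.3f}, RCC = {rcc[i]:+.3f}")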

  9. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  10. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
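
    A minimal numerical illustration of a "sloppy" sensitivity spectrum, using the classic sum-of-two-exponentials fitting problem rather than one of the paper's systems-biology models: the eigenvalues of J^T J, with J the Jacobian of the model with respect to log-parameters, typically spread over several decades.

        import numpy as np

        # A classic "sloppy" fitting problem: a sum of two exponentials. J is the
        # Jacobian with respect to log-parameters, so that scales are comparable;
        # its eigenvalue spectrum typically spans many decades.
        t = np.linspace(0.0, 5.0, 50)
        theta = np.array([1.0, 0.3, 2.0, 1.3])      # amplitudes and rates (illustrative)

        def model(p):
            a1, k1, a2, k2 = p
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        def jacobian_log(p, eps=1e-6):
            # Forward-difference derivative of the model with respect to ln(p_i).
            J = np.empty((t.size, p.size))
            f0 = model(p)
            for i in range(p.size):
                dp = p.copy()
                dp[i] *= 1.0 + eps
                J[:, i] = (model(dp) - f0) / eps
            return J

        J = jacobian_log(theta)
        eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]          # descending order
        print("eigenvalues of J^T J:", eigvals)
        print("decades spanned:", np.log10(eigvals[0] / eigvals[-1]))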

  11. Sensitivity of system stability to model structure

    Science.gov (United States)

    Hosack, G.R.; Li, H.W.; Rossignol, P.A.

    2009-01-01

    A community is stable, and resilient, if the levels of all community variables can return to the original steady state following a perturbation. The stability properties of a community depend on its structure, which is the network of direct effects (interactions) among the variables within the community. These direct effects form feedback cycles (loops) that determine community stability. Although feedback cycles have an intuitive interpretation, identifying how they form the feedback properties of a particular community can be intractable. Furthermore, determining the role that any specific direct effect plays in the stability of a system is even more daunting. Such information, however, would identify important direct effects for targeted experimental and management manipulation even in complex communities for which quantitative information is lacking. We therefore provide a method that determines the sensitivity of community stability to model structure, and identifies the relative role of particular direct effects, indirect effects, and feedback cycles in determining stability. Structural sensitivities summarize the degree to which each direct effect contributes to stabilizing feedback or destabilizing feedback or both. Structural sensitivities prove useful in identifying ecologically important feedback cycles within the community structure and for detecting direct effects that have strong, or weak, influences on community stability. The approach may guide the development of management intervention and research design. We demonstrate its value with two theoretical models and two empirical examples of different levels of complexity.
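
    A small numerical analogue of the structural-sensitivity idea, for an invented three-variable community matrix: local stability is read off the eigenvalues of the interaction matrix, and the sensitivity of the stability margin to each nonzero direct effect is estimated by finite differences (this is a sketch, not the loop-based method of the paper).

        import numpy as np

        # Community (interaction) matrix for an invented three-variable system:
        # entry a[i, j] is the direct effect of variable j on variable i.
        A = np.array([
            [-1.0,  0.5,  0.0],
            [-0.8, -0.2,  0.3],
            [ 0.0, -0.4, -0.5],
        ])

        def stability_margin(M):
            # Locally stable if every eigenvalue has negative real part; the margin is
            # the largest real part (the more negative, the more stable).
            return np.max(np.linalg.eigvals(M).real)

        base = stability_margin(A)
        print(f"baseline stability margin: {base:+.3f} (stable if < 0)")

        # Numerical sensitivity of the margin to each nonzero direct effect:
        # perturb one link at a time and record the change (forward difference).
        eps = 1e-4
        for i in range(3):
            for j in range(3):
                if A[i, j] == 0.0:
                    continue
                P = A.copy()
                P[i, j] += eps
                print(f"d(margin)/d a[{i},{j}] = {(stability_margin(P) - base) / eps:+.3f}")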

  12. Sensitivity and specificity of neuropsychological tests for dementia

    African Journals Online (AJOL)

    specificity of a battery of neuropsychological tests in a sample of elderly persons living in a ... estimate of 20% prevalence for dementia in residential homes ... demographic variables, and mean neuropsychological .... on optimum balance between sensitivity and specificity (Fig. 1). ..... The lack of stratification of the sample.

  13. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we've tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-a-day forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions, and try to understand the important features that may contribute to the success of the forecast.

  14. Specificity and sensitivity assessment of selected nasal provocation testing techniques

    Directory of Open Access Journals (Sweden)

    Edyta Krzych-Fałta

    2016-12-01

    Full Text Available Introduction: Nasal provocation testing involves an allergen-specific local reaction of the nasal mucosa to the administered allergen. Aim: To determine the most objective nasal occlusion assessment technique that could be used in nasal provocation testing. Material and methods: A total of 60 subjects, including 30 patients diagnosed with allergy to common environmental allergens and 30 healthy subjects, were enrolled into the study. The method used in the study was a nasal provocation test with an allergen, with a standard dose of a control solution and an allergen (5,000 SBU/ml) administered using a calibrated atomizer into both nostrils at room temperature. The nasal mucosa response in the early phase of the allergic reaction was assessed via acoustic rhinometry, optical rhinometry, nitric oxide in nasal air, and tryptase levels in the nasal lavage fluid. Results: Levene's test was used to estimate the homogeneity of the average values, and receiver operating characteristic curves were plotted for all the methods used for assessing the nasal provocation test with an allergen. Statistically significant results were defined for p < 0.05. Of all the objective assessment techniques, the most sensitive and specific were the optical rhinometry techniques (specificity = 1, sensitivity = 1, AUC = 1, PPV = 1, NPV = 1). Conclusions: The techniques used showed significant differences between the group of patients with allergic rhinitis and the control group. Of all the objective assessment techniques, the most sensitive and specific was optical rhinometry.
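
    A brief sketch of how sensitivity, specificity and the area under the ROC curve follow from a continuous provocation-response measure; the score distributions below are synthetic and merely stand in for, e.g., an optical-rhinometry signal change.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic continuous responses to a provocation test (standing in for, e.g.,
        # an optical-rhinometry signal change); allergic subjects respond more on average.
        controls = rng.normal(loc=0.0, scale=1.0, size=30)
        allergic = rng.normal(loc=2.5, scale=1.0, size=30)

        # Sensitivity and specificity at one example decision threshold.
        threshold = 1.25
        sensitivity = (allergic >= threshold).mean()
        specificity = (controls < threshold).mean()
        print(f"threshold {threshold}: sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")

        # Area under the ROC curve without an explicit threshold sweep: AUC equals the
        # probability that a random allergic subject scores above a random control.
        greater = (allergic[:, None] > controls[None, :]).mean()
        ties = (allergic[:, None] == controls[None, :]).mean()
        print(f"AUC = {greater + 0.5 * ties:.3f}")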

  15. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for the sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates the large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power etc. The FFTBM-SM was used to assess the influence of input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) in calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as a figure of merit for the influence of an input parameter on output parameters. The idea is to use the reference calculation as the 'experimental signal', a 'sensitivity run' as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of the parameter variation to the results. They show when the input parameters are influential and how big this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
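
    A hedged sketch of the average-amplitude figure of merit with signal mirroring, applied to invented "reference" and "sensitivity run" transients; normalization conventions for FFTBM vary, and the definition used here (sum of FFT magnitudes of the difference divided by that of the reference) is one common choice, not necessarily the exact one used in the study.

        import numpy as np

        def average_amplitude(reference, variant):
            # FFTBM-style average amplitude of the discrepancy between two signals.
            # Both signals are mirrored (symmetrized) first so that the first and last
            # points coincide, removing the spurious "edge" contribution.
            def mirror(x):
                return np.concatenate([x, x[::-1]])
            delta = mirror(variant) - mirror(reference)
            ref = mirror(reference)
            return np.abs(np.fft.rfft(delta)).sum() / np.abs(np.fft.rfft(ref)).sum()

        # Toy demonstration: a "reference calculation" and two "sensitivity runs",
        # one close to the reference and one with a visibly altered transient.
        t = np.linspace(0.0, 100.0, 512)
        reference = 150.0 * np.exp(-t / 30.0) + 10.0
        run_small = reference * 1.02                      # weak parameter influence
        run_large = 150.0 * np.exp(-t / 20.0) + 10.0      # strong parameter influence

        print(f"average amplitude, small variation: {average_amplitude(reference, run_small):.4f}")
        print(f"average amplitude, large variation: {average_amplitude(reference, run_large):.4f}")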

  16. Efficient Noninferiority Testing Procedures for Simultaneously Assessing Sensitivity and Specificity of Two Diagnostic Tests

    Directory of Open Access Journals (Sweden)

    Guogen Shan

    2015-01-01

    Full Text Available Sensitivity and specificity are often used to assess the performance of a diagnostic test with binary outcomes. Wald-type test statistics have been proposed for testing sensitivity and specificity individually. In the presence of a gold standard, simultaneous comparison between two diagnostic tests for noninferiority of sensitivity and specificity based on an asymptotic approach has been studied by Chen et al. (2003). However, the asymptotic approach may suffer from unsatisfactory type I error control, as observed in many studies, especially in small to medium sample settings. In this paper, we compare three unconditional approaches for simultaneously testing sensitivity and specificity. They are approaches based on estimation, maximization, and a combination of estimation and maximization. Although the estimation approach does not guarantee the type I error rate, it has satisfactory performance with regard to type I error control. The other two unconditional approaches are exact. The approach based on estimation and maximization is generally more powerful than the approach based on maximization.
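
    A simplified sketch of the simultaneous (intersection-union) noninferiority logic using plain Wald-type statistics on hypothetical counts; it ignores the pairing of the two tests on the same subjects and the small-sample corrections that motivate the unconditional approaches compared in the paper.

        import math

        def wald_noninferiority(x_new, n_new, x_ref, n_ref, margin):
            # One-sided Wald-type noninferiority test for a proportion (e.g. sensitivity):
            # H0: p_new <= p_ref - margin  versus  H1: p_new > p_ref - margin.
            p_new, p_ref = x_new / n_new, x_ref / n_ref
            se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
            z = (p_new - p_ref + margin) / se
            return z, z > 1.6449          # one-sided 5% critical value of the normal

        # Hypothetical counts: correct classifications by the new and reference tests
        # among diseased subjects (sensitivity) and healthy subjects (specificity).
        sens = wald_noninferiority(x_new=86, n_new=100, x_ref=90, n_ref=100, margin=0.10)
        spec = wald_noninferiority(x_new=112, n_new=120, x_ref=114, n_ref=120, margin=0.10)

        # Intersection-union logic: the new test is declared noninferior overall only
        # if noninferiority holds for sensitivity AND specificity simultaneously.
        print(f"sensitivity: z = {sens[0]:.2f}, noninferior = {sens[1]}")
        print(f"specificity: z = {spec[0]:.2f}, noninferior = {spec[1]}")
        print("overall noninferiority:", sens[1] and spec[1])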

  17. Testing of new hypoxic cell sensitizers in vivo

    International Nuclear Information System (INIS)

    Stone, H.B.; Sinesi, M.S.

    1982-01-01

    We tested five agents as potential sensitizers of hypoxic cells in vivo in mammary tumors in C3H mice, in comparison with misonidazole. The LD50/2 for desmethylmisonidazole was 2.7 mg/g body wt, compared to 1.3 for misonidazole. It was as effective in reducing the TCD50 of MDAH-MCa-4 as were equitoxic doses of misonidazole. The LD50/2 of SR-2508 was 3.3 mg/g and it was as effective a sensitizer as misonidazole. Ro 07-0741 was more toxic, with an LD50/2 of 0.6 mg/g, but was as effective as misonidazole at equitoxic doses. NP-1 was also more toxic than misonidazole (LD50/2 = 0.4 mg/g) but was a less effective sensitizer. Rotenone, which causes sensitization by inhibiting cellular respiration, thus increasing the diffusion distance of oxygen, was extremely toxic (LD50/2 = 0.003 mg/g), and systemic respiratory inhibition and the radioprotective effects of the dimethyl sulfoxide used to dissolve it rendered it totally ineffective as a sensitizer in vivo

  18. Indicators of Ceriodaphnia dubia chronic toxicity test performance and sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Rosebrock, M.M.; Bedwell, N.J.; Ausley, L.W. [North Carolina Division of Environmental Management, Raleigh, NC (United States)

    1994-12-31

    The North Carolina Division of Environmental Management has begun evaluation of the sensitivity of test results used for measuring chronic whole effluent toxicity in North Carolina wastewater discharges. Approximately 67% of 565 facilities required to monitor toxicity by an NPDES permit perform a Ceriodaphnia dubia chronic, single effluent concentration (pass/fail) analysis. Data from valid Ceriodaphnia dubia chronic pass/fail tests performed by approximately 20 certified biological laboratories and submitted by North Carolina NPDES permittees were recorded beginning January 1992. Control and treatment reproduction data from over 2,500 tests submitted since 1992 were analyzed to determine the minimum significant difference (MSD) at a 99% confidence level for each test and the percent reduction from the control mean that the MSD represents (%MSD) for each certified laboratory. Initial results for the 20 laboratories indicate that the average intralaboratory percent MSD ranges from 12.72% (n = 367) to 34.91% (n = 7) with an average of 23.08%. Additionally, over 3,800 tests were analyzed to determine the coefficient of variation (CV) for control reproduction for each test and the average for each certified biological laboratory. Preliminary review indicates that average interlaboratory control reproduction CV values range from 10.59% (n = 367) to 31.08% (n = 572) with a mean of 20.35%. The statistics investigated are indicators of intra/interlaboratory performance and sensitivity of Ceriodaphnia chronic toxicity analyses.
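
    A small illustration of the two indicator statistics on hypothetical reproduction data, with the MSD computed from a one-sided two-sample t comparison at the 99% confidence level; the regulatory procedure may prescribe a different statistical test, so this is indicative only.

        import numpy as np
        from scipy import stats

        # Hypothetical Ceriodaphnia dubia reproduction data (neonates per female) for a
        # control and a single effluent concentration, as in the pass/fail design.
        control = np.array([28, 31, 25, 30, 27, 29, 32, 26, 30, 28], dtype=float)
        effluent = np.array([24, 27, 22, 26, 25, 23, 28, 24, 26, 25], dtype=float)

        # Coefficient of variation of control reproduction, one performance indicator.
        cv = control.std(ddof=1) / control.mean() * 100.0

        # Minimum significant difference (MSD) for a one-sided two-sample t comparison
        # at the 99% confidence level: the smallest reduction from the control mean the
        # test could have declared significant. %MSD expresses it relative to the control.
        n1, n2 = len(control), len(effluent)
        sp2 = ((n1 - 1) * control.var(ddof=1) + (n2 - 1) * effluent.var(ddof=1)) / (n1 + n2 - 2)
        msd = stats.t.ppf(0.99, df=n1 + n2 - 2) * np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
        pct_msd = msd / control.mean() * 100.0

        print(f"control CV = {cv:.1f}%")
        print(f"MSD = {msd:.2f} neonates, %MSD = {pct_msd:.1f}% of the control mean")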

  19. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were consistently robust enough to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject compared to between-subject variability was low, and there was substantial strength of agreement between repeated induction sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small area' (1st quartile) and 'large area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems to be a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.

  20. High sensitive quench detection method using an integrated test wire

    International Nuclear Information System (INIS)

    Fevrier, A.; Tavergnier, J.P.; Nithart, H.; Kiblaire, M.; Duchateau, J.L.

    1981-01-01

    A highly sensitive quench detection method which works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference in voltages at the superconducting winding terminals and at the terminals of a secondary winding strongly coupled to the primary. The secondary winding could consist of a 'zero-current strand' of the superconducting cable not connected to one of the winding terminals or an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field

  1. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analyses of Small Break Loss of Coolant Accidents with RELAP5, it is found that the resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  2. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  3. Development, Testing, and Sensitivity and Uncertainty Analyses of a Transport and Reaction Simulation Engine (TaRSE) for Spatially Distributed Modeling of Phosphorus in South Florida Peat Marsh Wetlands

    Science.gov (United States)

    Jawitz, James W.; Munoz-Carpena, Rafael; Muller, Stuart; Grace, Kevin A.; James, Andrew I.

    2008-01-01

    in the phosphorus cycling mechanisms were simulated in these case studies using different combinations of phosphorus reaction equations. Changes in water column phosphorus concentrations observed under the controlled conditions of laboratory incubations, and mesocosm studies were reproduced with model simulations. Short-term phosphorus flux rates and changes in phosphorus storages were within the range of values reported in the literature, whereas unknown rate constants were used to calibrate the model output. In STA-1W Cell 4, the dominant mechanism for phosphorus flow and transport is overland flow. Over many life cycles of the biological components, however, soils accrue and become enriched in phosphorus. Inflow total phosphorus concentrations and flow rates for the period between 1995 and 2000 were used to simulate Cell 4 phosphorus removal, outflow concentrations, and soil phosphorus enrichment over time. This full-scale application of the model successfully incorporated parameter values derived from the literature and short-term experiments, and reproduced the observed long-term outflow phosphorus concentrations and increased soil phosphorus storage within the system. A global sensitivity and uncertainty analysis of the model was performed using modern techniques such as a qualitative screening tool (Morris method) and the quantitative, variance-based, Fourier Amplitude Sensitivity Test (FAST) method. These techniques allowed an in-depth exploration of the effect of model complexity and flow velocity on model outputs. Three increasingly complex levels of possible application to southern Florida were studied corresponding to a simple soil pore-water and surface-water system (level 1), the addition of plankton (level 2), and of macrophytes (level 3). In the analysis for each complexity level, three surface-water velocities were considered that each correspond to residence times for the selected area (1-kilometer long) of 2, 10, and 20
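
    A compact stand-alone sketch of the Morris elementary-effects screen named above, in a simplified radial one-at-a-time form on an invented three-parameter function; the real TaRSE model and the variance-based FAST analysis are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(5)

        def toy_model(x):
            # Stand-in for one model output (e.g. water-column phosphorus concentration)
            # as a function of three normalized parameters; not the TaRSE equations.
            return x[0] + 2.0 * x[1] ** 2 + 0.05 * np.sin(6.0 * x[2]) + x[0] * x[1]

        def morris_screen(model, k, r=50, delta=0.25):
            # Simplified radial one-at-a-time Morris screen on the unit hypercube.
            # Returns mu* (mean absolute elementary effect, overall importance) and
            # sigma (standard deviation, flagging nonlinearity or interactions).
            effects = np.empty((r, k))
            for trajectory in range(r):
                x = rng.uniform(0.0, 1.0 - delta, size=k)   # keep x + delta inside [0, 1]
                f0 = model(x)
                for i in range(k):
                    xp = x.copy()
                    xp[i] += delta
                    effects[trajectory, i] = (model(xp) - f0) / delta
            return np.abs(effects).mean(axis=0), effects.std(axis=0)

        mu_star, sigma = morris_screen(toy_model, k=3)
        for i in range(3):
            print(f"parameter {i}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
        # Parameters with small mu* could be fixed before running the more expensive
        # variance-based FAST analysis on the remaining ones.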

  4. High sensitivity pyrogen testing in water and dialysis solutions.

    Science.gov (United States)

    Daneshian, Mardas; Wendel, Albrecht; Hartung, Thomas; von Aulock, Sonja

    2008-07-20

    The dialysis patient is confronted with hundreds of litres of dialysis solution per week, which pass the natural protective barriers of the body and are brought into contact with the tissue directly in the case of peritoneal dialysis or indirectly in the case of renal dialysis (hemodialysis). The components can be tested for living specimens or dead pyrogenic (fever-inducing) contaminations. The former is usually detected by cultivation and the latter by the endotoxin-specific Limulus Amoebocyte Lysate Assay (LAL). However, the LAL assay does not reflect the response of the human immune system to the wide variety of possible pyrogenic contaminations in dialysis fluids. Furthermore, the test is limited in its sensitivity to detect extremely low concentrations of pyrogens, which in their sum result in chronic pathologies in dialysis patients. The In vitro Pyrogen Test (IPT) employs human whole blood to detect the spectrum of pyrogens to which humans respond by measuring the release of the endogenous fever mediator interleukin-1beta. Spike recovery checks exclude interference. The test has been validated in an international study for pyrogen detection in injectable solutions. In this study we adapted the IPT to the testing of dialysis solutions. Preincubation of 50 ml spiked samples with albumin-coated microspheres enhanced the sensitivity of the assay to detect contaminations down to 0.1 pg/ml LPS or 0.001 EU/ml in water or saline and allowed pyrogen detection in dialysis concentrates or final working solutions. This method offers high sensitivity detection of human-relevant pyrogens in dialysis solutions and components.

  5. Field test investigation of high sensitivity fiber optic seismic geophone

    Science.gov (United States)

    Wang, Meng; Min, Li; Zhang, Xiaolei; Zhang, Faxiang; Sun, Zhihui; Li, Shujuan; Wang, Chang; Zhao, Zhong; Hao, Guanghu

    2017-10-01

    Seismic reflection, whose measured signal is artificially generated seismic waves, is the most effective and most widely used method in geophysical prospecting, and it can be used for the exploration of oil, gas and coal. When a seismic wave travelling through the Earth encounters an interface between two materials with different acoustic impedances, some of the wave energy will reflect off the interface and some will refract through the interface. At its most basic, the seismic reflection technique consists of generating seismic waves and measuring the time taken for the waves to travel from the source, reflect off an interface and be detected by an array of geophones at the surface. Compared to traditional geophones such as electric, magnetic, mechanical and gas geophones, optical fiber geophones have many advantages. Optical fiber geophones can achieve sensing and signal transmission simultaneously. With the development of fiber grating sensor technology, the fiber Bragg grating (FBG) is being applied in seismic exploration and is drawing more and more attention because of its advantages of anti-electromagnetic interference, high sensitivity and insensitivity to meteorological conditions. In this paper, we designed a high sensitivity geophone and tested its sensitivity, based on the theory of FBG sensing. The frequency response range is from 10 Hz to 100 Hz and the acceleration sensitivity of the fiber optic seismic geophone is over 1000 pm/g. A sixteen-element fiber optic seismic geophone array system is presented, and a field test was performed in the Shengli oilfield of China. The field test shows that: (1) the fiber optic seismic geophone has a higher sensitivity than the traditional geophone between 1 and 100 Hz; (2) the low-frequency reflection wave continuity of the fiber Bragg grating geophone is better.

  6. Leakage localisation method in a water distribution system based on sensitivity matrix: methodology and real test

    OpenAIRE

    Pascual Pañach, Josep

    2010-01-01

    Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented. It uses pressure measurements and simulation models. The leakage localisation methodology is based on the pressure sensitivity matrix. Sensitivity is normalised and binarised using a common threshold for all nodes, so a signature matrix is obtained. A methodology for the optimal distribution of pressure sensors is developed too, but it is not used in the real test. To validate this...
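    A compact sketch of the signature-matrix idea described above might look like the following. The network size, the random sensitivity values, the threshold and the simple best-match rule are all placeholders for illustration; a real application would obtain the sensitivity matrix from hydraulic simulations of the actual network.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Sensitivity matrix S[i, j]: pressure change at node i caused by a leak at node j,
    # normally obtained from hydraulic simulations of the distribution network model.
    n_nodes = 6
    S = rng.uniform(0.0, 1.0, size=(n_nodes, n_nodes))

    # Normalise each column and binarise with one common threshold to get leak signatures.
    S_norm = S / S.max(axis=0, keepdims=True)
    threshold = 0.5
    signatures = (S_norm >= threshold).astype(int)

    # Observed residuals (measured minus simulated pressures), binarised the same way.
    residual = rng.uniform(0.0, 1.0, size=n_nodes)
    observed = (residual / residual.max() >= threshold).astype(int)

    # Candidate leak node = signature column that best matches the observed pattern.
    matches = (signatures == observed[:, None]).sum(axis=0)
    print("most likely leak node:", int(np.argmax(matches)))
    ```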

  7. Sensitivity of MRQAP tests to collinearity and autocorrelation conditions

    NARCIS (Netherlands)

    Dekker, David; Krackhardt, David; Snijders, Tom A. B.

    2007-01-01

    Multiple regression quadratic assignment procedures (MRQAP) tests are permutation tests for multiple linear regression model coefficients for data organized in square matrices of relatedness among n objects. Such a data structure is typical in social network studies, where variables indicate some
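    The core ingredient of such tests is the quadratic assignment permutation, in which rows and columns of a relational matrix are permuted with the same permutation so that the dyadic dependence structure is preserved. A minimal single-predictor QAP correlation test (the building block of MRQAP, with simulated matrices rather than real network data) can be sketched as:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def offdiag(m):
        """Return the off-diagonal entries of a square matrix as a vector."""
        mask = ~np.eye(m.shape[0], dtype=bool)
        return m[mask]

    n = 20
    X = rng.normal(size=(n, n))                    # predictor relation among n objects
    Y = 0.6 * X + rng.normal(size=(n, n))          # outcome relation among n objects

    obs = np.corrcoef(offdiag(X), offdiag(Y))[0, 1]

    # QAP: permute rows and columns of Y with the SAME permutation, preserving the
    # row/column dependence that ordinary element-wise permutation tests would break.
    n_perm = 2000
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        Yp = Y[np.ix_(p, p)]
        if abs(np.corrcoef(offdiag(X), offdiag(Yp))[0, 1]) >= abs(obs):
            count += 1

    print(f"observed r = {obs:.3f}, QAP two-sided p = {(count + 1) / (n_perm + 1):.4f}")
    ```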

  8. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.

  9. Sensitivity of BWR shutdown margin tests to local reactivity anomalies

    International Nuclear Information System (INIS)

    Cokinos, D.M.; Carew, J.F.

    1987-01-01

    Successful shutdown margin (SDM) demonstration is a required procedure in the startup of a newly configured boiling water reactor (BWR) core. In its most reactive condition throughout a cycle, a BWR core must be capable of being made subcritical by a specified margin with the highest worth control rod fully withdrawn and all other rods at their fully inserted positions. Two different methods are used to demonstrate SDM: (a) the adjacent-rod test and (b) the in-sequence test. In the adjacent-rod test, the strongest rod is fully withdrawn and an adjacent rod is withdrawn to reach criticality. In the in-sequence test, control rods spread throughout the core are withdrawn in a predetermined sequence of withdrawals. Larger than expected core k_eff values have been observed during the performance of BWR SDM tests. The purpose of the work summarized in this paper has been to investigate and quantify the sensitivity of both the adjacent-rod and in-sequence SDM tests to local reactivity anomalies. This was accomplished by introducing reactivity perturbations at selected four-bundle cell locations and by evaluating their effect on core reactivity in each of the two tests.

  10. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol' treatment of the measure of importance is the most general, his formalism is employed throughout this paper where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is both flexible, accurate and informative, and which can be achieved at reasonable computational cost
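    The first-order and "total effect" indices discussed in this record are nowadays most often estimated with sample-based formulas rather than Sobol's original decomposition. A minimal numpy sketch using the common Saltelli/Jansen estimators on a standard toy function (the Ishigami function, chosen here for illustration, not taken from the paper) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def f(x):
        # Ishigami test function: nonlinear with interactions, a common SA benchmark.
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    k, n = 3, 50_000
    A = rng.uniform(-np.pi, np.pi, size=(n, k))
    B = rng.uniform(-np.pi, np.pi, size=(n, k))

    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                                # A with column i taken from B
        fABi = f(ABi)
        S_i = np.mean(fB * (fABi - fA)) / var_y            # first-order index (Saltelli-type estimator)
        ST_i = 0.5 * np.mean((fA - fABi) ** 2) / var_y     # total-effect index (Jansen-type estimator)
        print(f"x{i + 1}: S = {S_i:.3f}, ST = {ST_i:.3f}")
    ```

    The gap between ST and S for a given input is exactly the "synergetic" (interaction) contribution the abstract refers to.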

  11. Modelling survival: exposure pattern, species sensitivity and uncertainty.

    Science.gov (United States)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G

    2016-07-06

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
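    For orientation only, the stochastic-death reduced variant often used within this framework (GUTS-RED-SD) tracks a scaled damage state that follows the exposure and converts damage above a threshold into a hazard rate; survival is the exponential of the integrated hazard. The sketch below is schematic: the parameter values and the pulsed exposure profile are invented, not the paper's calibrations.

    ```python
    import numpy as np

    # Schematic GUTS-RED-SD: dD/dt = kd (C(t) - D); h = kk * max(D - z, 0) + hb; S = exp(-int h dt)
    kd, z, kk, hb = 0.4, 2.0, 0.15, 0.01   # dominant rate constant, threshold, killing rate, background hazard (assumed)

    def exposure(t):
        # Pulsed exposure profile: two 2-day pulses of concentration 5 (arbitrary units).
        return 5.0 if (0 <= t < 2) or (7 <= t < 9) else 0.0

    dt, t_end = 0.01, 14.0
    times = np.arange(0.0, t_end, dt)
    D, H = 0.0, 0.0                         # scaled damage and cumulative hazard
    survival = []
    for t in times:
        D += dt * kd * (exposure(t) - D)    # damage tracks the time-variable exposure
        H += dt * (kk * max(D - z, 0.0) + hb)
        survival.append(np.exp(-H))

    print(f"predicted survival after {t_end:.0f} d: {survival[-1]:.3f}")
    ```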

  12. 78 FR 68076 - Request for Information on Alternative Skin Sensitization Test Methods and Testing Strategies and...

    Science.gov (United States)

    2013-11-13

    ... Laboratory for Alternatives to Animal Testing (EURL ECVAM), and by the industry organization Cosmetics Europe... products. Pesticides and other marketed chemicals, including cosmetic ingredients, are routinely tested for... sensitization. NICEATM collaboration with industry scientists to develop an open-source Bayesian network as an...

  13. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
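    The Latin hypercube sampling used for such a parameter study takes only a few lines of numpy. The five shock-tube inputs and their ranges below are placeholders, not the actual CTH setup; the function simply stratifies each input into n intervals and shuffles the strata independently per column.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, rng):
        """One stratified sample per row; each column's strata are independently permuted."""
        d = len(bounds)
        u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
        for j in range(d):                       # shuffle strata independently per input
            u[:, j] = u[rng.permutation(n_samples), j]
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        return lo + u * (hi - lo)

    rng = np.random.default_rng(5)
    bounds = [(1.0e6, 2.0e6),   # driver pressure [Pa]        (placeholder range)
              (1.0, 2.0),       # driver density  [kg/m^3]
              (8.0e4, 1.2e5),   # test-section pressure [Pa]
              (2.0, 4.0),       # test-section density [kg/m^3]
              (0.4, 0.6)]       # He mole fraction
    samples = latin_hypercube(100, bounds, rng)
    print(samples.shape)        # (100, 5) design ready to feed to the hydrocode
    ```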

  14. Sensitive Superconducting Gravity Gradiometer Constructed with Levitated Test Masses

    Science.gov (United States)

    Griggs, C. E.; Moody, M. V.; Norton, R. S.; Paik, H. J.; Venkateswara, K.

    2017-12-01

    We demonstrate basic operations of a two-component superconducting gravity gradiometer (SGG) that is constructed with a pair of magnetically levitated test masses coupled to superconducting quantum-interference devices. A design is presented that gives a potential sensitivity of 1.4 × 10^-4 E Hz^-1/2 (1 E ≡ 10^-9 s^-2) in the frequency band of 1 to 50 mHz, and better than 2 × 10^-5 E Hz^-1/2 between 0.1 and 1 mHz, for a compact tensor SGG that fits within a 22-cm-diameter sphere. The SGG has the capability of rejecting the platform acceleration and jitter in all 6 degrees of freedom to one part in 10^9. Such an instrument has applications in precision tests of fundamental laws of physics, earthquake early warning, and gravity mapping of Earth and the planets.

  15. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited as the prior distribution, π(p), based on geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10^-2). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10^-3, 10^-2, and 10^-1). The estimate of the hazard is 1.39 × 10^-3, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.

  16. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
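    For the local approach described in the review, a common recipe is the normalized finite-difference sensitivity coefficient around a nominal operating point, (Δy/y)/(Δp/p). The two-parameter steady-state toy model below is an assumption made purely for illustration, not a model from the review.

    ```python
    import numpy as np

    def model_output(params):
        # Toy "pathway" output: steady-state level set by a production and a degradation rate.
        k_prod, k_deg = params
        return k_prod / k_deg

    nominal = np.array([2.0, 0.5])
    y0 = model_output(nominal)

    rel_step = 0.01
    for i, name in enumerate(["k_prod", "k_deg"]):
        perturbed = nominal.copy()
        perturbed[i] *= 1.0 + rel_step
        dy = model_output(perturbed) - y0
        # Normalized local sensitivity: relative output change per relative parameter change.
        S = (dy / y0) / rel_step
        print(f"{name}: normalized local sensitivity = {S:.3f}")
    ```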

  17. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    The problem of derivation and calculation of sensitivity functions for all parameters of the mass balance reduced model of the COST benchmark activated sludge plant is formulated and solved. The sensitivity functions, equations and augmented sensitivity state space models are derived for the cases of ASM1 and UCT ...

  18. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  19. State of the art in non-animal approaches for skin sensitization testing: from individual test methods towards testing strategies.

    Science.gov (United States)

    Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J

    2016-12-01

    The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress is made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods address mostly one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1)-protein binding, KE2-keratinocyte activation, KE3-dendritic cell activation and KE4-T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurate than the currently used animal test. A few defined approaches are developed to provide an

  20. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security...

  1. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
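    On the sampling side, the correlation-based measures mentioned here reduce to a few lines once a matrix of sampled inputs and the corresponding responses exist. The toy response below is an assumption standing in for code output such as BISON's, not real benchmark data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # Stand-in for a 300-sample design: 4 inputs, one response of interest.
    X = rng.normal(size=(300, 4))
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 3 + rng.normal(scale=0.5, size=300)

    for i in range(X.shape[1]):
        pearson = stats.pearsonr(X[:, i], y)[0]     # linear association
        spearman = stats.spearmanr(X[:, i], y)[0]   # monotonic (rank) association
        print(f"input {i}: Pearson = {pearson:+.3f}, Spearman = {spearman:+.3f}")
    ```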

  2. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field and is especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  3. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied to models with parameters represented by the uniform probability distribution function only, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters have been investigated. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis
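    A stripped-down version of the classical FAST machinery (uniform inputs only, one hand-picked frequency set, no resampling and no random phase shifts) can convey the mechanics behind the search curves and Fourier amplitudes discussed above. The frequencies, harmonic count and toy function are assumptions for illustration; in practice the frequency set must be chosen to avoid interference among harmonics.

    ```python
    import numpy as np

    def ishigami(x):
        # Common SA benchmark function, used here only as a stand-in for a real model.
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    omega = np.array([11, 35, 59])   # distinct integer frequencies, one per input (assumed interference-free up to M harmonics)
    M = 4                            # number of harmonics summed per input
    N = 2 * M * omega.max() + 1      # enough points to resolve the highest harmonic
    s = np.pi * (2.0 * np.arange(N) + 1.0) / N - np.pi   # evenly spaced points over (-pi, pi)

    # Search curve: each uniform input on [-pi, pi] is driven by its own frequency.
    x = 2.0 * np.arcsin(np.sin(np.outer(s, omega)))
    y = ishigami(x)

    spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
    total = spectrum[1:].sum()
    for i, w in enumerate(omega):
        partial = sum(spectrum[h * w] for h in range(1, M + 1))
        print(f"x{i + 1}: first-order FAST index ~ {partial / total:.3f}")
    ```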

  4. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the sensitivity analysis method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an

  5. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

    Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian updating scheme adjusts an ensemble of sensitivity measures. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of the LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and their updating, when a new piece of evidence arrives, by a Bayesian scheme based on the Bradley–Terry model for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).
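    The distance-type sensitivity measures named in the highlights are straightforward to evaluate once the output pdfs are approximated on a grid. The two Gaussian mixtures below merely stand in for the fitted FMMs of baseline and perturbed code output; they are not taken from the MELCOR application.

    ```python
    import numpy as np
    from scipy import stats

    x = np.linspace(-10.0, 25.0, 4000)
    dx = x[1] - x[0]

    # Two mixture densities approximating the output pdf before/after an input change (assumed).
    p = 0.6 * stats.norm.pdf(x, 2.0, 1.0) + 0.4 * stats.norm.pdf(x, 8.0, 2.0)
    q = 0.5 * stats.norm.pdf(x, 3.0, 1.5) + 0.5 * stats.norm.pdf(x, 9.0, 2.5)
    p /= p.sum() * dx
    q /= q.sum() * dx

    hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)
    kl = np.sum(p * np.log((p + 1e-300) / (q + 1e-300))) * dx   # KL(p || q)

    print(f"Hellinger distance: {hellinger:.3f}")
    print(f"KL divergence:      {kl:.3f}")
    ```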

  6. Beam test of the 2D position sensitive neutron detector

    International Nuclear Information System (INIS)

    Tian Lichao; Chen Yuanbo; Sun Zhijia; Tang Bin; Zhou Jianrong; Qi Huirong; Liu Rongguang; Zhang Jian; Yang Guian; Xu Hong

    2014-01-01

    China Spallation Neutron Source (CSNS), one of the major scientific facilities of the national Eleventh Five-Year Plan, is under construction, and three spectrometers will be built in the first phase of the project. A 2D position sensitive neutron detector has been constructed for the Multifunctional Reflectometer (MR) at the Institute of High Energy Physics (IHEP). The basic operation principle of the detector and its test on the residual stress diffractometer of the Chinese Advanced Research Reactor (CARR) at the China Institute of Atomic Energy (CIAE) are introduced in this paper. The results show that it has a good position resolution of 1.18 mm (FWHM) for neutrons of 1.37 Å and 2D imaging ability, which is consistent with theory. It can satisfy the requirements of the MR and lays the foundation for the construction of larger neutron detectors. (authors)

  7. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and vertical distribution are crucial to the simulated concentration and deposition fields, as is also the choice of Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and the Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence was also evident over the Mediterranean, the North Sea and the Baltic Sea, and some influence is seen over continental Europe, while this difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3/OH oxidation mechanism. The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  8. Local defect resonance for sensitive non-destructive testing

    Science.gov (United States)

    Adebahr, W.; Solodov, I.; Rahammer, M.; Gulnizkij, N.; Kreutzbruck, M.

    2016-02-01

    Ultrasonic wave-defect interaction is the background of ultrasound-activated techniques for imaging and non-destructive testing (NDT) of materials and industrial components. The interaction primarily results in an acoustic response of a defect, which provides the attenuation and scattering of ultrasound used as an indicator of defects in conventional ultrasonic NDT. The derivative ultrasound-induced effects include, e.g., nonlinear, thermal and acousto-optic responses, which are also applied for NDT and defect imaging. These secondary effects are normally relatively inefficient, so the corresponding NDT techniques require an elevated acoustic power and stand out from conventional ultrasonic NDT counterparts for their specific instrumentation particularly adapted to high-power ultrasonics. In this paper, a consistent way to enhance ultrasonic, optical and thermal defect responses, and thus to reduce the ultrasonic power required, is suggested by using selective ultrasonic activation of defects based on the concept of local defect resonance (LDR). A strong increase in vibration amplitude at LDR enables the defect to be reliably detected and visualized as soon as the driving ultrasonic frequency is matched to the LDR frequency. This also provides a high frequency selectivity of LDR-based imaging, i.e. an opportunity to detect a certain defect among a multitude of other defects in the material. Some examples show how to use LDR in non-destructive testing techniques, such as vibrometry, ultrasonic thermography and shearography, in order to enhance the sensitivity of defect visualization.

  9. The Hug-up Test: A New, Sensitive Diagnostic Test for Supraspinatus Tears

    Directory of Open Access Journals (Sweden)

    Yu-Lei Liu

    2016-01-01

    Full Text Available Background: The supraspinatus tendon is the most commonly affected tendon in rotator cuff tears. Early detection of a supraspinatus tear using an accurate physical examination is, therefore, important. However, the currently used physical tests for detecting supraspinatus tears are poor diagnostic indicators and involve a wide range of sensitivity and specificity values. Therefore, the aim of this study was to establish a new physical test for the diagnosis of supraspinatus tears and evaluate its accuracy in comparison with conventional tests. Methods: Between November 2012 and January 2014, 200 consecutive patients undergoing shoulder arthroscopy were prospectively evaluated preoperatively. The hug-up test, empty can (EC) test, full can (FC) test, Neer impingement sign, and Hawkins-Kennedy impingement sign were used and compared statistically for their accuracy in terms of supraspinatus tears, with arthroscopic findings as the gold standard. Muscle strength was precisely quantified using an electronic digital tensiometer. Results: The prevalence of supraspinatus tears was 76.5%. The hug-up test demonstrated the highest sensitivity (94.1%), with a low negative likelihood ratio (NLR, 0.08) and comparable specificity (76.6%) compared with the other four tests. The area under the receiver operating characteristic curve for the hug-up test was 0.854, with no statistical difference compared with the EC test (z = 1.438, P = 0.075) or the FC test (z = 1.498, P = 0.067). The hug-up test showed no statistical difference in terms of detecting different tear patterns according to the position (χ² = 0.578, P = 0.898) and size (Fisher's exact test, P > 0.999) compared with the arthroscopic examination. The interobserver reproducibility of the hug-up test was high, with a kappa coefficient of 0.823. Conclusions: The hug-up test can accurately detect supraspinatus tears with a high sensitivity, comparable specificity, and low NLR compared with the conventional
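    The accuracy figures reported for such physical tests come from simple 2x2 counts against the arthroscopic gold standard. The small helper below uses hypothetical counts chosen only to be consistent with the summary statistics quoted in the abstract (they are not the study's actual tabulated data) and shows how sensitivity, specificity and the likelihood ratios relate.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        nlr = (1.0 - sensitivity) / specificity      # negative likelihood ratio
        plr = sensitivity / (1.0 - specificity)      # positive likelihood ratio
        return sensitivity, specificity, nlr, plr

    # Hypothetical counts for a physical test vs. arthroscopy in 200 patients.
    se, sp, nlr, plr = diagnostic_metrics(tp=144, fp=11, fn=9, tn=36)
    print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}, NLR = {nlr:.2f}, PLR = {plr:.2f}")
    ```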

  10. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Modelling the pile load test

    Directory of Open Access Journals (Sweden)

    Prekop Ľubomír

    2017-01-01

    Full Text Available This paper deals with the modelling of the load test of the horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is loaded by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were obtained from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from the experiment.

  12. Modelling the pile load test

    OpenAIRE

    Prekop Ľubomír

    2017-01-01

    This paper deals with the modelling of the load test of the horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is loaded by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were obtained from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from the exper...

  13. Evaluating sub-national building-energy efficiency policy options under uncertainty: Efficient sensitivity testing of alternative climate, technological, and socioeconomic futures in a regional integrated-assessment model

    International Nuclear Information System (INIS)

    Scott, Michael J.; Daly, Don S.; Zhou, Yuyu; Rice, Jennie S.; Patel, Pralit L.; McJeon, Haewon C.; Page Kyle, G.; Kim, Son H.; Eom, Jiyong

    2014-01-01

    Improving the energy efficiency of building stock, commercial equipment, and household appliances can have a major positive impact on energy use, carbon emissions, and building services. Sub-national regions such as the U.S. states wish to increase energy efficiency, reduce carbon emissions, or adapt to climate change. Evaluating sub-national policies to reduce energy use and emissions is difficult because of the large uncertainties in socioeconomic factors, technology performance and cost, and energy and climate policies. Climate change itself may undercut such policies. However, assessing all of the uncertainties of large-scale energy and climate models by performing thousands of model runs can be a significant modeling effort with its accompanying computational burden. By applying fractional–factorial methods to the GCAM-USA 50-state integrated-assessment model in the context of a particular policy question, this paper demonstrates how a decision-focused sensitivity analysis strategy can greatly reduce computational burden in the presence of uncertainty and reveal the important drivers for decisions and more detailed uncertainty analysis. - Highlights: • We evaluate building energy codes and standards for climate mitigation. • We use an integrated assessment model and fractional factorial methods. • Decision criteria are energy use, CO2 emitted, and building service cost. • We demonstrate sensitivity analysis for three states. • We identify key variables to propagate with Monte Carlo or surrogate models
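    A two-level fractional-factorial screening design of the kind referenced here can be generated directly from a full factorial in a base set of factors plus generator columns for the remaining factors. The 2^(7-4) layout and the GCAM-style factor labels below are only an illustration; they are not the study's actual inputs or generators.

    ```python
    import itertools
    import numpy as np

    # Full 2^3 factorial in three base factors (coded -1 / +1).
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = base[:, 0], base[:, 1], base[:, 2]

    # 2^(7-4) design: four additional factors aliased with interactions (generators D=AB, E=AC, F=BC, G=ABC).
    D, E, F, G = A * B, A * C, B * C, A * B * C
    design = np.column_stack([A, B, C, D, E, F, G])

    factors = ["pop_growth", "gdp_growth", "tech_cost", "climate_sens",
               "energy_price", "code_stringency", "floorspace"]   # hypothetical inputs
    print("run  " + "  ".join(f"{f[:7]:>7}" for f in factors))
    for r, row in enumerate(design, start=1):
        print(f"{r:>3}  " + "  ".join(f"{v:>7d}" for v in row))
    ```

    Eight runs then span seven two-level factors, which is the kind of run-count saving the abstract points to when full uncertainty propagation is too expensive.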

  14. Noise sensitivity and diminished health: Testing moderators and mediators of the relationship

    Directory of Open Access Journals (Sweden)

    Erin M Hill

    2014-01-01

    Full Text Available The concept of noise sensitivity emerged in public health and psychoacoustic research to help explain individual differences in reactions to noise. Noise sensitivity has been associated with health problems, but the mechanisms underlying this relationship have yet to be fully examined. Participants (n = 1102) were residents of Auckland, New Zealand, who completed questionnaires and returned them through the post. Models of noise sensitivity and health were tested in the analyses using bootstrapping methods to examine indirect effects. Results indicated that gender and noise exposure were not significant moderators in the model. Perceived stress and sleep problems were significant mediators of the relationship between noise sensitivity and subjective health complaints, even after controlling for the influence of neuroticism. However, the relationship between noise sensitivity and mental health complaints (anxiety and depression) was accounted for by the variance explained by neuroticism. Overall, this study provides considerable understanding of the relationship between noise sensitivity and health problems and identifies areas for further research in the field.

  15. The Sensitivity of State Differential Game Vessel Traffic Model

    Directory of Open Access Journals (Sweden)

    Lisowski Józef

    2016-04-01

    Full Text Available The paper presents the application of the theory of deterministic sensitivity of control systems to the sensitivity analysis of game control systems of moving objects, such as ships, airplanes and cars. The sensitivity of a parametric model of the game ship control process in collision situations is presented. First-order and k-th order sensitivity functions of the parametric model of the control process are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained from computer simulation in Matlab/Simulink software, are presented. Finally, proposals are given regarding the use of sensitivity analysis in the practical synthesis of a computer-aided system supporting the navigator in potential collision situations.

  16. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
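    The deterministic branch described here boils down to a first-order Taylor (delta-method) propagation: partial derivatives at the nominal point, then an output variance built from the input variances. The toy response function and input uncertainties below are assumptions for illustration, not a performance-assessment model.

    ```python
    import numpy as np

    def response(p):
        # Toy stand-in for a performance-assessment output, e.g. a release rate.
        k, v, r = p
        return k * np.exp(-v / r)

    nominal = np.array([1.0e-3, 2.0, 5.0])
    sigma = np.array([2.0e-4, 0.3, 1.0])          # input standard deviations (assumed independent)

    # Central finite-difference partial derivatives at the nominal point.
    grad = np.zeros_like(nominal)
    h = 1e-6 * np.abs(nominal)
    for i in range(len(nominal)):
        p_hi, p_lo = nominal.copy(), nominal.copy()
        p_hi[i] += h[i]
        p_lo[i] -= h[i]
        grad[i] = (response(p_hi) - response(p_lo)) / (2.0 * h[i])

    mean_y = response(nominal)
    var_y = np.sum((grad * sigma) ** 2)           # first-order Taylor, uncorrelated inputs
    print(f"output mean ~ {mean_y:.3e}, std ~ {np.sqrt(var_y):.3e}")
    print("variance shares per input:", (grad * sigma) ** 2 / var_y)
    ```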

  17. Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling

    National Research Council Canada - National Science Library

    Sakhanenko, Nikita A; Luger, George F

    2008-01-01

    We propose a context-sensitive probabilistic modeling system (COSMOS) that reasons about a complex, dynamic environment through a series of applications of smaller, knowledge-focused models representing contextually relevant information...

  18. Oral sensitization to food proteins: A Brown Norway rat model

    NARCIS (Netherlands)

    Knippels, L.M.J.; Penninks, A.H.; Spanhaak, S.; Houben, G.F.

    1998-01-01

    Background: Although several in vivo antigenicity assays using parenteral immunization are operational, no adequate enteral sensitization models are available to study food allergy and allergenicity of food proteins. Objective: This paper describes the development of an enteral model for food

  19. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for a program predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients, applying sparse matrix technology to chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28 and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
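    The direct method amounts to integrating the model ODEs together with the auxiliary sensitivity equations dS/dt = J S + df/dk. A minimal scipy sketch for a single first-order reaction A -> B (a stand-in for a real mechanism, with the BDF option playing the role of the Gear-type stiff solver) is:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k = 0.5                      # rate constant for A -> B (arbitrary value)

    def rhs(t, z):
        # z = [A, B, S_Ak, S_Bk] where S_xk = d[x]/dk are the sensitivity coefficients.
        A, B, S_Ak, S_Bk = z
        dA, dB = -k * A, k * A
        # Sensitivity equations: dS/dt = J @ S + df/dk, with J = [[-k, 0], [k, 0]].
        dS_Ak = -k * S_Ak - A
        dS_Bk = k * S_Ak + A
        return [dA, dB, dS_Ak, dS_Bk]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], method="BDF", dense_output=True)
    t = 5.0
    A, B, S_Ak, S_Bk = sol.sol(t)
    print(f"at t={t}: A={A:.4f}, dA/dk={S_Ak:.4f} (analytic {-t * np.exp(-k * t):.4f})")
    ```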

  20. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

    OpenAIRE

    Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

    2014-01-01

    The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of and the optimal testing interval between the STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

  1. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate with computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.

  2. Exploratory rearing: a context- and stress-sensitive behavior recorded in the open-field test.

    Science.gov (United States)

    Sturman, Oliver; Germain, Pierre-Luc; Bohacek, Johannes

    2018-02-16

    Stressful experiences are linked to anxiety disorders in humans. Similar effects are observed in rodent models, where anxiety is often measured in classic conflict tests such as the open-field test. Spontaneous rearing behavior, in which rodents stand on their hind legs to explore, can also be observed in this test yet is often ignored. We define two forms of rearing, supported rearing (in which the animal rears against the walls of the arena) and unsupported rearing (in which the animal rears without contacting the walls of the arena). Using an automated open-field test, we show that both rearing behaviors appear to be strongly context dependent and show clear sex differences, with females rearing less than males. We show that unsupported rearing is sensitive to acute stress, and is reduced under more averse testing conditions. Repeated testing and handling procedures lead to changes in several parameters over varying test sessions, yet unsupported rearing appears to be rather stable within a given animal. Rearing behaviors could therefore provide an additional measure of anxiety in rodents relevant for behavioral studies, as they appear to be highly sensitive to context and may be used in repeated testing designs.

  3. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the 4 sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.

  4. Highly sensitive multianalyte immunochromatographic test strip for rapid chemiluminescent detection of ractopamine and salbutamol

    International Nuclear Information System (INIS)

    Gao, Hongfei; Han, Jing; Yang, Shijia; Wang, Zhenxing; Wang, Lin; Fu, Zhifeng

    2014-01-01

    Graphical abstract: A multianalyte immunochromatographic test strip was developed for the rapid detection of two β2-agonists. Due to the application of chemiluminescent detection, this quantitative method shows much higher sensitivity. - Highlights: • An immunochromatographic test strip was developed for detection of multiple β2-agonists. • The whole assay process can be completed within 20 min. • The proposed method shows much higher sensitivity due to the application of CL detection. • It is a portable analytical tool suitable for field analysis and rapid screening. - Abstract: A novel immunochromatographic assay (ICA) was proposed for rapid and multiple assay of β2-agonists, by utilizing ractopamine (RAC) and salbutamol (SAL) as the models. Owing to the introduction of the chemiluminescent (CL) approach, the proposed protocol shows much higher sensitivity. In this work, the described ICA was based on a competitive format, and horseradish peroxidase-tagged antibodies were used as highly sensitive CL probes. Quantitative analysis of β2-agonists was achieved by recording the CL signals of the probes captured on the two test zones of the nitrocellulose membrane. Under the optimum conditions, RAC and SAL could be detected within the linear ranges of 0.50–40 and 0.10–50 ng mL^-1, with detection limits of 0.20 and 0.040 ng mL^-1 (S/N = 3), respectively. The whole process for multianalyte immunoassay of RAC and SAL can be completed within 20 min. Furthermore, the test strip was validated with spiked swine urine samples and the results showed that this method was reliable in measuring β2-agonists in swine urine. This CL-based multianalyte test strip shows a series of advantages such as high sensitivity, ideal selectivity, simple manipulation, high assay efficiency and low cost. Thus, it opens up a new pathway for rapid screening and field analysis, and shows a promising prospect in food safety

  5. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  6. Sensitivity analysis for near-surface disposal in argillaceous media using NAMMU-HYROCOIN Level 3-Test case 1

    International Nuclear Information System (INIS)

    Miller, D.R.; Paige, R.W.

    1988-07-01

    HYDROCOIN is an international project for comparing groundwater flow models and modelling strategies. Level 3 of the project concerns the application of groundwater flow models to repository performance assessment with emphasis on the treatment of sensitivity and uncertainty in models and data. Level 3, test case 1 concerns sensitivity analysis of the groundwater flow around a radioactive waste repository situated in a near surface argillaceous formation. Work on this test case has been carried out by Harwell and will be reported in full in the near future. This report presents the results obtained using the computer program NAMMU. (author)

  7. Sensitivity of submersed freshwater macrophytes and endpoints in laboratory toxicity tests

    International Nuclear Information System (INIS)

    Arts, Gertie H.P.; Belgers, J. Dick M.; Hoekzema, Conny H.; Thissen, Jac T.N.M.

    2008-01-01

    The toxicological sensitivity and variability of a range of macrophyte endpoints were statistically tested with data from chronic, non-axenic, macrophyte toxicity tests. Five submersed freshwater macrophytes, four pesticides/biocides and 13 endpoints were included in the statistical analyses. Root endpoints, reflecting root growth, were most sensitive in the toxicity tests, while endpoints relating to biomass, growth and shoot length were less sensitive. The endpoints with the lowest coefficients of variation were not necessarily the endpoints, which were toxicologically most sensitive. Differences in sensitivity were in the range of 10-1000 for different macrophyte-specific endpoints. No macrophyte species was consistently the most sensitive. Criteria to select endpoints in macrophyte toxicity tests should include toxicological sensitivity, variance and ecological relevance. Hence, macrophyte toxicity tests should comprise an array of endpoints, including very sensitive endpoints like those relating to root growth. - A range of endpoints is more representative of macrophyte fitness than biomass and growth only

  8. Variation of strain rate sensitivity index of a superplastic aluminum alloy in different testing methods

    Science.gov (United States)

    Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab

    2017-10-01

    The strain rate sensitivity index, m-value, is being applied as a common tool to evaluate the impact of the strain rate on the viscoplastic behaviour of materials. The m-value, as a constant number, has been frequently taken into consideration for modeling material behaviour in the numerical simulation of superplastic forming processes. However, the impact of the testing variables on the measured m-values has not been investigated comprehensively. In this study, the m-value for a superplastic grade of an aluminum alloy (i.e., AA5083) has been investigated. The conditions and the parameters that influence the strain rate sensitivity for the material are compared with three different testing methods, i.e., monotonic uniaxial tension test, strain rate jump test and stress relaxation test. All tests were conducted at elevated temperature (470°C) and at strain rates up to 0.1 s-1. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
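
    For reference, the strain rate sensitivity index discussed above is conventionally defined from the flow stress–strain rate relation; a common estimate from a strain rate jump test is also shown. These are standard textbook expressions, not values or forms specific to the AA5083 data of this record.

```latex
\[
  m \;=\; \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon,\,T},
  \qquad
  m_{\text{jump}} \;\approx\; \frac{\ln(\sigma_2/\sigma_1)}{\ln(\dot{\varepsilon}_2/\dot{\varepsilon}_1)},
\]
```

    where σ is the flow stress and ε̇ the strain rate, evaluated at fixed strain and temperature; the jump-test estimate compares the stresses measured just before and after an abrupt change from strain rate ε̇₁ to ε̇₂, which is one reason the measured m-value depends on strain level and testing method.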

  9. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using Morris one-at-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
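
    A minimal sketch of the Latin hypercube sampling step used to propagate parameter uncertainty in assessments like this one. The scipy `qmc` module is a generic tool choice, and the three example inputs (emanation coefficient, effective diffusion coefficient, inventory) with their ranges are illustrative assumptions, not the study's actual parameter set or flux density model.

```python
# Illustrative Latin hypercube sampling for Monte Carlo uncertainty propagation.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_sample = sampler.random(n=1000)          # stratified points in [0, 1)^3

# Hypothetical ranges: emanation coefficient [-], effective diffusion
# coefficient [m^2/s], Ra-226 inventory [Bq/m^3] (placeholders only).
lower = [0.1, 1.0e-7, 1.0e3]
upper = [0.4, 5.0e-6, 5.0e4]
params = qmc.scale(unit_sample, lower, upper)

def flux_density_surrogate(emanation, d_eff, inventory):
    """Placeholder surrogate standing in for an Rn-222 flux density model."""
    return inventory * emanation * np.sqrt(d_eff)

flux = flux_density_surrogate(params[:, 0], params[:, 1], params[:, 2])
print("mean flux:", flux.mean(), " 95th percentile:", np.percentile(flux, 95))
```

    The resulting input–output sample is exactly what screening, regression-based, and variance-based sensitivity measures (Morris, extended FAST, Sobol') are then applied to.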

  10. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell Non-Nstec Authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using Morris one-at-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory

  11. NET model coil test possibilities

    International Nuclear Information System (INIS)

    Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.

    1987-11-01

    A single full size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money or even more for the 16 TF coils is invested as much risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report is almost exclusively dealing with such a verification test by model coil testing. These coils will be built out of two Nb 3 Sn-conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: A cluster facility, where two model coils out of the two Nb 3 TF-conductors are used, and the already tested LCT-coils producing a background field. A solenoid arrangement, where in addition to the two TF model coils another model coil out of a PF-conductor for the central PF-coils of NET/INTOR is used instead of LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Costs estimates and the time schedules broaden the base for a decision about the realisation of such a facility. (orig.) [de

  12. Regional climate model sensitivity to domain size

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)

    2009-05-15

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 x 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)

  13. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface- based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  14. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface- based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document

  15. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  16. Regional climate model sensitivity to domain size

    Science.gov (United States)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.

  17. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF), observed from the sample size. The two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
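
    A compact sketch of the Mantel-Haenszel DIF statistic referred to above, computed from 2×2 tables stratified by total test score. The simulated data, group sizes, and effect size below are placeholders for illustration, not the study's design.

```python
# Illustrative Mantel-Haenszel common odds ratio for DIF on one item,
# stratifying examinees by total test score (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_per_group = 200                                         # e.g., 200 reference + 200 focal examinees
score = rng.integers(0, 6, size=2 * n_per_group)          # stratifying variable (rest-score)
group = np.repeat([0, 1], n_per_group)                    # 0 = reference, 1 = focal
p_correct = 0.3 + 0.1 * score - 0.15 * group              # focal group disadvantaged (DIF present)
item = rng.random(2 * n_per_group) < p_correct

num = den = 0.0
for s in np.unique(score):
    m = score == s
    a = np.sum(m & (group == 0) & item)                   # reference, correct
    b = np.sum(m & (group == 0) & ~item)                  # reference, incorrect
    c = np.sum(m & (group == 1) & item)                   # focal, correct
    d = np.sum(m & (group == 1) & ~item)                  # focal, incorrect
    n_s = a + b + c + d
    if n_s:
        num += a * d / n_s
        den += b * c / n_s

alpha_mh = num / den
print("MH common odds ratio:", round(alpha_mh, 3))        # > 1 favours the reference group
```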

  18. EPA Releases Draft Policy to Reduce Animal Testing for Skin Sensitization

    Science.gov (United States)

    The document, Draft Interim Science Policy: Use of Alternative Approaches for Skin Sensitization as a Replacement for Laboratory Animal Testing, describes the science behind the non-animal alternatives that can now be used to identify skin sensitization.

  19. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 × CO2 sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
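
    For orientation, the zero-dimensional model (ZDM) referred to above is usually written as a single global energy balance; the sketch below gives its standard form and the resulting equilibrium sensitivity. These are generic textbook expressions, not the two-zone Models A and B of this record.

```latex
\[
  C\,\frac{dT}{dt} \;=\; F \;-\; \lambda\,T
  \qquad\Longrightarrow\qquad
  \Delta T_{\mathrm{eq}} \;=\; \frac{F}{\lambda},
  \qquad
  \Delta T_{2\times\mathrm{CO_2}} \;=\; \frac{F_{2\times\mathrm{CO_2}}}{\lambda},
\]
```

    where F is the top-of-the-atmosphere radiative forcing, λ the global-mean radiative response coefficient, and C an effective heat capacity; the two-zone models replace these global means with separate tropical and extratropical values coupled by a dynamical heat transport term.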

  20. sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)

    user

    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  1. Testing auditory sensitivity in the great cormorant (Phalacrocorax carbo sinensis)

    DEFF Research Database (Denmark)

    Maxwell, Alyssa; Hansen, Kirstin Anderson; Larsen, Ole Næsbye

    2016-01-01

    Psychoacoustic and electrophysiological methods were used to measure the in-air hearing sensitivity of the great cormorant (Phalacrocorax carbo sinensis). One individual was used to determine the behavioral thresholds, which was then compared to previously collected data on the auditory brainstem...

  2. Sensitivity and specificity of copper sulphate test in determining ...

    African Journals Online (AJOL)

    Background: The accuracy of the copper sulphate method for the rapid screening of prospective blood donors has been questioned because this rapid screening method may lead to false deferral of truly eligible prospective blood donors. Objective: This study was aimed at determining the sensitivity and specificity of copper ...

  3. A context-sensitive trust model for online social networking

    CSIR Research Space (South Africa)

    Danny, MN

    2016-11-01

    Full Text Available of privacy attacks. In the quest to address this problem, this paper proposes a context-sensitive trust model. The proposed trust model was designed using fuzzy logic theory and implemented using MATLAB. Contrary to existing trust models, the context...

  4. Collaborative testing of turbulence models

    Science.gov (United States)

    Bradshaw, P.

    1992-12-01

    This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from lab. experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters')--over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has appeared, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.

  5. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on the Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables are contributing significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
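
    A minimal sketch of the two sensitivity measures named above, applied to Latin hypercube samples from a placeholder model rather than to SWMM itself; the parameter names and the toy output function are invented for illustration.

```python
# Illustrative PRCC and mutual-information sensitivity measures on LHS samples
# from a toy model (stand-in for SWMM; parameter names are hypothetical).
import numpy as np
from scipy.stats import qmc, rankdata, pearsonr
from sklearn.feature_selection import mutual_info_regression

names = ["imperviousness", "roughness", "depression_storage"]
sampler = qmc.LatinHypercube(d=len(names), seed=0)
X = qmc.scale(sampler.random(n=500), [0.1, 0.01, 0.5], [0.9, 0.05, 5.0])
y = X[:, 0] ** 2 + np.sin(X[:, 1] * 50) + 0.1 * X[:, 2]   # toy runoff output

def prcc(X, y, j):
    """Partial rank correlation of parameter j with output y."""
    R = np.apply_along_axis(rankdata, 0, X)
    ry = rankdata(y)
    others = np.column_stack([np.delete(R, j, axis=1), np.ones(len(y))])
    res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
    res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)[0]

mi = mutual_info_regression(X, y, random_state=0)
for j, name in enumerate(names):
    print(f"{name:20s} PRCC={prcc(X, y, j):+.3f}  MI={mi[j]:.3f}")
```

    In this toy setup the oscillatory roughness term is picked up by mutual information but largely missed by PRCC, which mirrors the paper's point that PRCC captures only monotonic relationships.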

  6. Investigation of modern methods of probalistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  7. Investigation of modern methods of probalistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  8. Sensitivity, Specificity, and Positivity Predictors of the Pneumococcal Urinary Antigen Test in Community-Acquired Pneumonia.

    Science.gov (United States)

    Molinos, Luis; Zalacain, Rafael; Menéndez, Rosario; Reyes, Soledad; Capelastegui, Alberto; Cillóniz, Catia; Rajas, Olga; Borderías, Luis; Martín-Villasclaras, Juan J; Bello, Salvador; Alfageme, Inmaculada; Rodríguez de Castro, Felipe; Rello, Jordi; Ruiz-Manzano, Juan; Gabarrús, Albert; Musher, Daniel M; Torres, Antoni

    2015-10-01

    Detection of the C-polysaccharide of Streptococcus pneumoniae in urine by an immunochromatographic test is increasingly used to evaluate patients with community-acquired pneumonia. We assessed the sensitivity and specificity of this test in the largest series of cases to date and used logistic regression models to determine predictors of positivity in patients hospitalized with community-acquired pneumonia. We performed a multicenter, prospective, observational study of 4,374 patients hospitalized with community-acquired pneumonia. The urinary antigen test was done in 3,874 cases. Pneumococcal infection was diagnosed in 916 cases (21%); 653 (71%) of these cases were diagnosed exclusively by the urinary antigen test. Sensitivity and specificity were 60 and 99.7%, respectively. Predictors of urinary antigen positivity were female sex; heart rate ≥125 bpm; systolic blood pressure; antibiotic treatment; pleuritic chest pain; chills; pleural effusion; and blood urea nitrogen ≥30 mg/dl. With at least six of all these predictors present, the probability of positivity was 52%. With only one factor present, the probability was only 12%. The urinary antigen test is a method with good sensitivity and excellent specificity in diagnosing pneumococcal pneumonia, and its use greatly increased the recognition of community-acquired pneumonia due to S. pneumoniae. With a specificity of 99.7%, this test could be used to direct simplified antibiotic therapy, thereby avoiding excess costs and risk for bacterial resistance that result from broad-spectrum antibiotics. We also identified predictors of positivity that could increase suspicion for pneumococcal infection or avoid the unnecessary use of this test.
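
    Using only the figures reported in the abstract (sensitivity 60%, specificity 99.7%, and a 21% prevalence of pneumococcal infection in the cohort), a quick Bayes calculation shows what positive and negative predictive values those numbers would imply. This is an illustration derived from the reported figures, not a result stated in the study.

```python
# Predictive values implied by the reported sensitivity, specificity, and prevalence.
sens, spec, prev = 0.60, 0.997, 0.21

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(f"PPV ~ {ppv:.1%}   NPV ~ {npv:.1%}")   # roughly 98% and 90%
```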

  9. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    International Nuclear Information System (INIS)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya

    2018-01-01

    The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by the KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels) and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4 channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization finesse. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  10. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya [KEPCO International Nuclear Graduate School (KINGS), Ulsan (Korea, Republic of). Dept. of NPP Engineering

    2018-02-15

    The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by the KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels) and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4 channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization finesse. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  11. BIOMOVS test scenario model comparison using BIOPATH

    International Nuclear Information System (INIS)

    Grogan, H.A.; Van Dorp, F.

    1986-07-01

    This report presents the results of the irrigation test scenario, presented in the BIOMOVS intercomparison study, calculated by the computer code BIOPATH. This scenario defines a constant release of Tc-99 and Np-237 into groundwater that is used for irrigation. The system of compartments used to model the biosphere is based upon an area in northern Switzerland and is essentially the same as that used in Projekt Gewaehr to assess the radiological impact of a high level waste repository. Two separate irrigation methods are considered, namely ditch and overhead irrigation. Their influence on the resultant activities calculated in the groundwater, soil and different food products, as a function of time, is evaluated. The sensitivity of the model to parameter variations is analysed, which allows a deeper understanding of the model chain. These results are assessed subjectively in a first effort to realistically quantify the uncertainty associated with each calculated activity. (author)

  12. Polarization sensitivity testing of off-plane reflection gratings

    Science.gov (United States)

    Marlowe, Hannah; McEntaffer, Randal L.; DeRoo, Casey T.; Miles, Drew M.; Tutt, James H.; Laubis, Christian; Soltwisch, Victor

    2015-09-01

    Off-Plane reflection gratings were previously predicted to have different efficiencies when the incident light is polarized in the transverse-magnetic (TM) versus transverse-electric (TE) orientations with respect to the grating grooves. However, more recent theoretical calculations which rigorously account for finitely conducting, rather than perfectly conducting, grating materials no longer predict significant polarization sensitivity. We present the first empirical results for radially ruled, laminar groove profile gratings in the off-plane mount which demonstrate no difference in TM versus TE efficiency across our entire 300-1500 eV bandpass. These measurements together with the recent theoretical results confirm that grazing incidence off-plane reflection gratings using real, not perfectly conducting, materials are not polarization sensitive.

  13. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport model, i.e., Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of nuclear symmetry energy are simulated and compared. It is shown that neutron to proton ratio of free nucleons, π−/π+ ratio as well as isospin-sensitive transverse and elliptic flows given by the two transport models with their “best settings”, all have obvious differences. Discrepancy of numerical value of isospin-sensitive n/p ratio of free nucleon from the two models mainly originates from different symmetry potentials used and discrepancies of numerical value of charged π−/π+ ratio and isospin-sensitive flows mainly originate from different isospin-dependent nucleon–nucleon cross sections. These demonstrations call for more detailed studies on the model inputs (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent nucleon–nucleon cross section in medium) of isospin-dependent transport model used. The studies of model dependence of isospin sensitive observables can help nuclear physicists to pin down the density dependence of nuclear symmetry energy through comparison between experiments and theoretical simulations scientifically.

  14. Sensitivity and specificity of the nickel spot (dimethylglyoxime) test

    DEFF Research Database (Denmark)

    Thyssen, Jacob P; Skare, Lizbet; Lundgren, Lennart

    2010-01-01

    The accuracy of the dimethylglyoxime (DMG) nickel spot test has been questioned because of false negative and positive test reactions. The EN 1811, a European standard reference method developed by the European Committee for Standardization (CEN), is fine-tuned to estimate nickel release around the limit value of the EU Nickel Directive from products intended to come into direct and prolonged skin contact. Because assessments according to EN 1811 are expensive to perform, time consuming, and may destroy the test item, it should be of great value to know the accuracy of the DMG screening test....

  15. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases

  16. The identification of model effective dimensions using global sensitivity analysis

    International Nuclear Information System (INIS)

    Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang

    2011-01-01

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
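
    The two notions of effective dimension used above are standard in the quasi-Monte Carlo literature; written in terms of normalized Sobol' indices S_u for subsets u of input variables, they can be stated as below (generic definitions, not expressions taken from this record).

```latex
\[
  d_{T} \;=\; \min\Bigl\{\, s \;:\; \textstyle\sum_{u \subseteq \{1,\dots,s\}} S_u \;\ge\; 1-\varepsilon \Bigr\}
  \quad\text{(truncation sense)},
\]
\[
  d_{S} \;=\; \min\Bigl\{\, s \;:\; \textstyle\sum_{|u| \le s} S_u \;\ge\; 1-\varepsilon \Bigr\}
  \quad\text{(superposition sense)},
\]
```

    with ε a small tolerance (often taken as 0.01); the superposition dimension can be bounded from the first-order and total indices alone, which is why those quantities suffice for the estimate described in the abstract.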

  17. The identification of model effective dimensions using global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, Sergei, E-mail: s.kucherenko@ic.ac.u [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Feil, Balazs [Department of Process Engineering, University of Pannonia, Veszprem (Hungary); Shah, Nilay [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Mauntz, Wolfgang [Lehrstuhl fuer Anlagensteuerungstechnik, Fachbereich Chemietechnik, Universitaet Dortmund (Germany)

    2011-04-15

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.

  18. Sensitivity Tests for the Unprotected Events of the Prototype Gen-IV SFR

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chiwoong; Lee, Kwilim; Jeong, Jaeho; Yu, Jin; An, Sangjun; Lee, Seung Won; Chang, Wonpyo; Ha, Kwiseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Unprotected Transient Over Power (UTOP), Unprotected Loss Of Flow (ULOF), and Unprotected Loss Of Heat Sink (ULOHS) are selected as ATWS events. Among these accidents, the ULOF event shows the lowest clad temperature. However, the ULOHS event showed the highest peak clad temperature, due to the positive CRDL/RV expansion reactivity feedback and insufficient DHRS capacity. In this study, sensitivity tests are conducted. In the case of the UTOP event, a sensitivity test for the reactivity insertion amount and rate was conducted. This analysis can give a requirement for the margin of the control rod stop system (CRSS). Currently, the reactivity feedback model for the PGSFR is not yet validated. However, the reactivity feedback models in MARS-LMR are being validated with various plant-based data including EBR-II SHRT. The ATWS events for the PGSFR classified in the design extended condition, including UTOP, ULOF, and ULOHS, are analyzed with MARS-LMR. In this study, sensitivity tests for the reactivity insertion amount and rate in the UTOP event are conducted. The reactivity insertion amount is obviously an influential parameter. The reactivity insertion amount can give a requirement for the design of the CRSS; therefore, this sensitivity result is very important to the CRSS. In addition, sensitivity tests for the weighting factor in the radial expansion reactivity model are carried out. The weighting factor for the grid plate, W_GP, which represents the contribution of the feedback in the grid plate, is changed for all unprotected events. The grid plate expansion is governed by the core inlet temperature. As W_GP is increased, the power in the UTOP and the ULOF is increased; however, the power in the ULOHS is decreased. Higher power during the transient means lower reactivity feedback and smaller expansion. Thus, the core outlet temperature rise is dominant in the UTOP and ULOF events, whereas the core inlet temperature rise is dominant in the ULOHS. Therefore, the grid plate

  19. Sensitivity and specificity of the nickel spot (dimethylglyoxime) test

    DEFF Research Database (Denmark)

    Thyssen, Jacob P; Skare, Lizbet; Lundgren, Lennart

    2010-01-01

    The accuracy of the dimethylglyoxime (DMG) nickel spot test has been questioned because of false negative and positive test reactions. The EN 1811, a European standard reference method developed by the European Committee for Standardization (CEN), is fine-tuned to estimate nickel release around...

  20. Sensitizing Undergraduates to Potential Inaccuracies in Projective Test Interpretation.

    Science.gov (United States)

    Barret, Robert L.; Wachowiak, Dale G.

    This paper describes a methodology developed to provide undergraduate students with direct experience in the process of impressionistic test interpretation. In the experiential exercise, students were shown Thematic Apperception Test cards and then read the responses given by an anonymous client. A discussion of the process by which the students…

  1. 115 THE SENSITIVITY OF DIAZO TEST IN THE DIAGNOSIS OF ...

    African Journals Online (AJOL)

    Salmonella typhi was the predominant serotype causing typhoid/paratyphoid fevers, followed by S. paratyphi A, S. paratyphi C and S. paratyphi B respectively. Although the Diazo test does not appear to be reliable, it could still be useful alongside the Widal agglutination test in endemic rural or urban areas where electricity and ...

  2. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
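
    A minimal sketch of the SRRC computation described above, applied to synthetic data rather than to IMM output; the condition counts, coefficients, and outcome variable are placeholders invented for illustration.

```python
# Illustrative standardized rank regression coefficients (SRRC) on synthetic data
# (stand-in for IMM condition counts vs. a mission outcome such as quality time lost).
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n = 500
X = rng.poisson(lam=[2.0, 0.5, 1.0], size=(n, 3)).astype(float)  # hypothetical condition counts
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)      # hypothetical outcome

def srrc(X, y):
    """Regression coefficients on rank-transformed, standardized data."""
    Xr = np.apply_along_axis(rankdata, 0, X)
    yr = rankdata(y)
    Xs = (Xr - Xr.mean(axis=0)) / Xr.std(axis=0)
    ys = (yr - yr.mean()) / yr.std()
    coef, *_ = np.linalg.lstsq(np.column_stack([Xs, np.ones(len(y))]), ys, rcond=None)
    return coef[:-1]

print("SRRCs:", np.round(srrc(X, y), 3))
```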

  3. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
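
    The "computer calculus" idea described here, a compiler augmenting code with derivative propagation, can be illustrated with a small forward-mode dual-number class. This is a generic sketch of the principle, not the GRESS system or its FORTRAN implementation.

```python
# Minimal forward-mode automatic differentiation with dual numbers,
# illustrating the derivative propagation that systems like GRESS automate.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def exp(x):
    return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

# Sensitivity of a toy model response R(k) = k * exp(-k) to parameter k at k = 2
k = Dual(2.0, 1.0)                   # seed derivative dk/dk = 1
R = k * exp(Dual(-1.0) * k)
print("R =", R.value, " dR/dk =", R.deriv)   # analytic: (1 - k) e^{-k} = -e^{-2}
```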

  4. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  5. Long-term repeatability of the skin prick test is high when supported by history or allergen-sensitivity tests

    DEFF Research Database (Denmark)

    Bødtger, Uffe; Jacobsen, C R; Poulsen, L K

    2003-01-01

    subjects. An SPT was positive when ≥3 mm, and repeatable if either persistently positive or negative. Clinical sensitivity to birch pollen was used as a model for inhalation allergy, and was investigated at inclusion and at study termination by challenge tests, intradermal test, titrated SPT and IgE measurements. Birch pollen symptoms were confirmed in diaries. RESULTS: The repeatability of a positive SPT was 67%, increasing significantly to 100% when supported by the history. When not supported by history, the presence of specific IgE was significantly associated with a repeatable SPT. Allergen.... CONCLUSION: SPT changes are clinically relevant. Further studies using other allergens are needed. Long-term repeatability of SPT is high in the presence of a supportive history....

  6. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  7. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
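
    A minimal numerical sketch of the adjoint idea described above: for a steady linear model A(p)x = b and a scalar response R = cᵀx, a single adjoint solve yields the sensitivity of R to every parameter. The 3×3 system below is a made-up example for illustration, not an ADGEN-generated model.

```python
# Adjoint sensitivity of R = c^T x for A(p) x = b: one adjoint solve gives dR/dp_k
# for every parameter (toy 3x3 system; finite differences used only as a check).
import numpy as np

def A_of(p):
    return np.array([[2.0 + p[0], 1.0,        0.0],
                     [1.0,        3.0 + p[1], 1.0],
                     [0.0,        1.0,        4.0 + p[2]]])

b = np.array([1.0, 2.0, 3.0])
c = np.array([1.0, 0.0, 1.0])
p = np.array([0.1, 0.2, 0.3])

A = A_of(p)
x = np.linalg.solve(A, b)            # forward solve
lam = np.linalg.solve(A.T, c)        # single adjoint solve: A^T lam = c

# Here dA/dp_k has a single 1 on the diagonal, so dR/dp_k = -lam_k * x_k
adjoint_sens = np.array([-lam[k] * x[k] for k in range(3)])

# Finite-difference check of each sensitivity coefficient
eps = 1e-6
fd = np.array([(c @ np.linalg.solve(A_of(p + eps * np.eye(3)[k]), b) - c @ x) / eps
               for k in range(3)])
print("adjoint:          ", np.round(adjoint_sens, 6))
print("finite difference:", np.round(fd, 6))
```

    The point of the adjoint formulation, and of tools like ADGEN that generate the required partial derivatives automatically, is that the cost is one extra solve regardless of how many parameters the model has.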

  8. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Aug 7, 2009 ... Sensitivity study of reduced models of the activated sludge process, for the purposes of parameter estimation and process optimisation: Benchmark process with ASM1 and UCT reduced biological models. S du Plessis and R Tzoneva*. Department of Electrical Engineering, Cape Peninsula University of ...

  9. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  10. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis to evaluate the sensitivity of the WAVEWATCH III model to the selected parameters, to determine how many of these parameters should be considered for further discussion, and to justify the significance priority of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. The procedure adopting optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs based on field observations at two buoys.

  11. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
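
    Variance-based sensitivity indices of the kind described here are commonly estimated with a pick-freeze Monte Carlo scheme (Saltelli-style first-order estimator, Jansen total-effect estimator). The sketch below uses a small analytic test function in place of the sea ice model, so the function, dimensions and sample sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy stand-in for an expensive model; x has shape (n, d)."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

d, n = 3, 100_000
A = rng.uniform(0.0, 1.0, size=(n, d))
B = rng.uniform(0.0, 1.0, size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # "pick-freeze": swap in column i from B
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y  # first-order (main effect) index
    ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # total-effect index (Jansen)
    print(f"parameter {i}: S={S_i:.3f}  ST={ST_i:.3f}")
```

    The gap between S and ST for a parameter signals non-additive (interaction) effects, which is what a variance-based analysis adds over one-at-a-time perturbations.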

  12. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
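
    A quick way to see why input correlations matter for uncertainty analysis is to propagate the same toy model once with independent inputs and once with correlated inputs, where the correlation is imposed through a Cholesky factor. All values in the sketch are illustrative and unrelated to the HIV model of the record.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def model(x1, x2):
    return 2.0 * x1 + 3.0 * x2          # toy linear response (illustrative)

# Independent standard-normal inputs
z = rng.standard_normal((n, 2))
y_indep = model(z[:, 0], z[:, 1])

# Correlated inputs with correlation rho, imposed via a Cholesky factor
rho = 0.8
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
x = z @ L.T
y_corr = model(x[:, 0], x[:, 1])

print("output variance, independent inputs:", round(y_indep.var(), 2))  # ~ 2^2 + 3^2 = 13
print("output variance, correlated inputs :", round(y_corr.var(), 2))   # ~ 13 + 2*2*3*rho = 22.6
```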

  13. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e⁺e⁻ colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼10⁸ produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.

  14. Method matters: systematic effects of testing procedure on visual working memory sensitivity.

    Science.gov (United States)

    Makovski, Tal; Watson, Leah M; Koutstaal, Wilma; Jiang, Yuhong V

    2010-11-01

    Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This article presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the testing procedure, supporting the idea that representations in visual WM are susceptible to interference from testing. In the study, participants were shown an array of colors to remember. After a short retention interval, memory for one of the items was tested with either a same-different task or a 2-alternative-forced-choice (2AFC) task. Memory sensitivity was much lower in the 2AFC task than in the same-different task. This difference was found regardless of encoding similarity or of whether visual WM required a fine or coarse memory resolution. The 2AFC disadvantage was reduced when participants were informed shortly before testing which item would be probed. The 2AFC disadvantage diminished in perceptual tasks and was not found in tasks probing visual long-term memory. These results support memory models that acknowledge the labile nature of visual WM and have implications for the format of visual WM and its assessment. (c) 2010 APA, all rights reserved

  15. Computer models versus reality: how well do in silico models currently predict the sensitization potential of a substance.

    Science.gov (United States)

    Teubner, Wera; Mehling, Anette; Schuster, Paul Xaver; Guth, Katharina; Worth, Andrew; Burton, Julien; van Ravenzwaay, Bennard; Landsiedel, Robert

    2013-12-01

    National legislations for the assessment of the skin sensitization potential of chemicals are increasingly based on the globally harmonized system (GHS). In this study, experimental data on 55 non-sensitizing and 45 sensitizing chemicals were evaluated according to GHS criteria and used to test the performance of computer (in silico) models for the prediction of skin sensitization. Statistical models (Vega, Case Ultra, TOPKAT), mechanistic models (Toxtree, OECD (Q)SAR toolbox, DEREK) and a hybrid model (TIMES-SS) were evaluated. Between three and nine of the substances evaluated were found in the individual training sets of the various models. Mechanism-based models performed better than statistical models and gave better predictivities depending on the stringency of the domain definition. The best performance was achieved by TIMES-SS, with a perfect prediction, although only 16% of the substances were within its reliability domain. Some models offer modules for potency; however, the predictions did not correlate well with the GHS sensitization subcategory derived from the experimental data. In conclusion, although mechanistic models can be used to a certain degree under well-defined conditions, at present the in silico models are not sufficiently accurate for broad application to predict skin sensitization potentials. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Test model of WWER core

    International Nuclear Information System (INIS)

    Tikhomirov, A. V.; Gorokhov, A. K.

    2007-01-01

    The objective of this paper is the creation of a precision test model for WWER RP neutron-physics calculations. The model is considered a tool for verification of deterministic computer codes that makes it possible to reduce the conservatism of design calculations and enhance WWER RP competitiveness. Precision calculations were performed using the code MCNP5 /1/ (Monte Carlo method). The engineering computer package Sapfir_95&RC_VVER /2/, certified for design calculations of WWER RP neutron-physics characteristics, is used in a comparative analysis of the results. The object of simulation is the first fuel loading of the Volgodon NPP RP. Peculiarities of the transition from 2D to 3D geometry in the MCNP5 calculation are shown on the full-scale model. All core components, as well as the radial and end reflectors and the control and protection system control rods (including the automatic regulation group), are represented in detail according to the design. The first stage of application of the model is an assessment of the accuracy of the core power calculation. At the second stage, the control and protection system control rod worth was assessed. Full-scale RP representation in an MCNP5 calculation is time consuming, which calls for parallelization of the computational problem on a multiprocessor computer. (Authors)

  17. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  18. Highly sensitive multianalyte immunochromatographic test strip for rapid chemiluminescent detection of ractopamine and salbutamol

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Hongfei; Han, Jing; Yang, Shijia; Wang, Zhenxing; Wang, Lin; Fu, Zhifeng, E-mail: fuzf@swu.edu.cn

    2014-08-11

    Graphical abstract: A multianalyte immunochromatographic test strip was developed for the rapid detection of two β2-agonists. Due to the application of chemiluminescent detection, this quantitative method shows much higher sensitivity. - Highlights: • An immunochromatographic test strip was developed for detection of multiple β2-agonists. • The whole assay process can be completed within 20 min. • The proposed method shows much higher sensitivity due to the application of CL detection. • It is a portable analytical tool suitable for field analysis and rapid screening. - Abstract: A novel immunochromatographic assay (ICA) was proposed for rapid and multiple assay of β2-agonists, utilizing ractopamine (RAC) and salbutamol (SAL) as the models. Owing to the introduction of the chemiluminescent (CL) approach, the proposed protocol shows much higher sensitivity. In this work, the described ICA was based on a competitive format, and horseradish peroxidase-tagged antibodies were used as highly sensitive CL probes. Quantitative analysis of β2-agonists was achieved by recording the CL signals of the probes captured on the two test zones of the nitrocellulose membrane. Under the optimum conditions, RAC and SAL could be detected within the linear ranges of 0.50-40 and 0.10-50 ng/mL, with detection limits of 0.20 and 0.040 ng/mL (S/N = 3), respectively. The whole process for multianalyte immunoassay of RAC and SAL can be completed within 20 min. Furthermore, the test strip was validated with spiked swine urine samples and the results showed that this method was reliable in measuring β2-agonists in swine urine. This CL-based multianalyte test strip shows a series of advantages such as high sensitivity, ideal selectivity, simple manipulation, high assay efficiency and low cost. Thus, it opens up a new pathway for rapid screening and field analysis, and shows a promising prospect in food safety.

  19. Design and Validation of a Straight-Copy Typewriting Prognostic Test Using Kinesthetic Sensitivity.

    Science.gov (United States)

    Olson, Norma Jean

    1979-01-01

    Describes the development and application of a kinesthetic sensitivity test to determine whether it is a valid and reliable measure of straight-copy typing speed and accuracy. The author states that this kinesthetic sensitivity instrument may be used as a prognostic aptitude test and recommends administration methods. (MF)

  20. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillite and bentonite) materials. The large number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the assumed initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of the mineralogy changes). This was done through sensitivity analysis on both the concrete composition and the clay mineralogical assemblage, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculations. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of the concrete phases, (2) the type of clayey material and (3) the choice of secondary phases that are allowed to precipitate during the calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both

  1. Radionuclide transit: a sensitive screening test for esophageal dysfunction

    International Nuclear Information System (INIS)

    Russell, C.O.; Hill, L.D.; Holmes, E.R. III; Hull, D.A.; Gannon, R.; Pope, C.E. II.

    1981-01-01

    The purpose of this study was to extend existing nuclear medicine techniques for the diagnosis of esophageal motor disorders. A standard homogeneous bolus of technetium-99m sulfur colloid in water was swallowed in the supine position under the collimator of a gamma camera linked to a microprocessor. Bolus transit was recorded at 0.4-s intervals, and the movie obtained was used to analyze transit in an objective manner. Ten normal volunteers and 30 subjects with dysphagia not related to mechanical obstruction were studied with this technique. Radionuclide transit studies detected a higher incidence of esophageal motor abnormality than manometry or radiology in the dysphagia group. In addition a definitive description of the functional problem was possible in most cases. Radionuclide transit is a safe noninvasive test and suitable as a screening test for esophageal motor disorders

  2. Radionuclide transit: a sensitive screening test for esophageal dysfunction

    Energy Technology Data Exchange (ETDEWEB)

    Russell, C.O.; Hill, L.D.; Holmes, E.R. III; Hull, D.A.; Gannon, R.; Pope, C.E. II

    1981-05-01

    The purpose of this study was to extend existing nuclear medicine techniques for the diagnosis of esophageal motor disorders. A standard homogeneous bolus of technetium-99m sulfur colloid in water was swallowed in the supine position under the collimator of a gamma camera linked to a microprocessor. Bolus transit was recorded at 0.4-s intervals, and the movie obtained was used to analyze transit in an objective manner. Ten normal volunteers and 30 subjects with dysphagia not related to mechanical obstruction were studied with this technique. Radionuclide transit studies detected a higher incidence of esophageal motor abnormality than manometry or radiology in the dysphagia group. In addition a definitive description of the functional problem was possible in most cases. Radionuclide transit is a safe noninvasive test and suitable as a screening test for esophageal motor disorders.

  3. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  4. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  5. Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    Jianbin Guo

    2014-07-01

    Full Text Available Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the output variability of static models in general. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned. The reason is that a static sensitivity may not reflect the complete sensitivity over the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol' indices are employed as the time-dependent global sensitivity measures since they provide accurate information on the selected uncertain inputs. In order to compute Sobol' indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Sobol' indices can then be calculated analytically as a postprocessing of the time-dependent PCE coefficients with almost no additional cost. A test case is used to show how to conduct the proposed method; the approach is then applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.
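
    The key step described here, reading Sobol' indices off PCE coefficients, can be shown on a static analogue: with an orthonormal polynomial basis, the output variance is the sum of squared non-constant coefficients, and each first-order index collects the terms that involve only that variable. The sketch below fits a degree-2 Legendre PCE by least squares for two uniform inputs; the toy model, degrees and sample sizes are illustrative, and the time-dependent MLS machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Orthonormal Legendre polynomials for inputs uniform on [-1, 1]
phi = [lambda x: np.ones_like(x),
       lambda x: np.sqrt(3.0) * x,
       lambda x: np.sqrt(5.0) * 0.5 * (3.0 * x**2 - 1.0)]

# Total-degree-2 basis in two variables: multi-indices (i, j)
multi_index = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def model(x1, x2):
    """Toy model standing in for the degeneracy model (illustrative)."""
    return x1 + 0.5 * x2**2 + 0.3 * x1 * x2

# Regression-based PCE from random samples
n = 2000
x = rng.uniform(-1.0, 1.0, size=(n, 2))
y = model(x[:, 0], x[:, 1])
Psi = np.column_stack([phi[i](x[:, 0]) * phi[j](x[:, 1]) for i, j in multi_index])
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Sobol' indices directly from the PCE coefficients (no further model runs)
var_total = np.sum(coeffs[1:] ** 2)
S1 = sum(c**2 for c, (i, j) in zip(coeffs, multi_index) if i > 0 and j == 0) / var_total
S2 = sum(c**2 for c, (i, j) in zip(coeffs, multi_index) if j > 0 and i == 0) / var_total
S12 = sum(c**2 for c, (i, j) in zip(coeffs, multi_index) if i > 0 and j > 0) / var_total
print(f"S1={S1:.3f}  S2={S2:.3f}  S12={S12:.3f}")
```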

  6. Model test of boson mappings

    International Nuclear Information System (INIS)

    Navratil, P.; Dobes, J.

    1992-01-01

    Methods of boson mapping are tested in calculations for a simple model system of four protons and four neutrons in single-j distinguishable orbits. Two-body terms in the boson images of the fermion operators are considered. Effects of the seniority v=4 states are thus included. The treatment of unphysical states and the influence of boson space truncation are particularly studied. Both the Dyson boson mapping and the seniority boson mapping as dictated by the similarity transformed Dyson mapping do not seem to be simply amenable to truncation. This situation improves when the one-body form of the seniority image of the quadrupole operator is employed. Truncation of the boson space is addressed by using the effective operator theory with a notable improvement of results

  7. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  8. Sensitivity of wildlife habitat models to uncertainties in GIS data

    Science.gov (United States)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  9. Is Convection Sensitive to Model Vertical Resolution and Why?

    Science.gov (United States)

    Xie, S.; Lin, W.; Zhang, G. J.

    2017-12-01

    Model sensitivity to horizontal resolution has been studied extensively, whereas model sensitivity to vertical resolution is much less explored. In this study, we use the US Department of Energy (DOE)'s Accelerated Climate Modeling for Energy (ACME) atmosphere model to examine the sensitivity of clouds and precipitation to an increase in the vertical resolution of the model. We attempt to understand what causes the change in behavior (if any) of convective processes represented by the unified shallow and turbulent scheme named CLUBB (Cloud Layers Unified by Binormals) and the Zhang-McFarlane deep convection scheme in ACME. A short-term hindcast approach is used to isolate parameterization issues from the large-scale circulation. The analysis emphasizes how the change of vertical resolution could affect precipitation partitioning between convective and grid scale as well as the vertical profiles of convection-related quantities such as temperature, humidity, clouds, convective heating and drying, and entrainment and detrainment. The goal is to provide physical insight into potential issues with model convective processes associated with the increase of model vertical resolution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  11. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long time and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
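
    The Morris screening step used in the first stage of this framework ranks parameters by the mean absolute value and the standard deviation of elementary effects computed along random one-at-a-time trajectories. The compact sketch below applies a standard elementary-effects scheme to a toy three-parameter function; it is not the Xin'anjiang model, and the trajectory count and step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    """Toy stand-in for the hydrological model (illustrative)."""
    return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[2] + x[0] * x[1]

d, r, delta = 3, 50, 0.1            # number of factors, trajectories, step size
effects = np.zeros((r, d))

for t in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=d)   # random start inside the unit cube
    y = model(x)
    for i in rng.permutation(d):                # one-at-a-time moves in random order
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[t, i] = (y_new - y) / delta     # elementary effect of factor i
        x, y = x_new, y_new

mu_star = np.mean(np.abs(effects), axis=0)      # mean |EE|: overall importance
sigma = np.std(effects, axis=0)                 # spread: nonlinearity / interactions
for i in range(d):
    print(f"factor {i}: mu*={mu_star[i]:.3f}  sigma={sigma[i]:.3f}")
```

    Factors with small mu* can be fixed at nominal values, so that the more expensive variance-based (Sobol) stage only needs to cover the parameters that survive the screening.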

  12. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  13. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

    The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people. The global economy has become more complex than ever before. Mark Buchanan [12] argued that agent-based computer models could help prevent another financial crisis, and this view has been particularly influential. For these reasons, a culture-sensitive agent in the financial market has become important. The aim of this article is therefore to establish a culture-sensitive agent and to forecast the process of change in herding behavior in the financial market. We based our study on Kirman's ant model [4,5] and Hofstede's national culture dimensions [11] to establish our culture-sensitive agent-based model. Kirman's ant model is well known and describes herding behavior in financial markets in terms of investors' expectations about the future. Hofstede's Culture's Consequences study surveyed IBM staff in 72 different countries to characterize cultural differences. This paper therefore focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, to create a culture-sensitive agent and predict the process of change in herding behavior in the financial market. To conclude, this study will be of importance in explaining herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how the herding beliefs of people from different cultures relate to their financial market strategies.
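
    The recruitment dynamics underlying Kirman's ant model can be simulated in a few lines: at each step a randomly chosen agent either switches opinion spontaneously (probability epsilon) or is converted by a randomly met agent holding the opposite opinion, and for small epsilon the population spends long periods herded near one extreme. The parameter values below are illustrative choices (not taken from the paper) picked so that herding episodes are visible.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 100                 # number of agents
eps = 0.0002            # spontaneous (self-conversion) switching probability
recruit = 0.05          # probability of being recruited by a met agent of the other opinion

k = N // 2              # agents currently holding opinion "1"
history = []
for step in range(200_000):
    if rng.random() < k / N:                       # picked agent holds opinion "1"
        p_switch = eps + recruit * (N - k) / (N - 1)
        if rng.random() < p_switch:
            k -= 1
    else:                                          # picked agent holds opinion "0"
        p_switch = eps + recruit * k / (N - 1)
        if rng.random() < p_switch:
            k += 1
    history.append(k / N)

x = np.array(history)
print("mean share of opinion 1:", round(x.mean(), 3))
print("fraction of time in herding extremes (<10% or >90%):",
      round(np.mean((x < 0.1) | (x > 0.9)), 3))
```

    A culture-sensitive variant along the lines of the record could, for instance, make the recruitment probability depend on an individualism-collectivism score, though that extension is not shown here.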

  14. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    2008-02-01

    Full Text Available Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  15. ABSTRACT: CONTAMINANT TRAVEL TIMES FROM THE NEVADA TEST SITE TO YUCCA MOUNTAIN: SENSITIVITY TO POROSITY

    International Nuclear Information System (INIS)

    Karl F. Pohlmann; Jianting Zhu; Jenny B. Chapman; Charles E. Russell; Rosemary W. H. Carroll; David S. Shafer

    2008-01-01

    Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as a geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, we investigate the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to the YM area by estimating the timeframe for advective travel and its uncertainty resulting from porosity value uncertainty for hydrogeologic units (HGUs) in the region. We perform sensitivity analysis to determine the most influential HGUs on advective radionuclide travel times from the NTS to the YM area. Groundwater pathways and advective travel times are obtained using the particle tracking package MODPATH and flow results from the Death Valley Regional Flow System (DVRFS) model by the U.S. Geological Survey. Values and uncertainties of HGU porosities are quantified through evaluation of existing site porosity data and expert professional judgment and are incorporated through Monte Carlo simulations to estimate mean travel times and uncertainties. We base our simulations on two steady state flow scenarios for the purpose of long term prediction and monitoring. The first represents pre-pumping conditions prior to groundwater development in the area in 1912 (the initial stress period of the DVRFS model). The second simulates 1998 pumping (assuming steady state conditions resulting from pumping in the last stress period of the DVRFS model). Considering underground tests in a clustered region around Pahute Mesa on the NTS as initial particle positions, we track these particles forward using MODPATH to identify hydraulically downgradient groundwater discharge zones and to determine which flowpaths will intercept the YM area. Out of the 71 tests in the saturated zone, flowpaths of 23 intercept the YM area under the pre-pumping scenario. For the 1998 pumping scenario, flowpaths from 55 of the 71 tests intercept the YM area. The results illustrate that mean
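
    The porosity sensitivity discussed here follows from the Darcy relation for advective travel time: for a path of length L, hydraulic conductivity K, gradient i and effective porosity n_e, the seepage velocity is K*i/n_e, so t = L*n_e/(K*i) and travel time scales linearly with the sampled porosity. The Monte Carlo sketch below uses purely illustrative values that are not taken from the DVRFS model or the NTS-YM flow paths.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Illustrative (not site-specific) path and aquifer properties
L = 40_000.0                                               # path length, m
K = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)     # hydraulic conductivity, m/d
i = 1.0e-3                                                 # hydraulic gradient, -
n_e = rng.uniform(0.05, 0.25, size=n)                      # effective porosity, -

t_years = (L * n_e) / (K * i) / 365.25                     # advective travel time, yr

print(f"mean travel time: {t_years.mean():.0f} yr")
print("5th-95th percentile:", np.percentile(t_years, [5, 95]).round(0))

# Rank correlation shows how strongly the porosity uncertainty drives the travel-time spread
ranks = lambda a: np.argsort(np.argsort(a))
rho = np.corrcoef(ranks(n_e), ranks(t_years))[0, 1]
print("rank correlation (porosity, travel time):", round(rho, 3))
```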

  16. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya

    2017-10-03

    Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.
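
    One simple way to "introduce appropriate fluctuations in numerical forecast outputs", as described here, is a mean-reverting SDE centred on the deterministic forecast, dX_t = -theta (X_t - p_t) dt + sigma dW_t, simulated with an Euler-Maruyama scheme. The sketch below uses a made-up forecast curve and illustrative parameter values; the actual parametrization and the approximate maximum likelihood inference of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

T, dt = 48.0, 0.1                                   # horizon (h) and time step (h)
t = np.arange(0.0, T, dt)
forecast = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24.0)  # made-up forecast, as capacity factor

theta, sigma = 1.5, 0.08                            # illustrative mean reversion and noise level
n_paths = 500
X = np.empty((n_paths, t.size))
X[:, 0] = forecast[0]

for k in range(1, t.size):
    drift = -theta * (X[:, k - 1] - forecast[k - 1])
    X[:, k] = X[:, k - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    X[:, k] = np.clip(X[:, k], 0.0, 1.0)            # power stays within [0, 1] of capacity

mid = t.size // 2
band = np.percentile(X, [10, 90], axis=0)
print("forecast at mid-horizon:", forecast[mid].round(3))
print("10-90% band at mid-horizon:", band[:, mid].round(3))
```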

  17. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya; Kalligiannaki, Evangelia; Tempone, Raul

    2017-01-01

    Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.

  18. Pion interferometric tests of transport models

    Energy Technology Data Exchange (ETDEWEB)

    Padula, S.S.; Gyulassy, M.; Gavin, S. (Lawrence Berkeley Lab., CA (USA). Nuclear Science Div.)

    1990-01-08

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving gaussian and inside-outside phase space distributions are considered. (orig.).

  19. Pion interferometric tests of transport models

    International Nuclear Information System (INIS)

    Padula, S.S.; Gyulassy, M.; Gavin, S.

    1990-01-01

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving gaussian and inside-outside phase space distributions are considered. (orig.)

  20. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    Science.gov (United States)

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection thresholds for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. No existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and to build a model applicable in 3D space, for example for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics related to stereo blindness. In this paper, a CRT screen was rotated clockwise and anticlockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model can predict human chromatic contrast sensitivity characteristics in 3D space well.

  1. A simple in chemico method for testing skin sensitizing potential of chemicals using small endogenous molecules.

    Science.gov (United States)

    Nepal, Mahesh Raj; Shakya, Rajina; Kang, Mi Jeong; Jeong, Tae Cheon

    2018-06-01

    Among the many validated methods for testing skin sensitization, the direct peptide reactivity assay (DPRA) employs no cells or animals. Although no immune cells are involved in this assay, it reliably predicts the skin sensitization potential of a chemical in chemico. Herein, a new method was developed using endogenous small-molecular-weight compounds, cysteamine and glutathione, rather than synthetic peptides, to differentiate skin sensitizers from non-sensitizers with an accuracy as high as that of the DPRA. The percent depletion of cysteamine and glutathione by test chemicals was measured by HPLC equipped with a PDA detector. To detect small molecules such as cysteamine and glutathione, derivatization by 4-(4-dimethylaminophenylazo) benzenesulfonyl chloride (DABS-Cl) was employed prior to the HPLC analysis. Following test method optimization, a cut-off criterion of 7.14% depletion was applied to differentiate skin sensitizers from non-sensitizers, in combination with a 1:25 cysteamine:test chemical ratio and a 1:50 glutathione:test chemical ratio, which gave the best predictivity among the various single and combined conditions. Although overlapping HPLC peaks could not be fully resolved for some test chemicals, high levels of sensitivity (100.0%), specificity (81.8%), and accuracy (93.3%) were obtained for the 30 chemicals tested, which were comparable to or better than those achieved with the DPRA. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    Science.gov (United States)

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is progressing quickly. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation - to account for variability in the reference test results, here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies and provide estimates thereof.

  3. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    Science.gov (United States)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.

  4. SENSITIVITY ANALYSIS OF BIOME-BGC MODEL FOR DRY TROPICAL FORESTS OF VINDHYAN HIGHLANDS, INDIA

    OpenAIRE

    M. Kumar; A. S. Raghubanshi

    2012-01-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to...

  5. Stimulus Sensitivity of a Spiking Neural Network Model

    Science.gov (United States)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet almost critical for a range of biologically relevant parameters.

  6. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus ... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We ...
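
    A sensitivity map of the kind investigated here summarizes a trained kernel model by the squared gradient of its decision function with respect to each input feature, averaged over the data. The self-contained sketch below uses a kernel ridge classifier with an RBF kernel on synthetic data as a stand-in for the SVM/fMRI pipeline of the paper; all dimensions and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic two-class data: only the first 2 of 10 features carry class information
n, d = 200, 10
y = np.repeat([-1.0, 1.0], n // 2)
X = rng.standard_normal((n, d))
X[:, 0] += 1.5 * y
X[:, 1] -= 1.0 * y

gamma, lam = 0.05, 1.0

def rbf(A, B):
    """RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge "classifier": f(x) = sum_j alpha_j k(x_j, x)
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# Gradient of f at every data point:
#   df/dx at x_i = sum_j alpha_j * k(x_j, x_i) * 2*gamma * (x_j - x_i)
diff = X[:, None, :] - X[None, :, :]            # diff[j, i, :] = x_j - x_i
grads = np.einsum("j,ji,jid->id", alpha, K, 2.0 * gamma * diff)

# Sensitivity map: mean squared gradient per feature
sensitivity_map = (grads ** 2).mean(axis=0)
print("relative sensitivity per feature:",
      np.round(sensitivity_map / sensitivity_map.max(), 2))
# The informative features (0 and 1) should dominate the map.
```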

  7. Influence of the developer structure on the sensitivity to indication at the penetrant fluid test

    International Nuclear Information System (INIS)

    Riess, N.; Stelling, H.A.

    1982-01-01

    The sensitivity to indication of a penetrant fluid test system depends essentially on the properties of the testing agents used - matched to the object conditions - and on appropriate application. Apart from influences of preliminary cleaning, the properties of the testing agent system result from the properties of the individual components, i.e. the penetrant fluid, the intermediate cleaner and the developer, and from the interaction between the individual components. Concerted matching of the individual testing agents is required. It is then shown, by means of theoretical considerations and exemplary experimental results, what fundamental interactions may be expected between a penetrant fluid contained in a flaw and the developer. In the theoretical treatment, findings from a subject field completely alien to nondestructive testing, namely pedology, were applied to the problem at hand. With respect to the water economy of soils, model concepts and estimates of the main influencing parameters - e.g. the capillary forces - are available. An attempt was made to transfer them in modified form to the mechanisms of the developing process. (orig.)

  8. Influence of the developer structure on the sensitivity to indication at the penetrant fluid test

    Energy Technology Data Exchange (ETDEWEB)

    Riess, N; Stelling, H A

    1982-04-01

    The sensitivity to indication of a penetrant fluid test system depends essentially on the properties of the testing agents used - matched to the object conditions - and on appropriate application. Apart from influences of preliminary cleaning, the properties of the testing agent system result from the properties of the individual components, i.e. the penetrant fluid, the intermediate cleaner and the developer, and from the interaction between the individual components. Concerted matching of the individual testing agents is required. It is then shown, by means of theoretical considerations and exemplary experimental results, what fundamental interactions may be expected between a penetrant fluid contained in a flaw and the developer. In the theoretical treatment, findings from a subject field completely alien to nondestructive testing, namely pedology, were applied to the problem at hand. With respect to the water economy of soils, model concepts and estimates of the main influencing parameters - e.g. the capillary forces - are available. An attempt was made to transfer them in modified form to the mechanisms of the developing process.

  9. Anxiety sensitivity predicts increased perceived exertion during a 1-mile walk test among treatment-seeking smokers.

    Science.gov (United States)

    Farris, Samantha G; Uebelacker, Lisa A; Brown, Richard A; Price, Lawrence H; Desaulniers, Julie; Abrantes, Ana M

    2017-12-01

    Smoking increases risk of early morbidity and mortality, and risk is compounded by physical inactivity. Anxiety sensitivity (fear of anxiety-relevant somatic sensations) is a cognitive factor that may amplify the subjective experience of exertion (effort) during exercise, subsequently resulting in lower engagement in physical activity. We examined the effect of anxiety sensitivity on ratings of perceived exertion (RPE) and physiological arousal (heart rate) during a bout of exercise among low-active treatment-seeking smokers. Adult daily smokers (n = 157; M age = 44.9, SD = 11.13; 69.4% female) completed the Rockport 1.0 mile submaximal treadmill walk test. RPE and heart rate were assessed during the walk test. Multi-level modeling was used to examine the interactive effect of anxiety sensitivity × time on RPE and on heart rate at five time points during the walk test. There were significant linear and cubic time × anxiety sensitivity effects for RPE. High anxiety sensitivity was associated with greater initial increases in RPE during the walk test, with stabilized ratings towards the last 5 min, whereas low anxiety sensitivity was associated with lower initial increase in RPE which stabilized more quickly. The linear time × anxiety sensitivity effect for heart rate was not significant. Anxiety sensitivity is associated with increasing RPE during moderate-intensity exercise. Persistently rising RPE observed for smokers with high anxiety sensitivity may contribute to the negative experience of exercise, resulting in early termination of bouts of prolonged activity and/or decreased likelihood of future engagement in physical activity.

  10. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over-modeling, in site characterization planning to avoid over-collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures

  11. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over modelling," in site characterization planning to avoid "over collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed

  12. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
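
    The forcing-response sensitivity described here can be reproduced for any trained model by perturbing one forcing series, re-running the model, and dividing the change in the response series by the perturbation. The sketch below builds moving-window-average inputs and uses a simple linear stand-in in place of the trained MWA-ANN; all names, window lengths and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def moving_window_average(x, window):
    """Trailing moving-window average, same length as x (front-padded with x[0])."""
    kernel = np.ones(window) / window
    padded = np.concatenate([np.full(window - 1, x[0]), x])
    return np.convolve(padded, kernel, mode="valid")

days = 3650
rainfall = rng.gamma(shape=0.5, scale=6.0, size=days)                  # mm/day (illustrative)
pumping = 50.0 + 10.0 * np.sin(2 * np.pi * np.arange(days) / 365.0)    # Mgal/day (illustrative)

def predict_level(rain, pump):
    """Stand-in for the trained MWA-ANN: water level responds to MWA forcings."""
    return (30.0
            + 0.8 * moving_window_average(rain, 30)
            + 0.2 * moving_window_average(rain, 365)
            - 0.05 * moving_window_average(pump, 180))

base = predict_level(rainfall, pumping)

# Perturbation-based sensitivity: change in response per unit change in forcing
delta = 1.0
sens_rain = (predict_level(rainfall + delta, pumping) - base) / delta
sens_pump = (predict_level(rainfall, pumping + delta) - base) / delta
print("mean sensitivity to rainfall:", round(sens_rain.mean(), 3))
print("mean sensitivity to pumping :", round(sens_pump.mean(), 3))
```

    With a trained ANN in place of the linear stand-in, the same two extra forward passes yield a full time series of sensitivities, which is how temporal changes such as drought-period behavior can be detected.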

  13. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  14. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners for conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    Full Text Available This paper describes a set of sensitivity experiments to several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography directly from the US Navy data and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling the model performance even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (asymmetric part of the geopotential) at 500 mb, where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between amplitude of the stationary waves and smoothness of the surface fields.

  16. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
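
    The correction for testing twice can be illustrated with a small sketch: the larger of two highly correlated standardized statistics is referred to their joint null distribution, here taken as bivariate normal with an assumed correlation rho (the statistics and rho are placeholders, not values from the paper):

      import numpy as np
      from scipy.stats import multivariate_normal

      def adaptive_pvalue(z_subgroup, z_all, rho):
          """One-sided p-value for max(Z1, Z2) under a bivariate normal null."""
          z_max = max(z_subgroup, z_all)
          cov = np.array([[1.0, rho], [rho, 1.0]])
          joint_cdf = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([z_max, z_max])
          return 1.0 - joint_cdf        # P(at least one statistic exceeds z_max)

      # With rho close to 1 the penalty is much milder than Bonferroni:
      print(adaptive_pvalue(2.2, 2.1, rho=0.9))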

  17. Characterization of strain rate sensitivity and activation volume using the indentation relaxation test

    International Nuclear Information System (INIS)

    Xu Baoxing; Chen Xi; Yue Zhufeng

    2010-01-01

    We present the possibility of extracting the strain rate sensitivity, activation volume and Helmholtz free energy (for dislocation activation) using just one indentation stress relaxation test, and the approach is demonstrated with polycrystalline copper. The Helmholtz free energy measured from indentation relaxation agrees well with that from the conventional compression relaxation test, which validates the proposed approach. From the indentation relaxation test, the measured indentation strain rate sensitivity exponent is found to be slightly larger, and the indentation activation volume much smaller, than their counterparts from the compression test. The results indicate the involvement of multiple dislocation mechanisms in the indentation test.

  18. A Highly Sensitive Rapid Diagnostic Test for Chagas Disease That Utilizes a Recombinant Trypanosoma cruzi Antigen

    Science.gov (United States)

    Barfield, C. A.; Barney, R. S.; Crudder, C. H.; Wilmoth, J. L.; Stevens, D. S.; Mora-Garcia, S.; Yanovsky, M. J.; Weigl, B. H.; Yanovsky, J.

    2011-01-01

    Improved diagnostic tests for Chagas disease are urgently needed. A new lateral flow rapid test for Chagas disease is under development at PATH, in collaboration with Laboratorio Lemos of Argentina, which utilizes a recombinant antigen for detection of antibodies to Trypanosoma cruzi. To evaluate the performance of this test, 375 earlier characterized serum specimens from a region where Chagas is endemic were tested using a reference test (the Ortho T. cruzi ELISA, Johnson & Johnson), a commercially available rapid test (Chagas STAT-PAK, Chembio), and the PATH–Lemos rapid test. Compared to the composite reference tests, the PATH–Lemos rapid test demonstrated an optimal sensitivity of 99.5% and specificity of 96.8%, while the Chagas STAT-PAK demonstrated a sensitivity of 95.3% and specificity of 99.5%. These results indicate that the PATH–Lemos rapid test shows promise as an improved and reliable tool for screening and diagnosis of Chagas disease. PMID:21342808
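
    For reference, the reported sensitivity and specificity are simple confusion-matrix ratios; the sketch below uses made-up 0/1 arrays and treats the composite reference result as truth:

      import numpy as np

      def sensitivity_specificity(test, reference):
          test, reference = np.asarray(test, bool), np.asarray(reference, bool)
          tp = np.sum(test & reference)          # true positives
          tn = np.sum(~test & ~reference)        # true negatives
          fn = np.sum(~test & reference)
          fp = np.sum(test & ~reference)
          return tp / (tp + fn), tn / (tn + fp)

      sens, spec = sensitivity_specificity([1, 1, 0, 1], [1, 1, 0, 0])
      print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")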

  19. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.

  20. A Measure of Cultural Competence as an Ethical Responsibility: Quick-Racial and Ethical Sensitivity Test

    Science.gov (United States)

    Sirin, Selcuk R.; Rogers-Sirin, Lauren; Collins, Brian A.

    2010-01-01

    This article presents the psychometric qualifications of a new video-based measure of school professionals' ethical sensitivity toward issues of racial intolerance in schools. The new scale, titled the Quick-Racial and Ethical Sensitivity Test (Quick-REST) is based on the ethical principles commonly shared by school-based professional…

  1. The sensitivity testing of Wilms' tumors to cytostatic agents with an autoradiographic in vitro short-term test

    International Nuclear Information System (INIS)

    Willnow, U.

    1984-01-01

    Sensitivity of 15 Wilms' tumors in children was tested towards cytostatic agents in vitro by means of an autoradiographic short-term test. Sensitivity was measured as the magnitude of the inhibition of 3H-thymidine or 3H-uridine incorporation. The test was performed with Adriamycin, Actinomycin D, Daunomycin, Bleomycin, Cyclophosphamide, Ifosfamide, Trenimon, and Arabinosylcytosine. None of the tumors is resistant to all substances; each is responsive to 2 or more drugs. The most effective drugs tested are Adriamycin, Actinomycin D and Cyclophosphamide. The tumors show a marked individual sensitivity pattern. This behavior is explained mainly by the usually high proliferative activity of Wilms' tumors. The possibilities and limits of long-term and short-term methods for sensitivity testing are discussed critically. For the evaluation of the results of in vitro testing and in vivo effectiveness the close correlation should be considered between the type of cytostatic agent and proliferation kinetics of the tumor, cytostatic agent and effect on tumor metabolism as well as the effect of the cytostatics and the nucleic acid precursors used for the short-term test. Despite the methodological limitations preclinical testing should be preferred to unselected chemotherapy. (author)

  2. A Comparison of Procedures for Content-Sensitive Item Selection in Computerized Adaptive Tests.

    Science.gov (United States)

    Kingsbury, G. Gage; Zara, Anthony R.

    1991-01-01

    This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)

  3. Improved sensitivity testing of explosives using transformed Up-Down methods

    International Nuclear Information System (INIS)

    Brown, Geoffrey W

    2014-01-01

    Sensitivity tests provide data that help establish guidelines for the safe handling of explosives. Any sensitivity test is based on assumptions to simplify the method or reduce the number of individual sample evaluations. Two common assumptions that are not typically checked after testing are 1) explosive response follows a normal distribution as a function of the applied stimulus levels and 2) the chosen test level spacing is close to the standard deviation of the explosive response function (for Bruceton Up-Down testing for example). These assumptions and other limitations of traditional explosive sensitivity testing can be addressed using Transformed Up-Down (TUD) test methods. TUD methods have been developed extensively for psychometric testing over the past 50 years and generally use multiple tests at a given level to determine how to adjust the applied stimulus. In the context of explosive sensitivity we can use TUD methods that concentrate testing around useful probability levels. Here, these methods are explained and compared to Bruceton Up-Down testing using computer simulation. The results show that the TUD methods are more useful for many cases but that they do require more tests as a consequence. For non-normal distributions, however, the TUD methods may be the only accurate assessment method.
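
    A small simulation sketch of one transformed up-down rule (2-down/1-up, which concentrates testing near the roughly 71% response level); the normal response function and its parameters are assumptions of the simulation, not measured values:

      import numpy as np
      from math import erf, sqrt

      rng = np.random.default_rng(1)

      def respond(level, mu=10.0, sigma=1.5):
          """Simulated go/no-go outcome with P(response) = Phi((level - mu) / sigma)."""
          return rng.random() < 0.5 * (1.0 + erf((level - mu) / (sigma * sqrt(2.0))))

      def two_down_one_up(start=12.0, step=0.5, n_tests=60):
          level, history, consecutive_go = start, [], 0
          for _ in range(n_tests):
              go = respond(level)
              history.append((level, go))
              if go:
                  consecutive_go += 1
                  if consecutive_go == 2:       # two responses in a row: step down
                      level, consecutive_go = level - step, 0
              else:                             # any non-response: step up
                  level, consecutive_go = level + step, 0
          return history

      levels = [lvl for lvl, _ in two_down_one_up()]
      print("mean test level after burn-in:", np.mean(levels[10:]))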

  4. Repeated patch testing to nickel during childhood does not induce nickel sensitization

    DEFF Research Database (Denmark)

    Søgaard Christiansen, Elisabeth

    2014-01-01

    Background: Previously, patch test reactivity to nickel sulphate in a cohort of unselected infants tested repeatedly at 3-72 months of age has been reported. A reproducible positive reaction at 12 and 18 months was selected as a sign of nickel sensitivity, provided a patch test with an empty Finn...

  5. Sensitivity of the improved Dutch tube diffusion test for detection of ...

    African Journals Online (AJOL)

    The sensitivity of the improved two-tube test for detection of antimicrobial residues in Kenyan milk was investigated by comparison with the commercial Delvo test SP. Suspect positive milk samples (n =244) from five milk collection centers, were analyzed with the improved two-tube and the commercial Delvo SP test as per ...

  6. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  7. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  8. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside of food. The model is simulated by COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the studied microwave power level shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant has minimal parameter sensitivity with a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
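
    A schematic of the one-at-a-time ±20% perturbation used for this kind of screening; run_model stands in for the COMSOL simulation and returns a scalar output of interest, so the function and parameter names are placeholders:

      def oat_sensitivity(run_model, nominal, fraction=0.20):
          """Relative change in output per relative change in each parameter."""
          base = run_model(nominal)
          effects = {}
          for name, value in nominal.items():
              up = dict(nominal, **{name: value * (1 + fraction)})
              down = dict(nominal, **{name: value * (1 - fraction)})
              effects[name] = (run_model(up) - run_model(down)) / (2 * fraction * base)
          return effects

      # Hypothetical usage:
      # oat_sensitivity(run_drying_model, {"ambient_temperature": 298.0,
      #                                    "gas_diffusivity": 2.6e-5,
      #                                    "evaporation_rate_constant": 1e-3})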

  9. Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1

    Science.gov (United States)

    Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong

    2017-05-01

    Turbulent drag caused by subgrid orographic form drag has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography compared to the simulation without subgrid orographic effect. It is shown that the TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed by the BBW04 scheme agrees better with the ERA-Interim reanalysis and is more sensitive to complex orography as a direct method. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.

  10. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  11. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
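
    For orientation, first-order Sobol' indices can be estimated with a pick-and-freeze Monte Carlo scheme; in the paper this evaluation would be carried out on the Gaussian-process metamodel of each wavelet coefficient rather than on the full numerical model, and the test function below is only an analytic check:

      import numpy as np

      def first_order_sobol(model, n_inputs, n_samples=20000, rng=None):
          """Pick-and-freeze estimate of first-order Sobol' indices
          for independent inputs uniform on [0, 1]."""
          rng = rng or np.random.default_rng(0)
          A = rng.random((n_samples, n_inputs))
          B = rng.random((n_samples, n_inputs))
          yA, yB = model(A), model(B)
          var_y = np.concatenate([yA, yB]).var()
          S = np.empty(n_inputs)
          for i in range(n_inputs):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                   # resample only input i
              S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
          return S

      # For y = x1 + 2*x2 + 0.5*x3 the exact indices are about 0.19, 0.76, 0.05.
      f = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]
      print(first_order_sobol(f, 3))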

  12. Sensitive KIT D816V mutation analysis of blood as a diagnostic test in mastocytosis

    DEFF Research Database (Denmark)

    Kielsgaard Kristensen, Thomas; Vestergaard, Hanne; Bindslev-Jensen, Carsten

    2014-01-01

    The recent progress in sensitive KIT D816V mutation analysis suggests that mutation analysis of peripheral blood (PB) represents a promising diagnostic test in mastocytosis. However, there is a need for systematic assessment of the analytical sensitivity and specificity of the approach in order to establish its value in clinical use. We therefore evaluated sensitive KIT D816V mutation analysis of PB as a diagnostic test in an entire case-series of adults with mastocytosis. We demonstrate for the first time that by using a sufficiently sensitive KIT D816V mutation analysis, it is possible to detect the mutation in PB in nearly all adult mastocytosis patients. The mutation was detected in PB in 78 of 83 systemic mastocytosis (94%) and 3 of 4 cutaneous mastocytosis patients (75%). The test was 100% specific as determined by analysis of clinically relevant control patients who all tested negative. Mutation

  13. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  14. 46 CFR 154.431 - Model test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Model test. 154.431 Section 154.431 Shipping COAST GUARD... Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints...(c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank...

  15. 46 CFR 154.449 - Model test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Model test. 154.449 Section 154.449 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF... § 154.449 Model test. The following analyzed data of a model test of structural elements for independent...

  16. Increased Sensitization to Mold Allergens Measured by Intradermal Skin Testing following Hurricanes.

    Science.gov (United States)

    Saporta, Diego; Hurst, David

    2017-01-01

    Objective. To report on changes in sensitivity to mold allergens determined by changes in intradermal skin testing reactivity, after exposure to two severe hurricanes. Methods. A random, retrospective allergy charts review divided into 2 groups of 100 patients each: Group A, patients tested between 2003 and 2010 prior to the hurricanes, and Group B, patients tested in 2014 and 2015 following the hurricanes. Reactivity to eighteen molds was determined by intradermal skin testing. Test results, age, and respiratory symptoms were recorded. A chi-square test determined reactivity/sensitivity differences between groups. Results. Posthurricane patients had 34.6 times more positive results than prehurricane patients, and sensitivity to molds was significantly greater after the hurricanes (p values garbled in the source record). This supports climatologists' hypothesis that environmental changes resulting from hurricanes can be a health risk as reflected in increased allergic sensitivities and symptoms and has significant implications for physicians treating patients from affected areas.

  17. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents “contribution to sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have a great potential to display graphically sensitivity information given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatter-plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance, by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
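
    A sketch of how the contribution curves can be computed from an existing Monte Carlo sample, with no additional model runs; x_i is one input column and y the corresponding scalar outputs (both hypothetical here):

      import numpy as np

      def contribution_curves(x_i, y):
          """Return (cumulative sample fraction, CSM, CSV) for one input."""
          order = np.argsort(x_i)
          y_sorted = y[order]
          q = np.arange(1, len(y) + 1) / len(y)
          csm = np.cumsum(y_sorted) / y.sum()                 # assumes y.sum() != 0
          csv = np.cumsum((y_sorted - y.mean()) ** 2) / ((y - y.mean()) ** 2).sum()
          return q, csm, csv

    A curve that departs strongly from the 1:1 diagonal flags an input whose extreme ranges drive the output mean or variance, which is exactly the information the plot is meant to convey.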

  18. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Full Text Available Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.

  19. Integrating non-animal test information into an adaptive testing strategy - skin sensitization proof of concept case.

    Science.gov (United States)

    Jaworska, Joanna; Harol, Artsiom; Kern, Petra S; Gerberick, G Frank

    2011-01-01

    There is an urgent need to develop data integration and testing strategy frameworks allowing interpretation of results from animal alternative test batteries. To this end, we developed a Bayesian Network Integrated Testing Strategy (BN ITS) with the goal to estimate skin sensitization hazard as a test case of previously developed concepts (Jaworska et al., 2010). The BN ITS combines in silico, in chemico, and in vitro data related to skin penetration, peptide reactivity, and dendritic cell activation, and guides testing strategy by Value of Information (VoI). The approach offers novel insights into testing strategies: there is no one best testing strategy, but the optimal sequence of tests depends on information at hand, and is chemical-specific. Thus, a single generic set of tests as a replacement strategy is unlikely to be most effective. BN ITS offers the possibility of evaluating the impact of generating additional data on the target information uncertainty reduction before testing is commenced.

  20. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, where the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component as opposed to the brute force-type methods which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black-box as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
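
    The chain-rule idea can be sketched as follows, assuming each black-box component exposes a vector-Jacobian (adjoint) product; one adjoint sweep per final response is the minimum referred to in the abstract, and the function names are illustrative:

      import numpy as np

      def chained_adjoint(vjp_f, vjp_g, n_responses):
          """Sensitivities dR/dx for R = g(f(x)), given adjoint products of f and g.

          vjp_f(w) returns w @ (df/dx); vjp_g(v) returns v @ (dg/du).
          Output: n_responses x n_inputs matrix, one adjoint sweep per response.
          """
          rows = []
          for r in range(n_responses):
              seed = np.zeros(n_responses)
              seed[r] = 1.0                       # select response r
              rows.append(vjp_f(vjp_g(seed)))     # back-propagate through g, then f
          return np.vstack(rows)

      # Linear check: f(x) = A x, g(u) = B u, so dR/dx should equal B @ A.
      A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, 0.0]])
      B = np.array([[1.0, 0.0, 1.0], [0.0, 2.0, 0.0]])
      print(chained_adjoint(lambda w: w @ A, lambda v: v @ B, n_responses=2))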

  1. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension

  2. A novel hypothesis on the sensitivity of the fecal occult blood test: Results of a joint analysis of 3 randomized controlled trials.

    Science.gov (United States)

    Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F

    2009-06-01

    Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.

  3. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    Science.gov (United States)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach in real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.

  4. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer Hougaard

    2012-01-01

    Classification models are becoming increasingly popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting the underlying neural encoding of an experiment defining multiple brain states. In this relation there is a great desire for the researcher to generate brain maps that highlight brain locations of importance to the classifier's decisions. Based on sensitivity analysis, we develop further procedures for model... direction the individual locations influence the classification. We illustrate the visualization procedure on real data from a simple functional magnetic resonance imaging experiment....
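
    A brute-force sketch of a signed sensitivity map obtained by central differences; score stands in for the trained classifier's decision function on one image, and in practice the gradient would be obtained analytically from the model rather than by perturbing every voxel:

      import numpy as np

      def signed_sensitivity_map(score, image, eps=1e-3):
          """Sign and magnitude of d(decision score)/d(voxel) for one image."""
          sens = np.zeros_like(image, dtype=float)
          flat = sens.ravel()
          x = image.astype(float).ravel()
          for k in range(x.size):
              up, down = x.copy(), x.copy()
              up[k] += eps
              down[k] -= eps
              flat[k] = (score(up.reshape(image.shape)) -
                         score(down.reshape(image.shape))) / (2 * eps)
          return sens   # positive values push the decision toward one class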

  5. 12th Rencontres du Vietnam : High Sensitivity Experiments Beyond the Standard Model

    CERN Document Server

    2016-01-01

    The goal of this workshop is to gather researchers, theoreticians, experimentalists and young scientists searching for physics beyond the Standard Model of particle physics using high sensitivity experiments. The standard model has been very successful in describing the particle physics world; the Higgs-Englert-Brout boson discovery is its last major discovery. Complementary to the high energy frontier explored at colliders, real opportunities for discovery exist at the precision frontier, testing fundamental symmetries and tracking small SM deviations.

  6. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion on accuracy of AMO data to be used in the plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, temporal and spatial scales go from fs to years and from nm’s to m’s and more, respectively. The key challenge for the theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of the IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, photons, in many-body environment, including complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for a particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses a need for immediate sensitivity analysis feedback of the plasma modeling and material design communities. Thus, the data provision to the plasma modeling community is a “two-way road” as long as the accuracy of the data is considered, requiring close interactions of the AMO and plasma modeling communities.

  7. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    Science.gov (United States)

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary

  8. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads

  9. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?: NUDGING AND MODEL SENSITIVITIES

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Wan, Hui [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Zhang, Kai [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA

    2016-07-10

    Efficient simulation strategies are crucial for the development and evaluation of high resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrative examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on 3 factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and cloud, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise, and reasonably reproduces the response of precipitation and cloud forcing to parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.

  10. A reactive transport model for mercury fate in contaminated soil--sensitivity analysis.

    Science.gov (United States)

    Leterme, Bertrand; Jacques, Diederik

    2015-11-01

    We present a sensitivity analysis of a reactive transport model of mercury (Hg) fate in contaminated soil systems. The one-dimensional model, presented in Leterme et al. (2014), couples water flow in variably saturated conditions with Hg physico-chemical reactions. The sensitivity of Hg leaching and volatilisation to parameter uncertainty is examined using the elementary effect method. A test case is built using a hypothetical 1-m depth sandy soil and a 50-year time series of daily precipitation and evapotranspiration. Hg anthropogenic contamination is simulated in the topsoil by separately considering three different sources: cinnabar, non-aqueous phase liquid and aqueous mercuric chloride. The model sensitivity to a set of 13 input parameters is assessed, using three different model outputs (volatilized Hg, leached Hg, Hg still present in the contaminated soil horizon). Results show that dissolved organic matter (DOM) concentration in soil solution and the binding constant to DOM thiol groups are critical parameters, as well as parameters related to Hg sorption to humic and fulvic acids in solid organic matter. Initial Hg concentration is also identified as a sensitive parameter. The sensitivity analysis also brings out non-monotonic model behaviour for certain parameters.
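
    A sketch of the elementary-effect (Morris) screening referred to above, on parameters scaled to the unit interval; run_model is a placeholder for the reactive transport simulation and the trajectory settings are arbitrary:

      import numpy as np

      def morris_screening(run_model, n_params, n_trajectories=20, delta=0.25, rng=None):
          """Return (mu*, sigma) summarising importance and interaction/nonlinearity."""
          rng = rng or np.random.default_rng(0)
          effects = np.zeros((n_trajectories, n_params))
          for t in range(n_trajectories):
              x = rng.random(n_params)
              y0 = run_model(x)
              for i in rng.permutation(n_params):
                  x_new = x.copy()
                  x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
                  y1 = run_model(x_new)
                  effects[t, i] = (y1 - y0) / (x_new[i] - x[i])
                  x, y0 = x_new, y1               # continue the trajectory
          return np.abs(effects).mean(axis=0), effects.std(axis=0)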

  11. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    International Nuclear Information System (INIS)

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo) in various environments that are expected in the vicinity of the waste package, by separate equations. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainties in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000 yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs

  12. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
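
    A compact sketch of the DELSA idea: a derivative-based first-order sensitivity share is computed locally at many points sampled across the feasible parameter space, giving a distribution of importance per parameter; the uniform prior variances and step size below are assumptions, not values from the paper:

      import numpy as np

      def delsa(run_model, lower, upper, n_points=200, rel_step=1e-3, rng=None):
          rng = rng or np.random.default_rng(0)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          prior_var = (upper - lower) ** 2 / 12.0      # variance of a uniform prior
          n_params = lower.size
          shares = np.zeros((n_points, n_params))
          for k in range(n_points):
              theta = lower + rng.random(n_params) * (upper - lower)
              y0 = run_model(theta)
              grad = np.zeros(n_params)
              for j in range(n_params):
                  step = rel_step * (upper[j] - lower[j])
                  theta_p = theta.copy()
                  theta_p[j] += step
                  grad[j] = (run_model(theta_p) - y0) / step
              local_var = grad ** 2 * prior_var
              shares[k] = local_var / max(local_var.sum(), 1e-30)
          return shares   # one row of local first-order importances per sample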

  13. Parametric Sensitivity Tests—European Polymer Electrolyte Membrane Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack's performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen: relative humidity, temperature, pressure, and stoichiometry at varying current density. Furthermore, procedures for polarization curve recording were also tested both in ascending and descending current directions.

  14. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model presents survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.

  15. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill's, sigmoid), thereby embodying biological mechanisms responsible for the system robustness to external perturbations. However, if a sloppy model is used for the prediction of the system behavior at an altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce Relative Sensitivity Analysis, a method for evaluating predictive power under parameter estimation uncertainty. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method makes it possible to uncover the sources of incorrect predictions and proposes a way to overcome the estimation uncertainties.

  16. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA

  17. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]; Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]; Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 ('Systems analysis at the molecular scale', by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, our review found little more than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome the shortcomings of OAT and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the ranking of factors by importance unambiguous. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
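
    As a hedged illustration of the variance-based practices recommended above (not code from the paper), the following Python sketch estimates first-order Sobol' indices with a standard pick-and-freeze estimator for the Ishigami test function, a common non-additive example where a one-factor-at-a-time scan around a nominal point would misjudge factor importance.

        # Illustrative sketch only: first-order Sobol' indices by a pick-and-freeze
        # Monte Carlo estimator on the Ishigami function (a standard SA benchmark).
        import numpy as np

        rng = np.random.default_rng(0)

        def model(x):
            """Ishigami function (a = 7, b = 0.1): strongly non-additive."""
            x1, x2, x3 = x[..., 0], x[..., 1], x[..., 2]
            return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

        n, d = 50_000, 3
        A = rng.uniform(-np.pi, np.pi, size=(n, d))
        B = rng.uniform(-np.pi, np.pi, size=(n, d))
        fA, fB = model(A), model(B)
        var_y = np.var(np.concatenate([fA, fB]))

        # approximate reference values for a = 7, b = 0.1: S1 ~ 0.31, S2 ~ 0.44, S3 ~ 0
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # vary only the i-th factor
            s1 = np.mean(fB * (model(ABi) - fA)) / var_y
            print(f"S{i + 1} ~= {s1:.3f}")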

  18. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Highlights: • An FEM model of a prestressed concrete containment with a steel liner was created. • A sensitivity analysis of changes in geometry and loads was conducted. • The steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • The prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational capabilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, the prestressing process, temperature changes, material nonlinearity, as well as the density of the finite element mesh is assessed for the main stages of the life cycle of the containment. Although the modeling adjustments did not produce any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.

  19. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

    In helicopter transmission systems, it is important to monitor and track the evolution of tooth damage using multiple sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified by simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective in reducing the test cost and improving the system’s reliability.
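
    A minimal Python sketch of the two statistics named above (the Mann–Kendall trend statistic and Sen's slope), applied to an invented condition-indicator series; this only illustrates the ingredients and is not the authors' implementation.

        # Illustrative sketch only: Mann-Kendall trend statistic and Sen's slope for a
        # synthetic condition-indicator (CI) series invented for demonstration.
        import numpy as np

        def mann_kendall_z(x):
            """Return the Mann-Kendall S statistic and its normal approximation Z."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0      # no tie correction in this sketch
            if s > 0:
                z = (s - 1) / np.sqrt(var_s)
            elif s < 0:
                z = (s + 1) / np.sqrt(var_s)
            else:
                z = 0.0
            return s, z

        def sens_slope(x):
            """Median of all pairwise slopes (Sen's slope estimator)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
            return np.median(slopes)

        rng = np.random.default_rng(1)
        ci = 0.05 * np.arange(50) + rng.normal(0.0, 0.2, 50)   # slowly growing damage indicator
        s, z = mann_kendall_z(ci)
        print(f"Mann-Kendall S = {s}, Z = {z:.2f}, Sen's slope = {sens_slope(ci):.3f}")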

  20. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
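
    As a rough, self-contained illustration of the kind of comparison described (not the UNI-DEM experiments themselves), the sketch below contrasts plain Monte Carlo with a scrambled Sobol' sequence on a smooth toy integrand over the unit hypercube; it assumes SciPy's stats.qmc module is available, and the integrand and dimension are invented.

        # Rough illustration: plain Monte Carlo vs. Sobol'-sequence quasi-Monte Carlo
        # on a smooth 6-dimensional integrand whose exact integral over [0,1]^6 is 1.
        import numpy as np
        from scipy.stats import qmc

        def integrand(x):
            # prod_i (2 * x_i) integrates to 1 over the unit hypercube
            return np.prod(2.0 * x, axis=1)

        dim, n = 6, 2**12                      # power-of-two sample size for the Sobol' rule
        rng = np.random.default_rng(42)

        mc_points = rng.random((n, dim))
        sobol_points = qmc.Sobol(d=dim, scramble=True, seed=42).random(n)

        mc_err = abs(integrand(mc_points).mean() - 1.0)
        qmc_err = abs(integrand(sobol_points).mean() - 1.0)
        print(f"plain MC error:   {mc_err:.2e}")
        print(f"Sobol' QMC error: {qmc_err:.2e}")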

  1. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  2. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  3. The Model Identification Test: A Limited Verbal Science Test

    Science.gov (United States)

    McIntyre, P. J.

    1972-01-01

    Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)

  4. Theoretical Models, Assessment Frameworks and Test Construction.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  5. Healthy volunteers can be phenotyped using cutaneous sensitization pain models

    DEFF Research Database (Denmark)

    Werner, Mads U; Petersen, Karin; Rowbotham, Michael C

    2013-01-01

    Human experimental pain models leading to development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine whether the areas of secondary hyperalgesia were consistently robust enough to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models.

  6. Geochemical Testing And Model Development - Residual Tank Waste Test Plan

    International Nuclear Information System (INIS)

    Cantrell, K.J.; Connelly, M.P.

    2010-01-01

    This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in long-term performance assessment and conceptual model development.

  7. The lymphocyte transformation test for the diagnosis of drug allergy: sensitivity and specificity.

    Science.gov (United States)

    Nyfeler, B; Pichler, W J

    1997-02-01

    The diagnosis of a drug allergy is mainly based upon a very detailed history and the clinical findings. In addition, several in vitro or in vivo tests can be performed to demonstrate a sensitization to a certain drug. One of the in vitro tests is the lymphocyte transformation test (LTT), which can reveal a sensitization of T-cells by an enhanced proliferative response of peripheral blood mononuclear cells to a certain drug. To evaluate the sensitivity and specificity of the LTT, 923 case histories of patients with suspected drug allergy in whom an LTT was performed were retrospectively analysed. Based on the history and provocation tests, the probability (P) of a drug allergy was estimated to be > 0.9, 0.5-0.9, 0.1-0.5 or < 0.1. Of the patients with a very probable drug allergy (P > 0.9), 78% had a positive LTT, which indicates a sensitivity of 78%. If allergies to betalactam antibiotics were analysed separately, the sensitivity was 74.4%. Fifteen of 102 patients in whom a classical drug allergy could be excluded (P < 0.1) had a positive LTT, corresponding to a specificity of 85%; in some of these cases a sensitization could be demonstrated as well (i.e. hen's egg lysozyme, 7/7). In 632 of the 923 cases, skin tests were also performed (scratch and/or epicutaneous), for which we found a lower sensitivity than for the LTT (64%), while the specificity was the same (85%). Although our data are somewhat biased by the high number of penicillin allergies and cannot be generalized to drug allergies caused by other compounds, we conclude that the LTT is a useful diagnostic test in drug allergies, able to support the diagnosis of a drug allergy and to pinpoint the relevant drug.

  8. Physiological assessment of sensitivity of noninvasive testing for coronary artery disease

    International Nuclear Information System (INIS)

    Simonetti, I.; Rezai, K.; Rossen, J.D.; Winniford, M.D.; Talman, C.L.; Hollenberg, M.; Kirchner, P.T.; Marcus, M.L.

    1991-01-01

    The sensitivity of three noninvasive tests for coronary artery disease was assessed by means of quantitative indexes of disease severity in three different groups of patients. The overall population consisted of 110 subjects with limited coronary artery disease and no myocardial infarction. Planar dipyridamole- 201 Tl scintigraphy was evaluated in 31 patients, computer-assisted exercise treadmill in 28, and high-dose dipyridamole echocardiography testing in 51. Sensitivity was assessed by rigorous gold standards to define disease severity, such as measurement of minimum cross-sectional area and percent area of stenosis, by quantitative computerized coronary angiography (Brown/Dodge method). On the basis of the results of previous studies, the presence of physiologically significant coronary artery disease was indicated by a stenotic minimum cross-sectional area (MCSA) of less than 2.0 mm 2 or a greater than 75% area of stenosis. With MCSA as the gold standard, dipyridamole- 201 Tl scintigraphy, computerized exercise treadmill, and dipyridamole echocardiography testing showed sensitivities of 52%, 54%, and 61%, respectively, in the three different patient cohorts enrolled. With percent area of stenosis as the gold standard, the sensitivity figures obtained for dipyridamole- 201 Tl, computerized exercise treadmill, and dipyridamole echocardiography testing were 64%, 54%, and 69%, respectively. For each of the three tests, sensitivity increased with increasing lesion severity. Sensitivity was also better in patients with left anterior descending coronary (LAD) disease when compared with patients with left circumflex or right coronary artery disease. Results of these studies demonstrate that in patients with limited coronary artery disease none of the tests evaluated is definitely superior in sensitivity

  9. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    Science.gov (United States)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more

  10. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  11. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  12. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  13. Sensitivity and specificity of the 3-item memory test in the assessment of post traumatic amnesia.

    Science.gov (United States)

    Andriessen, Teuntje M J C; de Jong, Ben; Jacobs, Bram; van der Werf, Sieberen P; Vos, Pieter E

    2009-04-01

    To investigate how the type of stimulus (pictures or words) and the method of reproduction (free recall or recognition after a short or a long delay) affect the sensitivity and specificity of a 3-item memory test in the assessment of post traumatic amnesia (PTA). Daily testing was performed in 64 consecutively admitted traumatic brain injured patients, 22 orthopedically injured patients and 26 healthy controls until criteria for resolution of PTA were reached. Subjects were randomly assigned to a test with visual or verbal stimuli. Short delay reproduction was tested after an interval of 3-5 minutes, long delay reproduction was tested after 24 hours. Sensitivity and specificity were calculated over the first 4 test days. The 3-word test showed higher sensitivity than the 3-picture test, while specificity of the two tests was equally high. Free recall was a more effortful task than recognition for both patients and controls. In patients, a longer delay between registration and recall resulted in a significant decrease in the number of items reproduced. Presence of PTA is best assessed with a memory test that incorporates the free recall of words after a long delay.

  14. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible use of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs only slightly. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The
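
    The following schematic Python sketch (an invented toy problem, not the authors' reactor model or GA code) illustrates the archive idea: a small real-coded GA fits two parameters of a toy exponential model, and the spread of each parameter among the per-generation best solutions hints at which parameter stabilizes first, i.e. which one the objective function is most sensitive to.

        # Schematic sketch of the archive idea (invented toy problem): a real-coded GA
        # fits the two parameters of y = a*exp(-b*t); the spread of each parameter in
        # the per-generation best solutions shows which one stabilizes first.
        import numpy as np

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 5.0, 40)
        true_a, true_b = 2.0, 0.3                                   # invented "true" values
        data = true_a * np.exp(-true_b * t) + rng.normal(0.0, 0.02, t.size)

        def neg_mse(ind):
            a, b = ind
            return -np.mean((a * np.exp(-b * t) - data) ** 2)

        pop_size, n_gen = 60, 40
        low, high = np.array([0.0, 0.0]), np.array([5.0, 1.0])
        pop = rng.uniform(low, high, size=(pop_size, 2))
        archive = []                                                # best individual per generation

        for gen in range(n_gen):
            scores = np.array([neg_mse(ind) for ind in pop])
            archive.append(pop[scores.argmax()].copy())
            # binary tournament selection
            i, j = rng.integers(0, pop_size, pop_size), rng.integers(0, pop_size, pop_size)
            parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
            # arithmetic crossover with a shuffled mate, then Gaussian mutation
            mates = parents[rng.permutation(pop_size)]
            w = rng.random((pop_size, 1))
            pop = np.clip(w * parents + (1.0 - w) * mates
                          + rng.normal(0.0, 0.02, (pop_size, 2)), low, high)

        archive = np.array(archive)
        for k, name in enumerate(("a", "b")):
            print(f"spread of best '{name}' over the last 20 generations: "
                  f"{archive[-20:, k].std():.4f}")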

  15. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    The growing complexity of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand an increase in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  16. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
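
    A toy sketch in the same spirit (not the published model; the population size, firing-rate gain and amplitudes are invented) simulates Poisson spike counts of a neural population driven by ICMS and converts the separation between stimulated and catch trials into an ideal-observer detection prediction via d'.

        # Toy sketch only: Poisson spike counts of a hypothetical neural population
        # driven by an ICMS pulse train, with detectability summarized by an
        # ideal-observer d' statistic.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)

        def population_counts(amplitude_ua, n_trials, n_neurons=50, base_rate=2.0, gain=0.15):
            # Hypothetical activation: each neuron's rate grows linearly with amplitude.
            rates = base_rate + gain * amplitude_ua * rng.random(n_neurons)
            return rng.poisson(rates, size=(n_trials, n_neurons)).sum(axis=1)

        n_trials = 2000
        catch = population_counts(0.0, n_trials)                 # stimulus-absent trials
        for amp in (5, 10, 20, 40):                              # pulse amplitude, microamps
            stim = population_counts(amp, n_trials)
            pooled_sd = np.sqrt(0.5 * (stim.var() + catch.var()))
            d_prime = (stim.mean() - catch.mean()) / pooled_sd
            p_correct = norm.cdf(d_prime / np.sqrt(2.0))         # 2AFC ideal-observer performance
            print(f"{amp:3d} uA: d' = {d_prime:5.2f}, predicted 2AFC performance = {p_correct:.2f}")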

  17. Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions

    Directory of Open Access Journals (Sweden)

    Sebastian Bab

    2012-04-01

    The concept of dynamic coalitions (also virtual organizations) describes the temporary interconnection of autonomous agents who share information or resources in order to achieve a common goal. Through modern technologies these coalitions may form across company, organization and system borders. Therefore, questions of access control and security are of vital significance for the architectures supporting these coalitions. In this paper, we present our first steps towards a formal framework for modeling and verifying the design of privacy-sensitive dynamic coalition infrastructures and their processes. In order to do so we extend existing dynamic coalition modeling approaches with an access-control concept, which manages access to information through policies. Furthermore we consider the processes underlying these coalitions and present first steps towards formalizing these processes. As a result of the present paper we illustrate the usefulness of the Abstract State Machine (ASM) method for this task. We demonstrate a formal treatment of privacy-sensitive dynamic coalitions by two example ASMs which model certain access control situations. A logical consideration of these ASMs can lead to a better understanding and a verification of the ASMs with respect to the aspired specification.

  18. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetational evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a reasonably good capability of the model to represent the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  19. Comparison of the sensitivity of typhi dot test with blood culture in typhoid

    Energy Technology Data Exchange (ETDEWEB)

    Rizvi, Q [Hamdard College of Medicine, Karachi (Pakistan). Dept. of Pharmacology

    2006-10-15

    To evaluate the sensitivity of the Typhi Dot test in comparison to blood culture for the diagnosis of typhoid fever in our setup. Fifty patients who fulfilled the clinical criteria of typhoid fever were included. The data of all the patients were documented, and they were submitted to the Typhi Dot and blood culture tests, apart from other routine investigations. Out of the total 50 patients, 47 (94%) had a blood culture positive for the typhoid bacillus, while in 49 (98%) the Typhi Dot test was positive. Two patients who were found positive on the Typhi Dot test gave negative results on blood culture. One patient with the signs and symptoms of typhoid fever was found positive neither on the Typhi Dot test nor on blood culture. There was no significant difference between the results of blood culture and the Typhi Dot test in the diagnosis of typhoid fever. However, the Typhi Dot test has the advantages of being less expensive and quicker in giving results, with excellent sensitivity. (author)

  20. Comparison of the sensitivity of typhi dot test with blood culture in typhoid

    International Nuclear Information System (INIS)

    Rizvi, Q.

    2006-01-01

    To evaluate the sensitivity of the Typhi Dot test in comparison to blood culture for the diagnosis of typhoid fever in our setup. Fifty patients who fulfilled the clinical criteria of typhoid fever were included. The data of all the patients were documented, and they were submitted to the Typhi Dot and blood culture tests, apart from other routine investigations. Out of the total 50 patients, 47 (94%) had a blood culture positive for the typhoid bacillus, while in 49 (98%) the Typhi Dot test was positive. Two patients who were found positive on the Typhi Dot test gave negative results on blood culture. One patient with the signs and symptoms of typhoid fever was found positive neither on the Typhi Dot test nor on blood culture. There was no significant difference between the results of blood culture and the Typhi Dot test in the diagnosis of typhoid fever. However, the Typhi Dot test has the advantages of being less expensive and quicker in giving results, with excellent sensitivity. (author)

  1. Speech-in-noise screening tests by internet, part 3: test sensitivity for uncontrolled parameters in domestic usage

    NARCIS (Netherlands)

    Leensen, Monique C. J.; Dreschler, Wouter A.

    2013-01-01

    The online speech-in-noise test 'Earcheck' is sensitive for noise-induced hearing loss (NIHL). This study investigates effects of uncontrollable parameters in domestic self-screening, such as presentation level and transducer type, on speech reception thresholds (SRTs) obtained with Earcheck.

  2. Comparison of sensitivity of quantiferon-tb gold test and tuberculin skin test in active pulmonary tuberculosis

    International Nuclear Information System (INIS)

    Khalil, K.F.; Ambreen, A.; Butt, T.

    2013-01-01

    Objective: To compare the sensitivity of the tuberculin skin test (TST) and the QuantiFERON-TB Gold test (QFT-G) in active pulmonary tuberculosis. Study Design: Analytical study. Place and Duration of Study: Department of Pulmonology, Fauji Foundation Hospital, Rawalpindi, from July 2011 to January 2012. Methodology: The QuantiFERON-TB Gold test (QFT-G) was evaluated and compared with the tuberculin skin test (TST) in 50 cases of active pulmonary tuberculosis in whom tuberculous infection was suspected on clinical, radiological and microbiological grounds. Sensitivity was determined against positive growth of Mycobacterium tuberculosis. Results: Out of 50 cases, 43 were females and 7 were males. The mean age was 41.84 ± 19.03 years. The sensitivity of QFT-G was 80% while that of TST was 28%. Conclusion: QFT-G has a much higher sensitivity than TST for active pulmonary tuberculosis. It is unaffected by prior BCG administration and prior exposure to atypical mycobacteria. A positive QFT-G result can be an adjunct to diagnosis in patients having clinical and radiological data compatible with pulmonary tuberculosis. (author)

  3. Short ensembles: An Efficient Method for Discerning Climate-relevant Sensitivities in Atmospheric General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun

    2014-09-08

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.

  4. Assessing contaminant sensitivity of endangered and threatened aquatic species: Part III. Effluent toxicity tests

    Science.gov (United States)

    Dwyer, F.J.; Hardesty, D.K.; Henke, C.E.; Ingersoll, C.G.; Whites, D.W.; Augspurger, T.; Canfield, T.J.; Mount, D.R.; Mayer, F.L.

    2005-01-01

    Toxicity tests using standard effluent test procedures described by the U.S. Environmental Protection Agency were conducted with Ceriodaphnia dubia, fathead minnows (Pimephales promelas), and seven threatened and endangered (listed) fish species from four families: (1) Acipenseridae: shortnose sturgeon (Acipenser brevirostrum); (2) Catostomidae: razorback sucker (Xyrauchen texanus); (3) Cyprinidae: bonytail chub (Gila elegans), Cape Fear shiner (Notropis mekistocholas), Colorado pikeminnow (Ptychocheilus lucius), and spotfin chub (Cyprinella monacha); and (4) Poeciliidae: Gila topminnow (Poeciliopsis occidentalis). We conducted 7-day survival and growth studies with embryo-larval fathead minnows and analogous exposures using the listed species. Survival and reproduction were also determined with C. dubia. Tests were conducted with carbaryl, ammonia, or a simulated effluent complex mixture of carbaryl, copper, 4-nonylphenol, pentachlorophenol and permethrin in equitoxic proportions. In addition, Cape Fear shiners and spotfin chub were tested using diazinon, copper, and chlorine. Toxicity tests were also conducted with field-collected effluents from domestic or industrial facilities. Bonytail chub and razorback suckers were tested with effluents collected in Arizona, whereas effluent samples collected from North Carolina were tested with Cape Fear shiner, spotfin chub, and shortnose sturgeon. The fathead minnow 7-day effluent test was often a reliable estimator of toxic effects on the listed fishes. However, in 21% of the tests, a listed species was more sensitive than fathead minnows. Which species was more sensitive varied by test, so that usually no species was always more or less sensitive than fathead minnows. Only the Gila topminnow was consistently less sensitive than the fathead minnow. Listed fish species were protected 96% of the time when results for both fathead minnows and C. dubia were considered, thus reinforcing the value of standard whole

  5. An integrated electrochemical device based on immunochromatographic test strip and enzyme labels for sensitive detection of disease-related biomarkers

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Zhexiang; Wang, Jun; Wang, Hua; Li, Yao Q.; Lin, Yuehe

    2012-05-30

    A novel electrochemical biosensing device that integrates an immunochromatographic test strip and a screen-printed electrode (SPE) connected to a portable electrochemical analyzer was presented for rapid, sensitive, and quantitative detection of disease-related biomarker in human blood samples. The principle of the sensor is based on sandwich immunoreactions between a biomarker and a pair of its antibodies on the test strip, followed by highly sensitive square-wave voltammetry (SWV) detection. Horseradish peroxidase (HRP) was used as a signal reporter for electrochemical readout. Hepatitis B surface antigen (HBsAg) was employed as a model protein biomarker to demonstrate the analytical performance of the sensor in this study. Some critical parameters governing the performance of the sensor were investigated in detail. The sensor was further utilized to detect HBsAg in human plasma with an average recovery of 91.3%. In comparison, a colorimetric immunochromatographic test strip assay (ITSA) was also conducted. The result shows that the SWV detection in the electrochemical sensor is much more sensitive for the quantitative determination of HBsAg than the colorimetric detection, indicating that such a sensor is a promising platform for rapid and sensitive point-of-care testing/screening of disease-related biomarkers in a large population

  6. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example the geometry and volumetric distribution of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancements to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al.), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating the fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
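
    As a simplified, hypothetical illustration of the probabilistic scheme described above (the load and strength distributions below are invented and are not the BFxRM values), one can sample an applied load and a bone failure load, form the fracture risk index FRI = applied load / bone strength, and read the fracture probability off the fraction of samples with FRI > 1.

        # Simplified sketch of the probabilistic FRI idea; the distributions are
        # invented placeholders, not the published BFxRM parameters.
        import numpy as np

        rng = np.random.default_rng(11)
        n = 100_000

        applied_load_n = rng.lognormal(mean=np.log(2500.0), sigma=0.25, size=n)   # fall load, N
        bone_strength_n = rng.normal(loc=4000.0, scale=600.0, size=n)             # failure load, N

        fri = applied_load_n / bone_strength_n
        p_fracture = np.mean(fri > 1.0)
        print(f"mean FRI = {fri.mean():.2f}, P(fracture) = {p_fracture:.3f}")

        # Crude check of which input spread dominates the output spread:
        for name, sample in [("applied load", applied_load_n), ("bone strength", bone_strength_n)]:
            print(name, "correlation with FRI:", np.corrcoef(sample, fri)[0, 1].round(2))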

  7. On sensitivity of gamma families to the model of nuclear interaction

    International Nuclear Information System (INIS)

    Krys, A.; Tomaszewski, A.; Wrotniak, J.A.

    1980-01-01

    Five different models of nuclear interaction were used in a Monte Carlo simulation of nuclear and electromagnetic showers in the atmosphere. The gamma families obtained from this simulation were processed in a way analogous to that employed in the analysis of the Pamir experimental results. The sensitivity of the observed pattern to the nuclear interaction model assumptions was investigated. Such sensitivity, though not a strong one, was found. In the case of longitudinal (or energy-related) family characteristics, the changes in nuclear interaction would have to be really large if they were to be reflected in the experimental data, given all the possible methodological errors. The transverse characteristics of gamma families are more sensitive to the assumed transverse momentum distribution, but they also feel the longitudinal features of nuclear interaction. Additionally, the dependence of the observed family pattern on some methodological effects (resolving power of the X-ray film, radial cut-off and energy underestimation) was tested. (author)

  8. Evaluation of sensitivity and specificity of bone marrow trephine biopsy tests in an Indian teaching hospital

    Directory of Open Access Journals (Sweden)

    Sima Chauhan

    2018-06-01

    Introduction: Bone marrow aspiration (BMA) and bone marrow biopsy (BMB) are indispensable diagnostic tools for evaluating haematological and non-haematological disorders and for patient follow-up in the present era. We have compared the advantages of trephine biopsy over bone marrow aspiration in these patients. Aim and objective: To evaluate the sensitivity and specificity of the trephine biopsy test for haematological and non-haematological disorder patients in comparison to the bone marrow aspiration test. Materials and method: In this 1-year prospective study (June 2014–May 2015), we evaluated haematological and non-haematological disorder patients by BMA and BMB (aided with IHC whenever needed). The sensitivity and specificity of the tests were calculated. Results: Among the final 504 haematological/non-haematological disorder patients, 416 cases were diagnosed positive by the BMA test, whereas 494 were positive by the BMB test; by the chi-square test this difference was highly significant (p = 0.0001). There were 416 true positive cases, 9 true negative, 78 false negative and only 1 false positive. The sensitivity and specificity of the bone marrow trephine biopsy test were 84% and 90%, respectively. Conclusion: BMB (aided with IHC) is a gold standard test for detecting different haematological and non-haematological disorders. In our study the sensitivity and specificity of the BMB test were 84% and 90%, respectively. When performed in association with BMA in the same sitting, it significantly augments the chances of reaching a correct diagnosis. Keywords: Bone marrow trephine biopsy, Bone marrow aspiration, Sensitivity, Specificity
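
    For reference, the sensitivity and specificity quoted above follow directly from the stated counts; a minimal sketch of the arithmetic:

        # Reproduces the arithmetic implied by the stated confusion-matrix counts.
        tp, tn, fn, fp = 416, 9, 78, 1

        sensitivity = tp / (tp + fn)      # 416 / 494 ~ 0.84
        specificity = tn / (tn + fp)      # 9 / 10   = 0.90
        print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")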

  9. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  10. Models for patients' recruitment in clinical trials and sensitivity analysis.

    Science.gov (United States)

    Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas

    2012-07-20

    Taking a decision on the feasibility and estimating the duration of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who investigate modelling of the enrolment period using Gamma-Poisson processes, which allows statistical tools to be developed that can help the manager of the clinical trial to answer these questions and thus plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0,t(1)] allow the parameters of the model to be calibrated, which are then used to make predictions on what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and to suggest possible corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of fitting to the data and its sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question to deal with in the setting of our data set. In fact, these dates are not known. For this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
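
    A hedged sketch of the Gamma-Poisson recruitment idea discussed above (all rates, targets and deadlines are invented, and this is not the authors' code): each centre recruits as a Poisson process whose daily rate is drawn from a Gamma distribution, and simulation yields the probability of reaching the enrolment target before the deadline.

        # Illustrative Gamma-Poisson recruitment simulation with invented numbers.
        import numpy as np

        rng = np.random.default_rng(5)

        def p_on_time(n_centres, target, deadline_days, shape=1.2, scale=0.08, n_sim=20_000):
            rates = rng.gamma(shape, scale, size=(n_sim, n_centres))   # patients/day per centre
            recruited = rng.poisson(rates.sum(axis=1) * deadline_days)
            return np.mean(recruited >= target)

        for centres in (20, 30, 40):
            print(f"{centres} centres: P(recruit 600 patients in 365 days) = "
                  f"{p_on_time(centres, 600, 365):.2f}")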

  11. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need to add knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked design experiment.

  12. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of the brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need to add knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked design experiment.

  13. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Yu sun, Park

    2007-01-01

    The buoyancy-driven convective flow fields are steady circulatory flows which are set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications. Application of natural convection can reduce costs and efforts remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. Using the commercial CFD code FLUENT, various turbulence models were applied to the turbulent flow. Results from each CFD model are compared with each other from the viewpoints of grid resolution and flow characteristics. It has been shown that: -) obtaining the general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results once the grid is refined beyond a certain y+ value (y+ is defined as y+ = ρ·u·y/μ, u being the wall friction velocity, y being the normal distance from the center of the cell to the wall, and ρ and μ being respectively the fluid density and the fluid viscosity); -) the K-ε models show a different flow characteristic from the K-ω models or from the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for the selection of the appropriate turbulence model to apply within the simulation

  14. Screening Test for Detection of Leptinotarsa decemlineata (Say Sensitivity to Insecticides

    Directory of Open Access Journals (Sweden)

    Dušanka Inđić

    2012-01-01

    In 2009, the sensitivity of 15 field populations of the Colorado potato beetle (Leptinotarsa decemlineata Say. - CPB) was assessed to chlorpyrifos, cypermethrin, thiamethoxam and fipronil, the four insecticides which are mostly used for its control in Serbia. A screening test that allows rapid assessment of the sensitivity of overwintered adults to insecticides was performed. Insecticides were applied at label rates, and at two-, five- and ten-fold higher rates, by the soaking method (5 sec). Mortality was assessed after 72 h. Of the 15 monitored populations of CPB, two were sensitive to the label rate of chlorpyrifos, one was slightly resistant, 11 were resistant and one population was highly resistant. Concerning cypermethrin, two populations were sensitive, two slightly resistant, five were resistant and six highly resistant. Twelve populations were highly sensitive to the label rate of thiamethoxam, while three were sensitive. In the case of fipronil applied at the label rate, two populations were highly sensitive, six sensitive, one slightly resistant and six were resistant. The application of insecticides at higher rates (2-, 5- and 10-fold), which is justified only in bioassays, provided rapid insight into the sensitivity of field populations of CPB to insecticides.

  15. Increased retest reactivity by both patch and use test with methyldibromoglutaronitrile in sensitized individuals

    DEFF Research Database (Denmark)

    Jensen, Charlotte D; Johansen, Jeanne Duus; Menné, Torkil

    2006-01-01

    -exposure by both a patch test challenge and a use test with a liquid soap preserved with MDBGN. MDBGN dermatitis was elicited on the back and arms of sensitized individuals. One month later the previously eczematous areas were challenged with MDBGN. On the back, the test sites were patch-tested with a serial...... dilution of MDBGN and a use test was performed on the arms with an MDBGN-containing soap. A statistically significant increased response was seen on the areas with previous dermatitis on the back. Eight of the nine patients who developed dermatitis on the arms from the MDBGN-containing soap had...

  16. Engineering Sensitivity Improvement of Helium Mass Spectrometer Leak Detection System by Means Global Hard Vacuum Test

    International Nuclear Information System (INIS)

    Sigit Asmara Santa

    2006-01-01

    An engineering improvement of the sensitivity of Helium mass spectrometer leak detection, using a global hard vacuum test configuration, has been carried out. The purpose of this work is to improve on the current pressurized (sniffer) leak detection method, which has a sensitivity of 10^-3 to 10^-5 std cm^3/s, by using the global hard vacuum test configuration method, with which a sensitivity of up to 10^-8 std cm^3/s can be achieved. The goal of this research and development is to obtain a Helium leak test configuration which is suitable and can be used on a routine basis in the quality control tests of FPM capsule and AgInCd safety control rod products. The result is an additional instrumented vacuum tube connected to a conventional Helium mass spectrometer. The temperature and pressure of the test object during the leak measurement are simulated by means of a heater with a capacity of 4.1 kW and Helium injection into the test object, respectively. The addition of an auxiliary mechanical vacuum pump with a pumping speed of 2.4 l/s, connected directly to the vacuum tube, reduces the evacuation time by 86%. The reduction of the measured sensitivity due to the auxiliary mechanical vacuum pump can be overcome by shutting off the pump as soon as the Helium mass spectrometer reaches its operating pressure. (author)

  17. Particle transport model sensitivity on wave-induced processes

    Science.gov (United States)

    Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna

    2017-04-01

    Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are: the Stokes-Coriolis force and the sea-state dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. These particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, and this is controlled by the drag coefficient. However, in the real ocean, the waves also play the role of a reservoir for momentum and energy, because a varying fraction of the momentum flux from the atmosphere is taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity. The resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The performed sensitivity analyses demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for the drift-model performance, especially during extremes. Those effects cannot be neglected in search-and-rescue, oil-spill, biological-material transport, or larva drift modelling.
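
    A conceptual sketch only (not the coupled WAM/NEMO system; the current, wave height and period below are invented): adding a deep-water, monochromatic Stokes drift to a background current shows how the wave-induced term changes a particle's predicted 24-hour displacement.

        # Conceptual sketch: background current plus a depth-decaying Stokes drift,
        # using the deep-water dispersion relation; all numbers are invented.
        import numpy as np

        def drift_distance(hours, current=(0.15, 0.0), hs=2.5, tp=8.0, depth=0.5,
                           include_stokes=True):
            """Very simplified kinematics for a single surface particle."""
            g = 9.81
            omega = 2.0 * np.pi / tp                 # wave angular frequency
            k = omega**2 / g                         # deep-water dispersion relation
            a = hs / 2.0                             # amplitude from significant wave height
            u_stokes = omega * k * a**2 * np.exp(-2.0 * k * depth) if include_stokes else 0.0
            u = np.array(current) + np.array([u_stokes, 0.0])
            return u * hours * 3600.0                # displacement in metres

        with_waves = drift_distance(24.0, include_stokes=True)
        without_waves = drift_distance(24.0, include_stokes=False)
        print("24 h displacement with Stokes drift   :", np.round(with_waves, 0), "m")
        print("24 h displacement without Stokes drift:", np.round(without_waves, 0), "m")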

  18. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    2001-01-01

    Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels, and a gas analysing system. For manufacturing the ITER model pump an order was placed with the company L'Air Liquide in the form of a NET contract. (author)

  19. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    1999-01-01

    Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels, and a gas analysing system. For manufacturing the ITER model pump an order was placed with the company L'Air Liquide in the form of a NET contract. (author)

  20. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  1. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  2. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  3. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...

  4. Uncertainty and sensitivity analysis of environmental transport models

    International Nuclear Information System (INIS)

    Margulies, T.S.; Lancaster, L.E.

    1985-01-01

    An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contribution of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site) step-wise regression analyses were performed on transformations of the spatial concentration patterns simulated. 27 references, 2 figures, 3 tables
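
    The sampling-and-regression workflow described above can be sketched in a few lines: draw a Latin hypercube design over the input factors, run the model, and rank contributors to output variability with standardized regression coefficients. The factor names, ranges and the stand-in response function below are hypothetical placeholders, not the actual CRAC-2 inputs.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)

        # Hypothetical input factors and ranges (illustrative, not CRAC-2 values)
        names = ["dry_dep_velocity", "washout_coeff", "release_duration", "mixing_height"]
        lower = np.array([1e-3, 1e-5, 0.5, 100.0])
        upper = np.array([1e-2, 1e-4, 6.0, 2000.0])

        # Latin hypercube sample of the input space
        sampler = qmc.LatinHypercube(d=len(names), seed=0)
        X = qmc.scale(sampler.random(n=200), lower, upper)

        # Stand-in model output (a real study would run the transport code here)
        y = (X[:, 0] + 50.0 * X[:, 1]) * X[:, 2] / X[:, 3] + rng.normal(0.0, 1e-6, 200)

        # Rank contributions with standardized regression coefficients (SRCs)
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        coefs = LinearRegression().fit(Xs, ys).coef_
        for name, c in sorted(zip(names, coefs), key=lambda t: -abs(t[1])):
            print(f"{name:18s} SRC = {c:+.3f}")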

  5. Sensitivity of tropospheric heating rates to aerosols: A modeling study

    International Nuclear Information System (INIS)

    Hanna, A.F.; Shankar, U.; Mathur, R.

    1994-01-01

    The effect of aerosols on the radiation balance is critical to the energetics of the atmosphere. Because of the relatively long residence of specific types of aerosols in the atmosphere and their complex thermal and chemical interactions, understanding their behavior is crucial for understanding global climate change. The authors used the Regional Particulate Model (RPM) to simulate aerosols in the eastern United States in order to identify the aerosol characteristics of specific rural and urban areas; these characteristics include size, concentration, and vertical profile. A radiative transfer model based on an improved δ-Eddington approximation with 26 spectral intervals spanning the solar spectrum was then used to analyze the tropospheric heating rates associated with these different aerosol distributions. The authors compared heating rates forced by differences in surface albedo associated with different land-use characteristics, and found that tropospheric heating and surface cooling are sensitive to surface properties such as albedo

  6. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

    This study proposes a mathematical model of Anthroponotic visceral leishmaniasis epidemic with saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. The disease-free state is obtained with the help of the control strategies. The threshold condition for globally asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
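
    A common way to perform the sensitivity analysis of the reproduction number mentioned above is the normalized forward sensitivity index, Γ_p = (∂R0/∂p)(p/R0). The sketch below computes these indices symbolically for a generic, illustrative R0 of a vector-borne model; the expression and parameter names are assumptions chosen for demonstration and are not the formula derived in the paper.

        import sympy as sp

        # Illustrative R0 for a generic vector-borne transmission model (not the
        # paper's expression): biting rate a, transmission probabilities beta_h
        # and beta_v, host recovery rate gamma, vector mortality rate mu.
        a, beta_h, beta_v, gamma, mu = sp.symbols("a beta_h beta_v gamma mu", positive=True)
        R0 = sp.sqrt(a**2 * beta_h * beta_v / (gamma * mu))

        # Normalized forward sensitivity index: Gamma_p = (dR0/dp) * (p / R0)
        for p in (a, beta_h, beta_v, gamma, mu):
            index = sp.simplify(sp.diff(R0, p) * p / R0)
            print(p, "->", index)
        # Prints: a -> 1, beta_h -> 1/2, beta_v -> 1/2, gamma -> -1/2, mu -> -1/2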

  7. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  8. Field testing of bioenergetic models

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1985-01-01

    Doubly labeled water provides a direct measure of the rate of carbon dioxide production by free-living animals. With appropriate conversion factors, based on chemical composition of the diet and assimilation efficiency, field metabolic rate (FMR), in units of energy expenditure, and field feeding rate can be estimated. Validation studies indicate that doubly labeled water measurements of energy metabolism are accurate to within 7% in reptiles, birds, and mammals. This paper discusses the use of doubly labeled water to generate empirical models for FMR and food requirements for a variety of animals

  9. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  10. Use of genotoxicity information in the development of integrated testing strategies (ITS) for skin sensitization.

    Science.gov (United States)

    Mekenyan, Ovanes; Patlewicz, Grace; Dimitrova, Gergana; Kuseva, Chanita; Todorov, Milen; Stoeva, Stoyanka; Kotov, Stefan; Donner, E Maria

    2010-10-18

    Skin sensitization is an end point of concern for various legislation in the EU, including the seventh Amendment to the Cosmetics Directive and Registration Evaluation, Authorisation and Restriction of Chemicals (REACH). Since animal testing is a last resort for REACH or banned (from 2013 onward) for the Cosmetics Directive, the use of intelligent/integrated testing strategies (ITS) as an efficient means of gathering necessary information from alternative sources (e.g., in vitro, (Q)SARs, etc.) is gaining widespread interest. Previous studies have explored correlations between mutagenicity data and skin sensitization data as a means of exploiting information from surrogate end points. The work here compares the underlying chemical mechanisms for mutagenicity and skin sensitization in an effort to evaluate the role mutagenicity information can play as a predictor of skin sensitization potential. The Tissue Metabolism Simulator (TIMES) hybrid expert system was used to compare chemical mechanisms of both end points since it houses a comprehensive set of established structure-activity relationships for both skin sensitization and mutagenicity. The evaluation demonstrated that there is a great deal of overlap between skin sensitization and mutagenicity structural alerts and their underlying chemical mechanisms. The similarities and differences in chemical mechanisms are discussed in light of available experimental data. A number of new alerts for mutagenicity were also postulated for inclusion into TIMES. The results presented show that mutagenicity information can provide useful insights on skin sensitization potential as part of an ITS and should be considered prior to any in vivo skin sensitization testing being initiated.

  11. [Test and programme sensitivities of screening for colorectal cancer in Reggio Emilia].

    Science.gov (United States)

    Campari, Cinzia; Sassatelli, Romano; Paterlini, Luisa; Camellini, Lorenzo; Menozzi, Patrizia; Cattani, Antonella

    2011-01-01

    To estimate the sensitivity of the immunochemical test for faecal occult blood (FOBT) and the sensitivity of the colorectal tumour screening programme in the province of Reggio Emilia. Retrospective cohort study, including a sample of 80,357 people of both genders, aged 50-69, who underwent FOBT during the first round of the screening programme in the province of Reggio Emilia, from April 2005 to December 2007. Incidence of interval cancer. The proportional incidence method was used to estimate the sensitivity of FOBT and of the screening programme. Data were stratified according to gender, age and year of interval. The overall sensitivity of FOBT was 73.2% (95% CI 63.8-80.7). The sensitivity of FOBT was lower in females (70.5% vs 75.1%), higher in the 50-59 age group (78.6% vs 70.2%) and higher in the colon than the rectum (75.1% vs 68.9%). The test had a significantly higher sensitivity in the 1st year of interval than in the 2nd (84.4% vs 60.5%; RR=0.39, 95% CI 0.22-0.70), a difference which was confirmed also when data were stratified according to gender. The overall sensitivity of the programme is 70.9% (95% CI 61.5-78.5). No statistically significant differences were shown when data were stratified according to gender, age or site. Again the sensitivity in the 1st year was significantly higher than in the 2nd year of interval (83.2% vs 57.0%; RR=0.41, 95% CI 0.24-0.69). Overall our data confirmed the findings of similar Italian studies, although subgroup analysis showed some differences in sensitivity in our study.
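
    For readers unfamiliar with the proportional incidence method used above, the short calculation below illustrates the estimator: sensitivity is approximately one minus the ratio of observed interval cancers to the cancers expected from the background incidence over the same person-years. The counts and background rate are hypothetical and are not the Reggio Emilia data.

        # Proportional incidence estimate of programme sensitivity:
        #   sensitivity ~= 1 - observed interval cancers / expected cancers,
        # where "expected" comes from the background incidence rate that would
        # apply in the absence of screening. All numbers below are hypothetical.
        observed_interval_cancers = 30
        person_years = 150_000
        background_incidence = 7.0e-4            # cancers per person-year

        expected = background_incidence * person_years            # 105.0
        sensitivity = 1.0 - observed_interval_cancers / expected  # ~0.714
        print(f"expected cancers: {expected:.1f}")
        print(f"estimated programme sensitivity: {sensitivity:.1%}")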

  12. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathway are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)

  13. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d × 10³) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with capacity to estimate the impact of uncertain parameters on model outputs, will enable development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
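
    The emulator approach described above can be sketched with generic tools: train a Gaussian process on a small number of model runs, then use the cheap emulator for Monte Carlo estimation of first-order (Sobol) sensitivity indices. The test function below is a stand-in for the 1D vascular model, and the design sizes and kernel settings are illustrative rather than those of the study.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)
        d = 3  # number of uncertain model parameters

        def model(x):
            # Stand-in for an expensive mechanistic model (not the 1D vascular code)
            return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

        # Train the emulator on a small design, of order O(d) to O(10 d) runs
        X_train = rng.uniform(0.0, 1.0, size=(10 * d, d))
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3] * d), normalize_y=True)
        gp.fit(X_train, model(X_train))

        # Crude first-order Sobol indices estimated from the cheap emulator
        N = 20_000
        A = rng.uniform(0.0, 1.0, (N, d))
        B = rng.uniform(0.0, 1.0, (N, d))
        yA, yB = gp.predict(A), gp.predict(B)
        var_y = np.concatenate([yA, yB]).var()
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]  # matrix A with column i taken from B
            S_i = np.mean(yB * (gp.predict(ABi) - yA)) / var_y  # Saltelli-style estimator
            print(f"S_{i} ~= {S_i:.2f}")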

  14. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by

  15. The diagnostic sensitivity of dengue rapid test assays is significantly enhanced by using a combined antigen and antibody testing approach.

    Directory of Open Access Journals (Sweden)

    Scott R Fry

    2011-06-01

    Full Text Available BACKGROUND: Serological tests for IgM and IgG are routinely used in clinical laboratories for the rapid diagnosis of dengue and can differentiate between primary and secondary infections. Dengue virus non-structural protein 1 (NS1) has been identified as an early marker for acute dengue, and is typically present between days 1-9 post-onset of illness, but following seroconversion it can be difficult to detect in serum. AIMS: To evaluate the performance of a newly developed Panbio® Dengue Early Rapid test for NS1 and determine if it can improve diagnostic sensitivity when used in combination with a commercial IgM/IgG rapid test. METHODOLOGY: The clinical performance of the Dengue Early Rapid was evaluated in a retrospective study in Vietnam with 198 acute laboratory-confirmed positive and 100 negative samples. The performance of the Dengue Early Rapid in combination with the IgM/IgG Rapid test was also evaluated in Malaysia with 263 laboratory-confirmed positive and 30 negative samples. KEY RESULTS: In Vietnam the sensitivity and specificity of the test were 69.2% (95% CI: 62.8% to 75.6%) and 96% (95% CI: 92.2% to 99.8%), respectively. In Malaysia the performance was similar, with 68.9% sensitivity (95% CI: 61.8% to 76.1%) and 96.7% specificity (95% CI: 82.8% to 99.9%) compared to RT-PCR. Importantly, when the Dengue Early Rapid test was used in combination with the IgM/IgG test the sensitivity increased to 93.0%. When the two tests were compared at each day post-onset of illness there was clear differentiation between the antigen and antibody markers. CONCLUSIONS: This study highlights that using dengue NS1 antigen detection in combination with anti-glycoprotein E IgM and IgG serology can significantly increase the sensitivity of acute dengue diagnosis, extends the possible window of detection to include very early acute samples and enhances the clinical utility of rapid immunochromatographic testing for dengue.
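
    Sensitivities such as those reported above are binomial proportions, so their confidence intervals can be recomputed from the underlying counts. The sketch below evaluates an exact (Clopper-Pearson) 95% interval; the counts are chosen only to match the reported 69.2% point estimate (137 of 198) and the study may have used a different interval method.

        from scipy.stats import beta

        def clopper_pearson(successes, n, alpha=0.05):
            """Exact (Clopper-Pearson) two-sided confidence interval for a proportion."""
            lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
            hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
            return lo, hi

        # Hypothetical counts consistent with the reported point estimate:
        # 137 of 198 confirmed dengue cases positive by the NS1 rapid test.
        tp, n_pos = 137, 198
        sens = tp / n_pos
        lo, hi = clopper_pearson(tp, n_pos)
        print(f"sensitivity = {sens:.1%} (95% CI {lo:.1%} to {hi:.1%})")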

  16. TESTING GARCH-X TYPE MODELS

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2017-01-01

    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and as demonstrated the structure of the inverse information implies that the proposed test neither depends on whether...... the nuisance parameters lie on the boundary of the parameter space, nor on lack of identification. Our general results on GARCH-X type models are applied to Gaussian based GARCH-X models, GARCH-X models with Student's t-distributed innovations as well as the integer-valued GARCH-X (PAR-X) models....

  17. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  18. Factors influencing antibiotic prescribing habits and use of sensitivity testing amongst veterinarians in Europe.

    Science.gov (United States)

    De Briyne, N; Atkinson, J; Pokludová, L; Borriello, S P; Price, S

    2013-11-16

    The Heads of Medicines Agencies and the Federation of Veterinarians of Europe undertook a survey to gain a better insight into the decision-making process of veterinarians in Europe when deciding which antibiotics to prescribe. The survey was completed by 3004 practitioners from 25 European countries. Analysis was carried out at the level of different types of practitioner (food producing (FP) animals, companion animals, equines) and at country level for Belgium, Czech Republic, France, Germany, Spain, Sweden and the UK. Responses indicate that no single information source is universally considered critical, though training, published literature and experience were the most important. The factors recorded as most strongly influencing prescribing behaviour were sensitivity tests, own experience, the risk of antibiotic resistance developing and ease of administration. Most practitioners usually take into account responsible use warnings. Antibiotic sensitivity testing is usually performed where a treatment failure has occurred. Significant differences were observed in the frequency of sensitivity testing at the level of types of practitioners and country. The responses indicate a need to improve sensitivity tests and services, with the availability of rapid and cheaper testing being key factors.

  19. Sensitivity and specificity of parallel or serial serological testing for detection of canine Leishmania infection

    Directory of Open Access Journals (Sweden)

    Mauro Maciel de Arruda

    2016-01-01

    Full Text Available In Brazil, human and canine visceral leishmaniasis (CVL) caused by Leishmania infantum has undergone urbanisation since 1980, constituting a public health problem, and serological tests are tools of choice for identifying infected dogs. Until recently, the Brazilian zoonoses control program recommended enzyme-linked immunosorbent assays (ELISA) and indirect immunofluorescence assays (IFA) as the screening and confirmatory methods, respectively, for the detection of canine infection. The purpose of this study was to estimate the accuracy of ELISA and IFA in parallel or serial combinations. The reference standard comprised the results of direct visualisation of parasites in histological sections, immunohistochemical test, or isolation of the parasite in culture. Samples from 98 cases and 1,327 noncases were included. Individually, the tests presented sensitivities of 91.8% and 90.8%, and specificities of 83.4% and 53.4%, for the ELISA and IFA, respectively. When the tests were used in parallel combination, sensitivity attained 99.2%, while specificity dropped to 44.8%. When used in serial combination (ELISA followed by IFA), decreased sensitivity (83.3%) and increased specificity (92.5%) were observed. The serial testing approach improved specificity with a moderate loss in sensitivity. This strategy could partially fulfill the needs of public health and dog owners for a more accurate diagnosis of CVL.
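
    Under the simplifying assumption that the two tests are conditionally independent given infection status, parallel and serial combinations have closed-form expressions, and plugging in the individual ELISA and IFA values reported above nearly reproduces the combined figures. The sketch below is purely illustrative; the study estimated these quantities directly against its reference standard rather than from these formulas.

        def parallel(se1, sp1, se2, sp2):
            # Positive if EITHER test is positive (assumes conditional independence)
            return 1 - (1 - se1) * (1 - se2), sp1 * sp2

        def serial(se1, sp1, se2, sp2):
            # Positive only if BOTH tests are positive (screen, then confirm)
            return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

        # Individual values reported in the abstract (ELISA, IFA)
        se_elisa, sp_elisa = 0.918, 0.834
        se_ifa, sp_ifa = 0.908, 0.534

        print("parallel:", parallel(se_elisa, sp_elisa, se_ifa, sp_ifa))  # ~ (0.99, 0.45)
        print("serial:  ", serial(se_elisa, sp_elisa, se_ifa, sp_ifa))    # ~ (0.83, 0.92)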

  20. Respiratory panic disorder subtype and sensitivity to the carbon dioxide challenge test

    Directory of Open Access Journals (Sweden)

    A.M. Valença

    2002-07-01

    Full Text Available The aim of the present study was to verify the sensitivity to the carbon dioxide (CO2) challenge test of panic disorder (PD) patients with respiratory and nonrespiratory subtypes of the disorder. Our hypothesis is that the respiratory subtype is more sensitive to 35% CO2. Twenty-seven PD subjects with or without agoraphobia were classified into respiratory and nonrespiratory subtypes on the basis of the presence of respiratory symptoms during their panic attacks. The tests were carried out in a double-blind manner using two mixtures: 1) 35% CO2 and 65% O2, and 2) 100% atmospheric compressed air, 20 min apart. The tests were repeated after 2 weeks during which the participants in the study did not receive any psychotropic drugs. At least 15 of 16 (93.7%) respiratory PD subtype patients and 5 of 11 (43.4%) nonrespiratory PD patients had a panic attack during one of two CO2 challenges (P = 0.009, Fisher exact test). Respiratory PD subtype patients were more sensitive to the CO2 challenge test. There was agreement between the severity of PD measured by the Clinical Global Impression (CGI) Scale and the subtype of PD. Higher CGI scores in the respiratory PD subtype could reflect a greater sensitivity to the CO2 challenge due to a greater severity of PD. Carbon dioxide challenges in PD may define PD subtypes and their underlying mechanisms.

  1. Comparison of the sensitivities of the Buehler test and the guinea pig maximization test for predictive testing of contact allergy

    DEFF Research Database (Denmark)

    Frankild, S; Vølund, A; Wahlberg, J E

    2001-01-01

    International test guidelines, such as the Organisation for Economic Cooperation and Development (OECD) guideline #406, recommend 2 guinea pig methods for testing of the contact allergenic potential of chemicals: the Guinea Pig Maximization Test (GPMT) and the Buehler test. Previous comparisons...

  2. Development of an in vitro skin sensitization test based on ROS production in THP-1 cells.

    Science.gov (United States)

    Saito, Kazutoshi; Miyazawa, Masaaki; Nukada, Yuko; Sakaguchi, Hitoshi; Nishiyama, Naohiro

    2013-03-01

    Recently, it has been reported that reactive oxygen species (ROS) produced by contact allergens can affect dendritic cell migration and contact hypersensitivity. The aim of the present study was to develop a new in vitro assay that could predict the skin sensitizing potential of chemicals by measuring ROS production in THP-1 (human monocytic leukemia cell line) cells. THP-1 cells were pre-loaded with a ROS-sensitive fluorescent dye, 5-(and 6-)-chloromethyl-2',7'-dichlorodihydrofluorescein diacetate, acetyl ester (CM-H2DCFDA), for 15 min, then incubated with test chemicals for 30 min. The fluorescence intensity was measured by flow cytometry. Of the skin sensitizers, 25 out of 30 induced more than a 2-fold increase in ROS production at greater than 90% cell viability. In contrast, increases were only seen in 4 out of 20 non-sensitizers. The overall accuracy against the local lymph node assay (LLNA) was 82% for the 50 chemicals tested. A correlation was found between the estimated concentration showing 2-fold ROS production in the ROS assay and the EC3 values (estimated concentration required to induce a positive response) of the LLNA. These results indicated that the THP-1 cell-based ROS assay was a rapid and highly sensitive detection system able to predict the skin sensitizing potential and potency of chemicals. Copyright © 2012 Elsevier Ltd. All rights reserved.
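
    The reported 82% accuracy follows directly from the counts given above; the short calculation below lays out the corresponding 2x2 arithmetic (sensitivity, specificity and accuracy of the ROS assay against the LLNA classification).

        # Counts from the abstract: 25 of 30 sensitizers and 4 of 20 non-sensitizers
        # exceeded the 2-fold ROS threshold.
        tp, fn = 25, 5        # sensitizers called positive / negative
        fp, tn = 4, 16        # non-sensitizers called positive / negative

        sensitivity = tp / (tp + fn)                 # 0.833
        specificity = tn / (tn + fp)                 # 0.800
        accuracy = (tp + tn) / (tp + fn + fp + tn)   # 0.82, matching the reported 82%
        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}, "
              f"accuracy = {accuracy:.0%}")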

  3. Central sensitization phenomena after third molar surgery: A quantitative sensory testing study

    DEFF Research Database (Denmark)

    Jensen, T.S.; Norholt, S.E.; Svensson, P.

    2008-01-01

    Background: Surgical removal of third molars may carry a risk of developing persistent orofacial pain, and central sensitization appears to play an important role in the transition from acute to chronic pain. Aim: The aim of this study was to investigate sensitization (primarily central sensitization) after orofacial trauma using quantitative sensory testing (QST). Methods: A total of 32 healthy men (16 patients and 16 age-matched control subjects) underwent a battery of quantitative tests adapted to the trigeminal area at baseline and 2, 7, and 30 days following surgical removal of a lower impacted third molar. Results: Central sensitization for at least one week was indicated by significantly increased pain intensity evoked by intraoral repetitive pinprick and electrical stimulation (p

  4. Short ensembles: an efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Directory of Open Access Journals (Sweden)

    H. Wan

    2014-09-01

    Full Text Available This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of

  5. Application of a path sensitizing method on automated generation of test specifications for control software

    International Nuclear Information System (INIS)

    Morimoto, Yuuichi; Fukuda, Mitsuko

    1995-01-01

    An automated generation method for test specifications has been developed for sequential control software in plant control equipment. Sequential control software can be represented as sequential circuits. The control software implemented in control equipment is designed from these circuit diagrams. In logic tests of VLSIs, path sensitizing methods are widely used to generate test specifications. However, these methods generate test specifications for a single time instant only and cannot be directly applied to sequential control software. The basic idea of the proposed method is as follows. Specifications of each logic operator in the diagrams are defined in the software design process. Therefore, test specifications of each operator in the control software can be determined from these specifications, and the validity of the software can be judged by inspecting all of the operators in the logic circuit diagrams. Candidates for sensitized paths, on which test data for each operator propagates, can be generated by the path sensitizing method. To confirm the feasibility of the method, it was experimentally applied to control software in digital control equipment. The program could generate test specifications exactly, and the feasibility of the method was confirmed. (orig.) (3 refs., 7 figs.)

  6. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    Science.gov (United States)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model where the correct sensitivities can be evaluated analytically.
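
    A minimal sketch of the idea, assuming a small feed-forward network is fitted to one-step transitions of the Lorenz-63 system and its local Jacobian is then read off as the instantaneous multivariate sensitivities. The integrator, network architecture and finite-difference Jacobian below are illustrative choices, not the configuration used in the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """One forward-Euler step of the Lorenz-63 system (illustrative integrator)."""
            dx = np.array([sigma * (x[1] - x[0]),
                           x[0] * (rho - x[2]) - x[1],
                           x[0] * x[1] - beta * x[2]])
            return x + dt * dx

        # Generate a trajectory and one-step input/output pairs
        x = np.array([1.0, 1.0, 1.0])
        states = []
        for _ in range(5000):
            states.append(x)
            x = lorenz_step(x)
        states = np.array(states)
        X, Y = states[:-1], states[1:]

        # Fit a small neural-network emulator of the one-step map
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        net.fit(X, Y)

        def jacobian(net, x0, eps=1e-3):
            """Finite-difference estimate of d x(t+dt) / d x(t) through the network."""
            J = np.zeros((3, 3))
            for j in range(3):
                xp, xm = x0.copy(), x0.copy()
                xp[j] += eps
                xm[j] -= eps
                J[:, j] = (net.predict(xp[None]) - net.predict(xm[None]))[0] / (2 * eps)
            return J

        # Instantaneous, multivariate sensitivities at one state on the attractor
        print(jacobian(net, states[1000]))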

  7. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  8. Sensitivity and specificity of the AdenoPlus test for diagnosing adenoviral conjunctivitis.

    Science.gov (United States)

    Sambursky, Robert; Trattler, William; Tauber, Shachar; Starr, Christopher; Friedberg, Murray; Boland, Thomas; McDonald, Marguerite; DellaVecchia, Michael; Luchs, Jodi

    2013-01-01

    To compare the clinical sensitivity and specificity of the AdenoPlus test with those of both viral cell culture (CC) with confirmatory immunofluorescence assay (IFA) and polymerase chain reaction (PCR) at detecting the presence of adenovirus in tear fluid. A prospective, sequential, masked, multicenter clinical trial enrolled 128 patients presenting with a clinical diagnosis of acute viral conjunctivitis from a combination of 8 private ophthalmology practices and academic centers. Patients were tested with AdenoPlus, CC-IFA, and PCR to detect the presence of adenovirus. The sensitivity and specificity of AdenoPlus were assessed for identifying cases of adenoviral conjunctivitis. Of the 128 patients enrolled, 36 patients' results were found to be positive by either CC-IFA or PCR and 29 patients' results were found to be positive by both CC-IFA and PCR. When compared only with CC-IFA, AdenoPlus showed a sensitivity of 90% (28/31) and specificity of 96% (93/97). When compared only with PCR, AdenoPlus showed a sensitivity of 85% (29/34) and specificity of 98% (89/91). When compared with both CC-IFA and PCR, AdenoPlus showed a sensitivity of 93% (27/29) and specificity of 98% (88/90). When compared with PCR, CC-IFA showed a sensitivity of 85% (29/34) and specificity of 99% (90/91). AdenoPlus is sensitive and specific at detecting adenoviral conjunctivitis. An accurate and rapid in-office test can prevent the misdiagnosis of adenoviral conjunctivitis that leads to the spread of disease, unnecessary antibiotic use, and increased health care costs. Additionally, AdenoPlus may help a clinician make a more informed treatment decision regarding the use of novel therapeutics. clinicaltrials.gov Identifier: NCT00921895.

  9. EU-approved rapid tests for bovine spongiform encephalopathy detect atypical forms: a study for their sensitivities.

    Directory of Open Access Journals (Sweden)

    Daniela Meloni

    Full Text Available Since 2004 it became clear that atypical bovine spongiform encephalopathies (BSEs) exist in cattle. While their detection has relied on the active surveillance plans implemented in Europe since 2001 using rapid tests, the overall and inter-laboratory performance of these diagnostic systems in the detection of the atypical strains has not been studied thoroughly to date. To fill this gap, the present study reports on the analytical sensitivity of the EU-approved rapid tests for atypical L- and H-type and classical BSE in parallel. Each test was challenged with two dilution series, one created from a positive pool of the three BSE forms according to the EURL standard method of homogenate preparation (50% w/v) and the other as per the test kit manufacturer's instructions. Multilevel logistic models and simple logistic models with the rapid test as the only covariate were fitted for each BSE form analyzed as directed by the test manufacturer's dilution protocol. The same schemes, but excluding the BSE type, were then applied to compare test performance under the manufacturer's versus the water protocol. The IDEXX HerdChek® BSE-scrapie short protocol test showed the highest sensitivity for all BSE forms. The IDEXX® HerdChek BSE-scrapie ultra short protocol, the Prionics®-Check WESTERN and the AJ Roboscreen® BetaPrion tests showed similar sensitivities, followed by the Roche® PrionScreen, the Bio-Rad® TeSeE™ SAP and the Prionics®-Check PrioSTRIP in descending order of analytical sensitivity. Despite these differences, the limits of detection of all seven rapid tests against the different classes of material fell within a 2 log10 range of the best-performing test, thus meeting the European Food Safety Authority requirement for BSE surveillance purposes. These findings indicate that not many atypical cases would have been missed by surveillance since 2001, which is important for further epidemiological interpretations of the sporadic character of

  10. Computational modeling and sensitivity in uniform DT burn

    International Nuclear Information System (INIS)

    Hansen, Jon; Hryniw, Natalia; Kesler, Leigh A.; Li, Frank; Vold, Erik

    2010-01-01

    Understanding deuterium-tritium (DT) fusion is essential to achieving ignition in inertial confinement fusion. A burning DT plasma in a three temperature (3T) approximation and uniform in space is modeled as a system of five non-linear coupled ODEs. Special focus is given to the effects of Compton coupling, Planck opacity, and electron-ion coupling terms. Semi-implicit differencing is used to solve the system of equations. Time step size is varied to examine the stability and convergence of each solution. Data from NDI, SESAME, and TOPS databases is extracted to create analytic fits for the reaction rate parameter, the Planck opacity, and the coupling frequencies of the plasma temperatures. The impact of different high order fits to NDI data (the reaction rate parameter), and of using TOPS versus SESAME opacity data, is explored, and the sensitivity to several physics parameters in the coupling terms is also examined. The base model recovers the accepted 3T results for the temperature and burn histories. The Compton coupling is found to have a significant impact on the results. Varying a coefficient of this term shows that the model results can give reasonably good agreement with the peak temperatures reported in multi-group results as well as the accepted 3T results. The base model assumes a molar density of 1 mol/cm³, as well as a 5 keV initial temperature for all three temperatures. Different initial conditions are explored as well. Initial temperatures are set to 1 and 3 keV, the ratio of D to T is varied (2 and 3 as opposed to 1 in the base model), and densities are set to 10 mol/cm³ and 100 mol/cm³. Again varying the Compton coefficient, the ion temperature results in the higher density case are in reasonable agreement with a recently published kinetic model.
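
    To illustrate the semi-implicit differencing mentioned above, the sketch below advances a simple electron-ion temperature relaxation pair with the coupling term evaluated at the new time level, which remains stable even when the coupling is stiff (large ν·Δt). Equal heat capacities and the numerical values are simplifying assumptions, not the paper's 3T formulation or its fitted coefficients.

        import numpy as np

        def semi_implicit_relax(Te, Ti, nu, dt):
            """One semi-implicit step of electron-ion temperature relaxation,
                dTe/dt = nu * (Ti - Te),   dTi/dt = nu * (Te - Ti),
            with the coupling term taken at the new time level (equal heat
            capacities assumed for simplicity; illustrative, not the full 3T model)."""
            A = np.array([[-nu, nu],
                          [nu, -nu]])
            M = np.eye(2) - dt * A            # (I - dt*A) T^{n+1} = T^n
            return np.linalg.solve(M, np.array([Te, Ti]))

        # Stiff case (dt*nu >> 1): an explicit update would overshoot badly,
        # while the semi-implicit step relaxes monotonically toward equilibrium.
        Te, Ti = 5.0, 1.0      # keV
        nu, dt = 50.0, 0.1
        for _ in range(5):
            Te, Ti = semi_implicit_relax(Te, Ti, nu, dt)
            print(f"Te = {Te:.3f} keV, Ti = {Ti:.3f} keV")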

  11. Computational modeling and sensitivity in uniform DT burn

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jon [Los Alamos National Laboratory]; Hryniw, Natalia [Los Alamos National Laboratory]; Kesler, Leigh A [Los Alamos National Laboratory]; Li, Frank [Los Alamos National Laboratory]; Vold, Erik [Los Alamos National Laboratory]

    2010-01-01

    Understanding deuterium-tritium (DT) fusion is essential to achieving ignition in inertial confinement fusion. A burning DT plasma in a three temperature (3T) approximation and uniform in space is modeled as a system of five non-linear coupled ODEs. Special focus is given to the effects of Compton coupling, Planck opacity, and electron-ion coupling terms. Semi-implicit differencing is used to solve the system of equations. Time step size is varied to examine the stability and convergence of each solution. Data from NDI, SESAME, and TOPS databases is extracted to create analytic fits for the reaction rate parameter, the Planck opacity, and the coupling frequencies of the plasma temperatures. The impact of different high order fits to NDI data (the reaction rate parameter), and of using TOPS versus SESAME opacity data, is explored, and the sensitivity to several physics parameters in the coupling terms is also examined. The base model recovers the accepted 3T results for the temperature and burn histories. The Compton coupling is found to have a significant impact on the results. Varying a coefficient of this term shows that the model results can give reasonably good agreement with the peak temperatures reported in multi-group results as well as the accepted 3T results. The base model assumes a molar density of 1 mol/cm³, as well as a 5 keV initial temperature for all three temperatures. Different initial conditions are explored as well. Initial temperatures are set to 1 and 3 keV, the ratio of D to T is varied (2 and 3 as opposed to 1 in the base model), and densities are set to 10 mol/cm³ and 100 mol/cm³. Again varying the Compton coefficient, the ion temperature results in the higher density case are in reasonable agreement with a recently published kinetic model.

  12. Antigen spot test (AST): a highly sensitive assay for the detection of antibodies

    Energy Technology Data Exchange (ETDEWEB)

    Herbrink, P; van Bussel, F J; Warnaar, S O [Rijksuniversiteit Leiden (Netherlands)]

    1982-02-12

    A method is described for detection of antibodies by means of nitrocellulose or diazobenzyloxymethyl (DBM) paper on which various antigens have been spotted. The sensitivity of this antigen spot test (AST) is comparable with that of RIA and ELISA. The method requires only nanogram amounts of antigen. Since a variety of antigens can be spotted on a single piece of nitrocellulose or DBM paper, this antigen spot test is especially useful for specificity controls on antibodies.

  13. In Vitro Drug Sensitivity Tests to Predict Molecular Target Drug Responses in Surgically Resected Lung Cancer.

    Directory of Open Access Journals (Sweden)

    Ryohei Miyazaki

    Full Text Available Epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) and anaplastic lymphoma kinase (ALK) inhibitors have dramatically changed the strategy of medical treatment of lung cancer. Patients should be screened for the presence of the EGFR mutation or echinoderm microtubule-associated protein-like 4 (EML4)-ALK fusion gene prior to chemotherapy to predict their clinical response. The succinate dehydrogenase inhibition (SDI) test and collagen gel droplet embedded culture drug sensitivity test (CD-DST) are established in vitro drug sensitivity tests, which may predict the sensitivity of patients to cytotoxic anticancer drugs. We applied in vitro drug sensitivity tests for cyclopedic prediction of clinical responses to different molecular targeting drugs. The growth inhibitory effects of erlotinib and crizotinib were confirmed for lung cancer cell lines using SDI and CD-DST. The sensitivity of 35 cases of surgically resected lung cancer to erlotinib was examined using SDI or CD-DST, and compared with EGFR mutation status. HCC827 (Exon19: E746-A750 del) and H3122 (EML4-ALK) cells were inhibited by lower concentrations of erlotinib and crizotinib, respectively, than A549, H460, and H1975 (L858R+T790M) cells were. The viability of the surgically resected lung cancer was 60.0 ± 9.8% and 86.8 ± 13.9% in EGFR mutants vs. wild types in the SDI (p = 0.0003). The cell viability was 33.5 ± 21.2% and 79.0 ± 18.6% in EGFR mutants vs. wild-type cases (p = 0.026) in CD-DST. In vitro drug sensitivity evaluated by either SDI or CD-DST correlated with EGFR gene status. Therefore, SDI and CD-DST may be useful predictors of potential clinical responses to the molecular anticancer drugs, cyclopedically.

  14. Estimation of sensitivity, specificity and predictive values of two serologic tests for the detection of antibodies against Actinobacillus pleuropneumoniae serotype 2 in the absence of a reference test (gold standard)

    DEFF Research Database (Denmark)

    Enøe, Claes; Andersen, Søren; Sørensen, Vibeke

    2001-01-01

    Latent-class models were used to determine the sensitivity, specificity and predictive values of a polyclonal blocking enzyme-linked immunosorbent assay (ELISA) and a modified complement-fixation test (CFT) when there was no reference test. The tests were used for detection of antibodies against ...
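
    A minimal sketch of the latent-class idea for two tests without a gold standard, in the classic Hui-Walter layout (two populations with different prevalences, tests assumed conditionally independent given true status); the cross-classified counts below are hypothetical, and the model actually fitted in the study may relax these assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical counts of (T1+,T2+), (T1+,T2-), (T1-,T2+), (T1-,T2-)
        # for two populations with different prevalences (not data from the paper).
        counts = np.array([[48, 7, 9, 136],    # population 1
                           [12, 5, 6, 277]])   # population 2

        def cell_probs(se1, sp1, se2, sp2, prev):
            """P(T1=i, T2=j) assuming conditional independence given true status."""
            probs = []
            for i in (1, 0):
                for j in (1, 0):
                    p_pos = (se1 if i else 1 - se1) * (se2 if j else 1 - se2)
                    p_neg = ((1 - sp1) if i else sp1) * ((1 - sp2) if j else sp2)
                    probs.append(prev * p_pos + (1 - prev) * p_neg)
            return np.array(probs)

        def neg_log_lik(theta):
            se1, sp1, se2, sp2, prev1, prev2 = theta
            ll = 0.0
            for k, prev in enumerate((prev1, prev2)):
                ll += np.sum(counts[k] * np.log(cell_probs(se1, sp1, se2, sp2, prev)))
            return -ll

        x0 = np.array([0.8, 0.8, 0.8, 0.8, 0.3, 0.1])
        res = minimize(neg_log_lik, x0, bounds=[(0.01, 0.99)] * 6, method="L-BFGS-B")
        se1, sp1, se2, sp2, prev1, prev2 = res.x
        print(f"Se1={se1:.2f} Sp1={sp1:.2f} Se2={se2:.2f} Sp2={sp2:.2f}")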

  15. Sensitivity and specificity of the 3-item memory test in the assessment of post traumatic amnesia.

    NARCIS (Netherlands)

    Andriessen, T.M.J.C.; Jong, B. de; Jacobs, B.; Werf, S.P. van der; Vos, P.E.

    2009-01-01

    PRIMARY OBJECTIVE: To investigate how the type of stimulus (pictures or words) and the method of reproduction (free recall or recognition after a short or a long delay) affect the sensitivity and specificity of a 3-item memory test in the assessment of post traumatic amnesia (PTA). METHODS: Daily

  16. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    Science.gov (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program for the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system level environmental testing. The JUNO magnetic cleanliness program required setting up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system, and a testing program and facility for testing system parts and subsystems at JPL. The magnetic modeling, simulation and analysis capability was set up and carried out by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interferences. With Aerospace's magnetic modeling, simulation and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost-effective approach to a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remnant magnetization, performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  17. Deformation modeling and the strain transient dip test

    International Nuclear Information System (INIS)

    Jones, W.B.; Rohde, R.W.; Swearengen, J.C.

    1980-01-01

    Recent efforts in material deformation modeling reveal a trend toward unifying creep and plasticity with a single rate-dependent formulation. While such models can describe actual material deformation, most require a number of different experiments to generate model parameter information. Recently, however, a new model has been proposed in which most of the requisite constants may be found by examining creep transients brought about through abrupt changes in creep stress (strain transient dip test). The critical measurement in this test is the absence of a resolvable creep rate after a stress drop. As a consequence, the result is extraordinarily sensitive to strain resolution as well as machine mechanical response. This paper presents the design of a machine in which these spurious effects have been minimized and discusses the nature of the strain transient dip test using the example of aluminum. It is concluded that the strain transient dip test is not useful as the primary test for verifying any micromechanical model of deformation. Nevertheless, if a model can be developed which is verifiable by other experiments, data from a dip test machine may be used to generate model parameters

  18. History and sensitivity comparison of the Spirodela polyrhiza microbiotest and Lemna toxicity tests

    Directory of Open Access Journals (Sweden)

    Baudo R.

    2015-01-01

    Full Text Available The history of toxicity tests with duckweeds shows that these assays with free-floating aquatic angiosperms are gaining increasing attention in ecotoxicological research and applications. Standard tests have been published by national and international organizations, mainly with the test species Lemna minor and Lemna gibba. Besides the former two test species, the great duckweed Spirodela polyrhiza is to date also regularly used in duckweed testing. Under unfavorable environmental conditions, the latter species produces dormant stages (turions), and this has triggered the attention of two research groups from Belgium and Greece to jointly develop a "stock culture independent" microbiotest with S. polyrhiza. A new 72 h test has been worked out which, besides its independence of stock culturing and maintenance of live stocks, is very simple and practical to perform, and much less demanding in space and time than the conventional duckweed tests. Extensive International Interlaboratory Comparisons on the S. polyrhiza microbiotest showed its robustness and reliability and triggered the decision to propose this new assay to the ISO for endorsement and publication as a standard toxicity test for duckweeds. Sensitivity comparison of the 72 h S. polyrhiza microbiotest with the 7 d L. minor assay for 22 compounds belonging to different groups of chemicals revealed that, based on growth as the effect criterion, both duckweed assays have a similar sensitivity. Taking into account its multiple advantages and assets, the S. polyrhiza microbiotest is a reliable and attractive alternative to the conventional duckweed tests.
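
    Duckweed assays of this kind typically score toxicity as percent inhibition of the average specific growth rate computed from frond or colony counts. The short calculation below illustrates that arithmetic with hypothetical counts for a 72 h exposure; the numbers are not data from the comparison described above.

        import numpy as np

        def specific_growth_rate(n0, nt, days):
            """Average specific growth rate from frond (or colony) counts."""
            return (np.log(nt) - np.log(n0)) / days

        # Hypothetical frond counts after a 72 h (3 day) exposure
        mu_control = specific_growth_rate(n0=12, nt=40, days=3)
        mu_treated = specific_growth_rate(n0=12, nt=22, days=3)

        inhibition = 100.0 * (mu_control - mu_treated) / mu_control
        print(f"growth-rate inhibition = {inhibition:.0f}%")   # roughly 50%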

  19. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    Directory of Open Access Journals (Sweden)

    Martin T Schultz

    Full Text Available The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: (1) collection of a filtered water sample from the source; (2) extraction of DNA from the filter and isolation in a purified elution; (3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; (4) PCR; and (5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix), assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration
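
    The multi-stage structure described above lends itself to a simple Monte Carlo illustration. The sketch below is not the authors' parameterization of the CAWS protocols; the volumes, efficiencies, and replicate counts are placeholder assumptions, and treating each PCR aliquot as an independent binomial draw from the elution is a simplification. It nonetheless shows how survey sensitivity can be computed as the probability that at least one PCR replicate in at least one water sample detects the marker, and why sample volume, sample number, and PCR replicates all raise it.

    ```python
    import numpy as np

    def survey_sensitivity(conc_per_l, n_samples=10, vol_l=2.0, capture_eff=0.5,
                           elution_ul=100.0, aliquot_ul=5.0, pcr_replicates=8,
                           per_copy_amp=0.9, n_trials=10000, seed=0):
        """Monte Carlo sketch: probability that an eDNA survey detects the marker.
        All parameter values are illustrative, not the CAWS-calibrated ones."""
        rng = np.random.default_rng(seed)
        detections = 0
        for _ in range(n_trials):
            hit = False
            for _ in range(n_samples):
                # copies captured from one filtered water sample (Poisson sampling of the source)
                captured = rng.poisson(conc_per_l * vol_l * capture_eff)
                # fraction of the purified elution drawn into each PCR aliquot
                p_aliquot = aliquot_ul / elution_ul
                for _ in range(pcr_replicates):
                    copies_in_reaction = rng.binomial(captured, p_aliquot)
                    # a reaction is positive if at least one copy amplifies
                    if copies_in_reaction and rng.random() < 1 - (1 - per_copy_amp) ** copies_in_reaction:
                        hit = True
                        break
                if hit:
                    break
            detections += hit
        return detections / n_trials

    # larger sample volume gives the biggest sensitivity gain at low marker concentration
    print(survey_sensitivity(conc_per_l=0.5), survey_sensitivity(conc_per_l=0.5, vol_l=10.0))
    ```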

  20. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de

  1. Revisiting the radionuclide atmospheric dispersion event of the Chernobyl disaster - modelling sensitivity and data assimilation

    Science.gov (United States)

    Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor

    2013-04-01

    This paper presents both a sensitivity study of the numerical model and an inverse modelling approach applied to the atmospheric dispersion that followed the Chernobyl disaster. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field that is otherwise known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations of iodine-131, caesium-137 and caesium-134 in air and deposited to the ground were considered. The impact of the implemented parameterizations of the physical processes (dry and wet depositions, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) was investigated for the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics comparing model results with field measurements of concentrations in air are clearly improved when a reconstructed source term is used. For ground-deposited concentrations, an improvement is only seen for satisfactorily modelled episodes. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also support the use of a reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process seems also to be
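
    As a concrete illustration of the inversion step, the sketch below shows the linear-Gaussian core of a variational source-term reconstruction: given a source-receptor matrix H produced by the transport model, a first-guess source term, and observation and background error covariances, the analysis minimizes the usual quadratic cost function. This is a minimal sketch only; the study's 4D-Var scheme relies on the approximate adjoint of the full chemistry-transport model, and all symbols and numbers here are illustrative assumptions.

    ```python
    import numpy as np

    def invert_source(H, y, sigma_b, B, R):
        """Minimal linear-Gaussian source-term inversion:
        minimize J(s) = (y - H s)^T R^-1 (y - H s) + (s - s_b)^T B^-1 (s - s_b).
        H: (n_obs, n_src) source-receptor sensitivities from the transport model,
        y: observed activity concentrations in air,
        sigma_b: background (first-guess) source term,
        B, R: background and observation error covariance matrices."""
        Binv = np.linalg.inv(B)
        Rinv = np.linalg.inv(R)
        A = H.T @ Rinv @ H + Binv             # Hessian of the cost function
        b = H.T @ Rinv @ y + Binv @ sigma_b   # right-hand side
        return np.linalg.solve(A, b)          # analysis (posterior mode)

    # toy example with 3 release intervals and 4 observations (all numbers made up)
    H = np.array([[1.0, 0.2, 0.0], [0.5, 1.1, 0.1], [0.0, 0.7, 0.9], [0.1, 0.3, 1.2]])
    y = np.array([4.0, 6.5, 5.2, 4.8])
    print(invert_source(H, y, sigma_b=np.ones(3), B=np.eye(3) * 4.0, R=np.eye(4) * 0.25))
    ```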

  2. Sensitivity of modeled ozone concentrations to uncertainties in biogenic emissions

    International Nuclear Information System (INIS)

    Roselle, S.J.

    1992-06-01

    The study examines the sensitivity of regional ozone (O3) modeling to uncertainties in biogenic emissions estimates. The United States Environmental Protection Agency's (EPA) Regional Oxidant Model (ROM) was used to simulate the photochemistry of the northeastern United States for the period July 2-17, 1988. An operational model evaluation showed that ROM had a tendency to underpredict O3 when observed concentrations were above 70-80 ppb and to overpredict O3 when observed values were below this level. On average, the model underpredicted daily maximum O3 by 14 ppb. Spatial patterns of O3, however, were reproduced favorably by the model. Several simulations were performed to analyze the effects of uncertainties in biogenic emissions on predicted O3 and to study the effectiveness of two strategies of controlling anthropogenic emissions for reducing high O3 concentrations. Biogenic hydrocarbon emissions were adjusted by a factor of 3 to account for the existing range of uncertainty in these emissions. The impact of biogenic emission uncertainties on O3 predictions depended upon the availability of NOx. In some extremely NOx-limited areas, increasing the amount of biogenic emissions decreased O3 concentrations. Two control strategies were compared in the simulations: (1) reduced anthropogenic hydrocarbon emissions, and (2) reduced anthropogenic hydrocarbon and NOx emissions. The simulations showed that hydrocarbon emission controls were more beneficial to the New York City area, but that combined NOx and hydrocarbon controls were more beneficial to other areas of the Northeast. Hydrocarbon controls were more effective as biogenic hydrocarbon emissions were reduced, whereas combined NOx and hydrocarbon controls were more effective as biogenic hydrocarbon emissions were increased

  3. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
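
    For readers unfamiliar with the core structure that the OFAT analysis varies, the following purely illustrative sketch shows a bare-bones frequentist mixing model: non-negative proportions summing to one are fitted so that the mixture of source tracer signatures best matches the sediment sample. The Bayesian variants compared in the study layer error models, priors, covariance terms, and time-variant distributions on top of this core; the source and mixture values below are made up.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def apportion(sources, mixture):
        """Fit source proportions p (p >= 0, sum(p) = 1) minimizing the squared
        mismatch between the mixed signature p @ sources and the observed mixture.
        sources: (n_sources, n_tracers) mean tracer signatures; mixture: (n_tracers,)."""
        n = sources.shape[0]
        rss = lambda p: np.sum((mixture - p @ sources) ** 2)
        constraints = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},)
        bounds = [(0.0, 1.0)] * n
        res = minimize(rss, x0=np.full(n, 1.0 / n), bounds=bounds, constraints=constraints)
        return res.x

    # three hypothetical sources (arable topsoil, road verge, subsurface) and two tracers
    sources = np.array([[12.0, 3.1],
                        [ 9.5, 4.0],
                        [15.2, 2.2]])
    print(apportion(sources, mixture=np.array([13.8, 2.6])))
    ```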

  4. Sensitivity of screening-level toxicity tests using soils from a former petroleum refinery

    International Nuclear Information System (INIS)

    Pauwels, S.; Bureau, J.; Roy, Y.; Allen, B.; Robidoux, P.Y.; Soucy, M.

    1995-01-01

    The authors tested five composite soil samples from a former refinery. The samples included a reference soil (Mineral Oil and Grease, MO and G < 40 ppm), thermally-treated soil, biotreated soil, and two untreated soils. They evaluated toxicity using the earthworm E. foetida, lettuce, cress, barley, Microtox, green algae, fathead minnow, and D. magna. The endpoints measured were lethality, seed germination, root elongation, growth, and bioluminescence. Toxicity, as measured by the number of positive responses, increased as follows: biotreated soil < untreated soil No. 1 < reference soil < thermally-treated soil and untreated soil No. 2. The biotreated soil generated only one positive response, whereas the thermally-treated soil and untreated soil No. 2 generated five positive responses. The most sensitive and discriminant terrestrial endpoint was lettuce root elongation which responded to untreated soil No. 1, thermally-treated soil, and reference soil. The least sensitive was barley seed germination for which no toxicity was detected. The most sensitive and discriminant aquatic endpoint was green algae growth which responded to untreated soil No. 1, thermally-treated soil, and reference soil. The least sensitive was D. magna for which no toxicity was detected. Overall, soil and aqueous extract toxicity was spotty and no consistent patterns emerged to differentiate the five soils. Biotreatment significantly reduced the effects of the contamination. Aqueous toxicity was measured in the reference soil, probably because of the presence of unknown dissolved compounds in the aqueous extract. Finally, clear differences in sensitivity existed among the test species

  5. Evaluation of the performance of the reduced local lymph node assay for skin sensitization testing.

    Science.gov (United States)

    Ezendam, Janine; Muller, Andre; Hakkert, Betty C; van Loveren, Henk

    2013-06-01

    The local lymph node assay (LLNA) is the preferred method for classification of sensitizers within REACH. To reduce the number of mice needed to identify sensitizers, the reduced LLNA was proposed, which uses only the high dose group of the LLNA. To evaluate the performance of this method for classification, LLNA data from REACH registrations were used, and classification based on all dose groups was compared to classification based on the high dose group. We confirmed previous examinations of the reduced LLNA showing that this method is less sensitive than the full LLNA. The reduced LLNA misclassified 3.3% of the sensitizers identified in the LLNA; misclassification occurred in all potency classes, and there was no clear association with irritant properties. It is therefore not possible to predict beforehand which substances might be misclassified. Another limitation of the reduced LLNA is that skin sensitizing potency cannot be assessed. For these reasons, it is not recommended to use the reduced LLNA as a stand-alone assay for skin sensitization testing within REACH. In the future, the reduced LLNA might be of added value in a weight of evidence approach to confirm negative results obtained with non-animal approaches. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  7. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). In this study we are particularly interested in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters and to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
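
    The pick-and-freeze Monte Carlo estimators that this family of methods builds on can be sketched in a few lines. The snippet below is not the authors' improved estimator or its error-control scheme; it is the common Saltelli/Jansen form of the first-order and total Sobol indices, applied to a toy non-linear function with independent uniform inputs.

    ```python
    import numpy as np

    def sobol_indices(model, n, dim, seed=0):
        """Pick-and-freeze estimates of first-order (S1) and total (ST) Sobol indices
        for a vectorized model f: (n, dim) -> (n,). Cost: n * (dim + 2) model runs."""
        rng = np.random.default_rng(seed)
        A, B = rng.random((n, dim)), rng.random((n, dim))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]), ddof=1)
        S1, ST = np.empty(dim), np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                               # resample only input i
            fABi = model(ABi)
            S1[i] = np.mean(fB * (fABi - fA)) / var           # Saltelli-type estimator
            ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var     # Jansen-type estimator
        return S1, ST

    # toy Ishigami-like test function, purely illustrative
    def f(X):
        x = 2.0 * np.pi * (X - 0.5)
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    print(sobol_indices(f, n=20000, dim=3))
    ```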

  8. Four nondestructive electrochemical tests for detecting sensitization in type 304 and 304L stainless steels

    International Nuclear Information System (INIS)

    Majidi, A.P.; Streicher, A.

    1986-01-01

    Three different electrochemical reactivation tests are compared with etch structures produced in the electrolytic oxalic acid etch test. These nondestructive tests are needed to evaluate welded stainless steel pipes and other plant equipment for susceptibility to intergranular attack. Sensitization associated with precipitates of chromium carbides at grain boundaries can make these materials subject to intergranular attack in acids and, in particular, to intergranular stress corrosion cracking in high-temperature (289 °C) water in boiling water nuclear reactor power plants. In the first of the two older reactivation tests, sensitization is detected by the electrical charge generated during reactivation. In the second, it is measured by the ratio of maximum currents generated by a prior anodic loop and the reactivation loop. A third, simpler reactivation method based on a measurement of the maximum current generated during reactivation is proposed. If the objective of the field tests, which are to be carried out with portable equipment, is to distinguish between nonsensitized and sensitized material, this can be accomplished most simply, most rapidly, and at lowest cost by an evaluation of oxalic acid etch structures

  9. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and 95% confidence level. Sample sizes at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. It can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard results have dichotomous categories.
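
    A minimal sketch of the calculation such a nomogram encodes (the standard formula from the literature that the abstract refers to) is given below: the total number of subjects needed so that the anticipated sensitivity or specificity is estimated within a chosen absolute precision at a chosen confidence level, inflated by prevalence because only diseased subjects contribute to sensitivity and only non-diseased subjects to specificity. The example values are illustrative.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(sens, prevalence, precision, conf=0.95):
        """Total subjects needed: n = Z^2 * SN * (1 - SN) / (d^2 * prevalence)."""
        z = norm.ppf(1 - (1 - conf) / 2)
        return ceil(z ** 2 * sens * (1 - sens) / (precision ** 2 * prevalence))

    def n_for_specificity(spec, prevalence, precision, conf=0.95):
        """Same formula applied to the disease-negative fraction (1 - prevalence)."""
        z = norm.ppf(1 - (1 - conf) / 2)
        return ceil(z ** 2 * spec * (1 - spec) / (precision ** 2 * (1 - prevalence)))

    # anticipated sensitivity 0.80, prevalence 0.30, absolute precision 0.05, 95% confidence
    print(n_for_sensitivity(0.80, 0.30, 0.05))   # -> 820
    # the 0.70 and 1.75 multipliers quoted above follow from (1.645/1.96)^2 and (2.576/1.96)^2
    ```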

  10. A new framework for the interpretation of IgE sensitization tests

    DEFF Research Database (Denmark)

    Roberts, G; Ollert, M; Aalberse, R.

    2016-01-01

    tests to make a definitive diagnosis; these are often expensive and potentially associated with severe reactions. The likelihood of clinical allergy can be semi-quantified from an IgE sensitization test results. This relationship varies though according to the patients' age, ethnicity, nature...... of the putative allergic reaction and coexisting clinical diseases such as eczema. The likelihood of clinical allergy can be more precisely estimated from an IgE sensitization test result, by taking into account the patient's presenting features (pretest probability). The presence of each of these patient...... pretest probabilities for diverse setting, regions and allergens. Also, cofactors, such as exercise, may be necessary for exposure to an allergen to result in an allergic reaction in specific IgE-positive patients. The diagnosis of IgE-mediated allergy is now being aided by the introduction of allergen...

  11. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have...... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases...... suggest that our method provides a sound foundation for rapid development of high quality system models....

  12. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach for sensitivity and uncertainty assessment for waste management studies has been implemented. First general contribution analysis is done through a regular interpretation of inventory and impact...

  13. Modelling the impact of Water Sensitive Urban Design technologies on the urban water cycle

    DEFF Research Database (Denmark)

    Locatelli, Luca

    Alternative stormwater management approaches for urban developments, also called Water Sensitive Urban Design (WSUD), are increasingly being adopted with the aims of providing flood control, flow management, water quality improvements and opportunities to harvest stormwater for non-potable uses....... To model the interaction of infiltration based WSUDs with groundwater. 4. To assess a new combination of different WSUD techniques for improved stormwater management. 5. To model the impact of a widespread implementation of multiple soakaway systems at the catchment scale. 6. Test the models by simulating...... the hydrological performance of single devices relevant for urban drainage applications. Moreover, the coupling of soakaway and detention storages is also modeled to analyze the benefits of combining different local stormwater management systems. These models are then integrated into urban drainage network models...

  14. Biodegradable Polymers Induce CD54 on THP-1 Cells in Skin Sensitization Test.

    Science.gov (United States)

    Jung, Yeon Suk; Kato, Reiko; Tsuchiya, Toshie

    2011-01-01

    Currently, nonanimal methods of skin sensitization testing for various chemicals, biodegradable polymers, and biomaterials are being developed in the hope of eliminating the use of animals. The human cell line activation test (h-CLAT) is a skin sensitization assessment that mimics the functions of dendritic cells (DCs). DCs are specialized antigen-presenting cells, and they interact with T cells and B cells to initiate immune responses. Phenotypic changes in DCs, such as the production of CD86 and CD54 and internalization of MHC class II molecules, have become focal points of the skin sensitization test. In this study, we used h-CLAT to assess the effects of biodegradable polymers. The results showed that several biodegradable polymers increased the expression of CD54, and the relative skin sensitizing abilities of biodegradable polymers were PLLG (75 : 25) < PLLC (40 : 60) < PLGA (50 : 50) < PCG (50 : 50). These results may contribute to the creation of new guidelines for the use of biodegradable polymers in scaffolds or allergenic hazards.

  15. Biodegradable Polymers Induce CD54 on THP-1 Cells in Skin Sensitization Test

    Directory of Open Access Journals (Sweden)

    Yeon Suk Jung

    2011-01-01

    Full Text Available Currently, nonanimal methods of skin sensitization testing for various chemicals, biodegradable polymers, and biomaterials are being developed in the hope of eliminating the use of animals. The human cell line activation test (h-CLAT) is a skin sensitization assessment that mimics the functions of dendritic cells (DCs). DCs are specialized antigen-presenting cells, and they interact with T cells and B cells to initiate immune responses. Phenotypic changes in DCs, such as the production of CD86 and CD54 and internalization of MHC class II molecules, have become focal points of the skin sensitization test. In this study, we used h-CLAT to assess the effects of biodegradable polymers. The results showed that several biodegradable polymers increased the expression of CD54, and the relative skin sensitizing abilities of biodegradable polymers were PLLG (75 : 25) < PLLC (40 : 60) < PLGA (50 : 50) < PCG (50 : 50). These results may contribute to the creation of new guidelines for the use of biodegradable polymers in scaffolds or allergenic hazards.

  16. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
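
    A simplified sketch of the bootstrapping idea described above (not a reimplementation of the bootLR package) is shown below: locate the lowest population sensitivity whose median sample sensitivity is still 100%, then bootstrap the negative likelihood ratio, (1 - sensitivity)/specificity, from binomial resamples. With 100 diseased subjects this lowest sensitivity is 0.5^(1/100) ≈ 99.31%, matching the value quoted in the abstract; the example numbers are otherwise illustrative.

    ```python
    import numpy as np

    def neg_lr_ci(n_disease, n_healthy, spec_obs, n_boot=20000, seed=0):
        """Bootstrap 95% CI of the negative LR when all diseased subjects test positive."""
        rng = np.random.default_rng(seed)
        # lowest p with the median of Binomial(n_disease, p) equal to n_disease:
        # P(X = n_disease) = p**n_disease >= 0.5  ->  p = 0.5 ** (1 / n_disease)
        p_sens = 0.5 ** (1.0 / n_disease)
        sens_bs = rng.binomial(n_disease, p_sens, n_boot) / n_disease
        spec_bs = rng.binomial(n_healthy, spec_obs, n_boot) / n_healthy
        spec_bs = np.clip(spec_bs, 1e-12, None)        # guard against division by zero
        neg_lr = (1.0 - sens_bs) / spec_bs
        return np.percentile(neg_lr, [2.5, 97.5])

    # 100 diseased subjects (all test positive), 100 healthy subjects, observed specificity 60%
    print(neg_lr_ci(100, 100, 0.60))
    ```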

  17. Is there a risk of active sensitization to PPD by patch testing the general population?

    Science.gov (United States)

    Thyssen, Jacob Pontoppidan; Menné, Torkil; Nielsen, Niels Henrik; Linneberg, Allan

    2007-08-01

    Para-phenylenediamine (PPD), a constituent of permanent hair dyes, may cause contact allergy in exposed individuals. It has previously been questioned whether patch testing with PPD in population-based epidemiological studies is entirely safe. The Glostrup allergy studies patch tested the same cohort twice. In 1990, 567 persons were patch tested and only one person had a (+) positive reaction to PPD. In 1998, 540 persons were re-invited for a new patch test and 365 (participation rate 68%) were re-tested. There were no positive reactions to PPD. These studies indicate that patch testing with PPD in individuals with no previous positive reactions to PPD or with only one previous positive reaction does not cause active sensitization and can be performed with minimal risk.

  18. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    Science.gov (United States)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
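
    The GLUE procedure referred to above can be summarized in a few lines: sample parameter sets from prior ranges, score each simulation against the observations with an informal likelihood, retain the "behavioural" sets above a threshold, and use them to bound the predicted fluxes. The sketch below is generic and illustrative; it does not reproduce CSLM, PSUADE, or the study's likelihood choice, and for brevity it uses unweighted percentiles where GLUE proper would use likelihood-weighted quantiles.

    ```python
    import numpy as np

    def glue(model, priors, obs, n_samples=5000, threshold=0.5, seed=0):
        """Minimal GLUE sketch. model: dict of parameters -> simulated series;
        priors: {name: (low, high)} uniform ranges; obs: observed series."""
        rng = np.random.default_rng(seed)
        names = list(priors)
        params = {k: rng.uniform(lo, hi, n_samples) for k, (lo, hi) in priors.items()}
        sims = np.array([model({k: params[k][i] for k in names}) for i in range(n_samples)])
        # Nash-Sutcliffe-type informal likelihood for each parameter set
        nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
        keep = nse > threshold                          # behavioural sets
        weights = nse[keep] / nse[keep].sum()
        lower, upper = (np.percentile(sims[keep], q, axis=0) for q in (5, 95))
        return {k: params[k][keep] for k in names}, weights, (lower, upper)

    # toy usage: a fake "lake model" mapping two parameters to four daily flux values
    obs = np.array([80.0, 95.0, 110.0, 90.0])
    fake_model = lambda p: p["kd"] * np.array([100.0, 120.0, 140.0, 115.0]) + p["c"]
    behavioural, w, (lo, hi) = glue(fake_model, {"kd": (0.5, 1.2), "c": (-20.0, 20.0)}, obs)
    print(len(behavioural["kd"]), lo, hi)
    ```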

  19. Sensitive technique for detecting outer defect on tube with remote field eddy current testing

    International Nuclear Information System (INIS)

    Kobayashi, Noriyasu; Nagai, Satoshi; Ochiai, Makoto; Jimbo, Noboru; Komai, Masafumi

    2008-01-01

    For remote field eddy current testing, we proposed a method of enhancing the magnetic flux density in the vicinity of the exciter coil by controlling the magnetic flux direction, thereby increasing the sensitivity for detecting outer defects on a tube, and implemented the method with a flux guide made of a magnetic material. The optimum structural shape of the flux guide was designed by magnetic field analysis. In the experiment with the flux guide applied, the magnetic flux density increased by 59% and the artificial defect detection signal became clear. We confirmed that the proposed method is effective in achieving high sensitivity. (author)

  20. Gender differences in emotion perception and self-reported emotional intelligence: A test of the emotion sensitivity hypothesis.

    Science.gov (United States)

    Fischer, Agneta H; Kret, Mariska E; Broekens, Joost

    2018-01-01

    Previous meta-analyses and reviews on gender differences in emotion recognition have shown a small to moderate female advantage. However, inconsistent evidence from recent studies has raised questions regarding the implications of different methodologies, stimuli, and samples. In the present research, based on a community sample of more than 5000 participants, we tested the emotional sensitivity hypothesis, which states that women are more sensitive to subtle (i.e. low-intensity or ambiguous) emotion cues. In addition, we included a self-report emotional intelligence test in order to examine any discrepancy between self-perceptions and actual performance for both men and women. We used a wide range of stimuli and models, displaying six different emotions at two different intensity levels. In order to better tap sensitivity for subtle emotion cues, we did not use a forced choice format, but rather intensity measures of different emotions. We found no support for the emotional sensitivity account, as both genders rated the target emotions as similarly intense at both levels of stimulus intensity. Men, however, perceived non-target emotions to be present more strongly than women did. In addition, we found that men's lower scores on self-reported EI were not related to their actual perception of target emotions, but they were related to their perception of non-target emotions.

  1. Gender differences in emotion perception and self-reported emotional intelligence: A test of the emotion sensitivity hypothesis

    Science.gov (United States)

    Kret, Mariska E.; Broekens, Joost

    2018-01-01

    Previous meta-analyses and reviews on gender differences in emotion recognition have shown a small to moderate female advantage. However, inconsistent evidence from recent studies has raised questions regarding the implications of different methodologies, stimuli, and samples. In the present research, based on a community sample of more than 5000 participants, we tested the emotional sensitivity hypothesis, which states that women are more sensitive to subtle (i.e. low-intensity or ambiguous) emotion cues. In addition, we included a self-report emotional intelligence test in order to examine any discrepancy between self-perceptions and actual performance for both men and women. We used a wide range of stimuli and models, displaying six different emotions at two different intensity levels. In order to better tap sensitivity for subtle emotion cues, we did not use a forced choice format, but rather intensity measures of different emotions. We found no support for the emotional sensitivity account, as both genders rated the target emotions as similarly intense at both levels of stimulus intensity. Men, however, perceived non-target emotions to be present more strongly than women did. In addition, we found that men's lower scores on self-reported EI were not related to their actual perception of target emotions, but they were related to their perception of non-target emotions. PMID:29370198

  2. An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.

    Science.gov (United States)

    Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C

    2013-08-01

    To simulate the consequences of management in dairy herds, the use of individual-based herd models is very useful and has become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built, mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions of reproductive performance, particularly concerning calving-to-first-ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phase), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data from the database used to build the model and from the bibliography to validate the model. Despite comprising a whole series of reproductive steps, the model made it possible to simulate realistic global reproduction outputs. It was able to simulate well the overall reproductive performance observed in farms in terms of both success rate (recalving rate) and reproduction delays (calving interval). This model is intended to be integrated into herd simulation models to test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.

  3. Contrast sensitivity measured by two different test methods in healthy, young adults with normal visual acuity.

    Science.gov (United States)

    Koefoed, Vilhelm F; Baste, Valborg; Roumes, Corinne; Høvding, Gunnar

    2015-03-01

    This study reports contrast sensitivity (CS) reference values obtained by two different test methods in a strictly selected population of healthy, young adults with normal uncorrected visual acuity. Based on these results, the index of contrast sensitivity (ICS) is calculated, aiming to establish ICS reference values for this population and to evaluate the possible usefulness of ICS as a tool to compare the degree of agreement between different CS test methods. Military recruits with best eye uncorrected visual acuity 0.00 LogMAR or better, normal colour vision and age 18-25 years were included in a study to record contrast sensitivity using Optec 6500 (FACT) at spatial frequencies of 1.5, 3, 6, 12 and 18 cpd in photopic and mesopic light and CSV-1000E at spatial frequencies of 3, 6, 12 and 18 cpd in photopic light. Index of contrast sensitivity was calculated based on data from the three tests, and the Bland-Altman technique was used to analyse the agreement between ICS obtained by the different test methods. A total of 180 recruits were included. Contrast sensitivity frequency data for all tests were highly skewed with a marked ceiling effect for the photopic tests. The median ICS for Optec 6500 at 85 cd/m2 was -0.15 (95% percentile 0.45), compared with -0.00 (95% percentile 1.62) for Optec at 3 cd/m2 and 0.30 (95% percentile 1.20) for CSV-1000E. The mean difference between ICSFACT 85 and ICSCSV was -0.43 (95% CI -0.56 to -0.30, p<0.00) with limits of agreement (LoA) within -2.10 and 1.22. The regression line of the difference on the average was near zero (R2=0.03). The results provide reference CS and ICS values in a young, adult population with normal visual acuity. The agreement between the photopic tests indicated that they may be used interchangeably. There was little agreement between the mesopic and photopic tests. The mesopic test seemed best suited to differentiate between candidates and may therefore possibly be useful for medical selection purposes.
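
    The Bland-Altman agreement analysis used above is straightforward to reproduce; a minimal sketch with invented ICS values is given below (bias = mean of the paired differences, 95% limits of agreement = bias ± 1.96 × SD of the differences).

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement methods."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # ICS from two tests in the same subjects (values invented for illustration)
    bias, loa = bland_altman([0.10, -0.20, 0.30, 0.00, 0.50], [0.40, 0.10, 0.60, 0.20, 0.90])
    print(bias, loa)
    ```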

  4. Test-retest reliability and sensitivity of the 20-meter walk test among patients with knee osteoarthritis.

    Science.gov (United States)

    Motyl, Jillian M; Driban, Jeffrey B; McAdams, Erica; Price, Lori Lyn; McAlindon, Timothy E

    2013-05-10

    The 20-meter walk test is a physical function measure commonly used in clinical research studies and rehabilitation clinics to measure gait speed and monitor changes in patients' physical function over time. Unfortunately, the reliability and sensitivity of this walk test are not well defined and, therefore, limit our ability to evaluate real changes in gait speed not attributable to normal variability. The aim of this study was to assess the test-retest reliability and sensitivity of the 20-meter walk test, at a self-selected pace, among patients with mild to moderate knee osteoarthritis (OA) and to suggest a standardized protocol for future test administration. This was a measurement reliability study. Fifteen consecutive people enrolled in a randomized controlled trial of intra-articular corticosteroid injections for knee OA participated in this study. All participants completed 4 trials on 2 separate days, 7 to 21 days apart (8 trials total). Each day was divided into 2 sessions, which each involved 2 walking trials. We compared walk times between trials with Wilcoxon signed-rank tests. Similar analyses compared average walk times between sessions. To confirm these analyses, we also calculated Spearman correlation coefficients to assess the relationship between sessions. Finally, smallest detectable differences (SDD) were calculated to estimate the sensitivity of the 20-meter walk test. Wilcoxon signed-rank tests between trials within the same session demonstrated that trials in session 1 were significantly different, whereas in the subsequent 3 sessions the median differences between trials were not significantly different. Therefore, the first session of each day was considered a practice session, and the SDD between the second session of each day were calculated. SDD was -1.59 seconds (walking slower) and 0.15 seconds (walking faster). Practice trials and a standardized protocol should be used in administration of the 20-meter walk test. Changes in walk time

  5. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley Demonstration Project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa and redwood filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post test deformation measurements, and the general structural response of the system

  6. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  7. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  8. Relevance of Cat and Dog Sensitization by Skin Prick Testing in Childhood Eczema and Asthma.

    Science.gov (United States)

    Hon, Kam Lun; Tsang, Kathy Yin Ching; Pong, Nga Hin Henry; Leung, Ting Fan

    2017-01-01

    Household animal dander has been implicated as an aeroallergen in childhood atopic diseases. Many parents seek healthcare advice on whether household pet keeping may be detrimental in atopic eczema (AE), allergic rhinitis and asthma. We investigated if skin sensitization by cat/dog dander was associated with disease severity and quality of life in children with AE. Demographics, skin prick test (SPT) results, disease severity (Nottingham eczema severity score, NESS), Children's Dermatology Life Quality Index (CDLQI), blood IgE and eosinophil counts of a cohort of AE patients were reviewed. 325 AE patients followed at a pediatric dermatology clinic were evaluated. Personal history of asthma was lowest (20%) in the dog-dander-positive group but highest (61%) in the both-cat-and-dog-dander-positive group (p=0.007). Binomial logistic regression ascertained that cat-dander sensitization was associated with increasing age (adjusted odds ratio [aOR], 1.056; 95% Confidence Interval [CI], 1.006 to 1.109; p=0.029), dust-mite sensitization (aOR, 4.625; 95% CI, 1.444 to 14.815; p=0.010), food-allergen sensitization (aOR, 2.330; 95% CI, 1.259 to 4.310; p=0.007) and keeping-cat-ever (aOR, 7.325; 95% CI, 1.193 to 44.971; p=0.032); whereas dog-dander sensitization was associated with dust-mite sensitization (aOR, 9.091; 95% CI, 1.148 to 71.980; p=0.037), food-allergen sensitization (aOR, 3.568; 95% CI, 1.341 to 9.492; p=0.011) and keeping-dog-ever (aOR, 6.809; 95% CI, 2.179 to 21.281; p=0.001). However, neither cat nor dog sensitization was associated with asthma, allergic rhinitis, parental or sibling atopic status, disease severity or quality of life. Physicians should advise parents that there is no direct correlation between AE severity, quality of life, asthma or allergic rhinitis and cutaneous sensitization to cats or dogs. Sensitized patients, especially those with concomitant asthma and severe symptoms, may consider non-furry alternatives if they plan to have a pet. Highly sensitized
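
    The adjusted odds ratios quoted above come from binomial logistic regression; the sketch below shows the generic computation only (fit a logit model, then exponentiate the coefficients and their confidence limits). The data frame, variable names, and values are synthetic placeholders, not the study's data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "cat_dander_pos": rng.binomial(1, 0.3, n),   # outcome: positive SPT to cat dander
        "age":            rng.uniform(5, 18, n),
        "dust_mite_pos":  rng.binomial(1, 0.6, n),
        "food_pos":       rng.binomial(1, 0.4, n),
        "kept_cat_ever":  rng.binomial(1, 0.2, n),
    })

    X = sm.add_constant(df[["age", "dust_mite_pos", "food_pos", "kept_cat_ever"]])
    fit = sm.Logit(df["cat_dander_pos"], X).fit(disp=0)
    aor = np.exp(fit.params).rename("aOR")                          # adjusted odds ratios
    ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    print(pd.concat([aor, ci], axis=1))
    ```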

  9. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10^-2 to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), non-dispersive infrared (NDIR), and FTIR methods utilized for continuous on-line sample detection; light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and for studying the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well defined and validated mechanisms for the CO/H2/oxidant systems.

  10. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  11. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  12. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  13. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  14. Testing environment shape differentially modulates baseline and nicotine-induced changes in behavior: Sex differences, hypoactivity, and behavioral sensitization.

    Science.gov (United States)

    Illenberger, J M; Mactutus, C F; Booze, R M; Harrod, S B

    2018-02-01

    In those who use nicotine, the likelihood of dependence, negative health consequences, and failed treatment outcomes differ as a function of gender. Women may be more sensitive to learning processes driven by repeated nicotine exposure that influence conditioned approach and craving. Sex differences in nicotine's influence over overt behaviors (i.e. hypoactivity or behavioral sensitization) can be examined using passive drug administration models in male and female rats. Following repeated intravenous (IV) nicotine injections, behavioral sensitization is enhanced in female rats compared to males. Nonetheless, characteristics of the testing environment also mediate rodent behavior following drug administration. The current experiment used a within-subjects design to determine if nicotine-induced changes in horizontal activity, center entries, and rearing displayed by male and female rats is detected when behavior was recorded in round vs. square chambers. Behaviors were recorded from each group (males-round: n=19; males-square: n=18; females-square: n=19; and females-round: n=19) immediately following IV injection of saline, acute nicotine, and repeated nicotine (0.05mg/kg/injection). Prior to nicotine treatment, sex differences were apparent only in round chambers. Following nicotine administration, the order of magnitude for the chamber that provided enhanced detection of hypoactivity or sensitization was contingent upon both the dependent measure under examination and the animal's biological sex. As such, round and square testing chambers provide different, and sometimes contradictory, accounts of how male and female rats respond to nicotine treatment. It is possible that a central mechanism such as stress or cue sensitivity is impacted by both drug exposure and environment to drive the sex differences observed in the current experiment. Until these complex relations are better understood, experiments considering sex differences in drug responses should balance

  15. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice, the long-term variability of the modelled ice cover, and its sensitivity to some model parameters, as well as to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05 m – 0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness, when the albedo of clear sea ice and of snow is decreased by 0.05 relative to the basic variant, shows an ice thickness reduction in a range from 0.2 m up to 0.6 m, with the maximum change occurring in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there is a reduction of ice thickness of up to 1 m; however, there is also an area of some increase of the ice layer, generally in a range up to 0.2 m (Beaufort Sea). A 0.05 decrease of the sea ice snow albedo leads to a reduction of average ice thickness of approximately 0.2 m, and this value depends only slightly on the season. In the following experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m in comparison with the base variant. The analysis demonstrates that the average ice thickness decreases in a range from 0.2 m to 0.35 m, with small seasonal changes of this value. The numerical experiment results show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters

  16. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
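
    A hedged sketch of how competing hydraulic codes can be scored against binary wet/dry validation data such as an observed inundation extent raster. The fit measure F = A/(A + B + C) and the tiny arrays below are illustrative assumptions, not material from the paper; when two configurations score within the validation data error of one another, the simpler one can be preferred along the lines argued above.

      import numpy as np

      # A = cells wet in both model and observation, B = wet in model only,
      # C = wet in observation only.
      def flood_fit(modelled_wet: np.ndarray, observed_wet: np.ndarray) -> float:
          a = np.sum(modelled_wet & observed_wet)
          b = np.sum(modelled_wet & ~observed_wet)
          c = np.sum(~modelled_wet & observed_wet)
          return a / float(a + b + c)

      # Two hypothetical model configurations compared against the same observation:
      obs  = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=bool)
      run1 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=bool)  # simpler physics
      run2 = np.array([[1, 1, 1], [1, 1, 0], [0, 0, 0]], dtype=bool)  # fuller physics
      print(flood_fit(run1, obs), flood_fit(run2, obs))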

  17. Reliability and sensitivity to change of the timed standing balance test in children with down syndrome

    Directory of Open Access Journals (Sweden)

    Vencita Priyanka Aranha

    2016-01-01

    Objective: To estimate the reliability and sensitivity to change of the timed standing balance test in children with Down syndrome (DS). Methods: It was a nonblinded, comparison study with a convenience sample of subjects consisting of children with DS (n = 9) aged 8–17 years. The main outcome measure was standing balance, which was assessed using the timed standing balance test: the time for which balance could be maintained in four conditions, eyes open static, eyes closed static, eyes open dynamic, and eyes closed dynamic. Results: Relative reliability was excellent for all four conditions, with an Intraclass Correlation Coefficient (ICC) ranging from 0.91 to 0.93. The variation between repeated measurements for each condition was minimal, with a standard error of measurement (SEM) of 0.21–0.59 s, suggestive of excellent absolute reliability. The sensitivity to change, as measured by the smallest real change (SRC), was 1.27 s for eyes open static, 1.63 s for eyes closed static, 0.58 s for eyes open dynamic, and 0.61 s for eyes closed dynamic. Conclusions: The timed standing balance test is an easy-to-administer test that is sensitive to change, with strong absolute and relative reliability, an important first step in establishing its utility as a clinical balance measure in children with DS.
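
    The reliability statistics quoted above are commonly related as follows; the sketch below uses the usual definitions (SEM from the between-subject SD and the ICC, SRC as the 95% minimal detectable change) with illustrative input values, since the paper's raw standard deviations are not reproduced here.

      import math

      def sem(sd_between: float, icc: float) -> float:
          """Standard error of measurement."""
          return sd_between * math.sqrt(1.0 - icc)

      def smallest_real_change(sem_value: float, z: float = 1.96) -> float:
          """Smallest real change (minimal detectable change) at ~95% confidence."""
          return z * math.sqrt(2.0) * sem_value

      s = sem(sd_between=2.0, icc=0.92)      # assumed SD = 2.0 s, ICC = 0.92
      print(f"SEM = {s:.2f} s, SRC = {smallest_real_change(s):.2f} s")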

  18. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…

  19. Is the standard model really tested?

    International Nuclear Information System (INIS)

    Takasugi, E.

    1989-01-01

    It is discussed how the standard model is really tested. Among various tests, I concentrate on CP violation phenomena in the K and B meson systems. In particular, the recent hope of overcoming the theoretical uncertainty in the evaluation of CP violation in the K meson system is discussed. (author)

  20. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    Science.gov (United States)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

    Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long-term planning in the Great Lakes region.
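
    A minimal sketch of one conventional debiasing step of the kind compared above: a month-by-month multiplicative correction of a single hydrological component (for example overlake precipitation) against a historical reference, applied separately to each component before routing. The function and its synthetic inputs are generic assumptions, not the study's procedure or data.

      import numpy as np

      def monthly_multiplicative_debias(model_hist, obs_hist, months_hist,
                                        model_future, months_future):
          """Scale each future month by the ratio of observed to modelled historical means."""
          corrected = np.asarray(model_future, dtype=float).copy()
          for m in range(1, 13):
              ratio = obs_hist[months_hist == m].mean() / model_hist[months_hist == m].mean()
              corrected[months_future == m] *= ratio
          return corrected

      # Synthetic illustration: a model that is 20% too wet in winter months.
      rng = np.random.default_rng(0)
      months = np.tile(np.arange(1, 13), 30)                  # 30 years of monthly values
      obs    = rng.gamma(4.0, 20.0, size=months.size)         # "observed" precipitation (mm)
      mod    = obs * np.where(np.isin(months, [12, 1, 2]), 1.2, 1.0)
      future = mod[:120]                                      # pretend the first 10 years are "future"
      print(monthly_multiplicative_debias(mod, obs, months, future, months[:120])[:5])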

  1. Sensitivity of the urban airshed model to mixing height profiles

    Energy Technology Data Exchange (ETDEWEB)

    Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W. [New York State Dept. of Environmental Conservation, Albany, NY (United States)

    1994-12-31

    The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer, or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile for a high-ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations than the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to differences in the shape of the mixing height profiles and their rate of growth during the morning hours when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NOx-focused controls provide a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focused controls provide a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.

  2. Model to Test Electric Field Comparisons in a Composite Fairing Cavity

    Science.gov (United States)

    Trout, Dawn H.; Burford, Janessa

    2013-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by 3D full-wave computational electromagnetic modeling and by laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.
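
    A small sketch of the kind of cumulative-distribution-function comparison described above: empirical CDFs of field magnitudes from a full-wave model and from measurements, compared at matching probability levels. The synthetic Rayleigh-distributed samples below stand in for the actual model and test data, which are not reproduced here.

      import numpy as np

      def empirical_cdf(samples):
          x = np.sort(np.asarray(samples, dtype=float))
          p = np.arange(1, x.size + 1) / x.size
          return x, p

      model_fields = np.random.default_rng(0).rayleigh(scale=1.0, size=500)   # simulated |E| (placeholder)
      test_fields  = np.random.default_rng(1).rayleigh(scale=1.1, size=500)   # measured |E| (placeholder)

      xm, pm = empirical_cdf(model_fields)
      xt, pt = empirical_cdf(test_fields)
      # e.g. compare the two CDFs at their medians
      print(np.interp(0.5, pm, xm), np.interp(0.5, pt, xt))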

  3. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are the Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each of these is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
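
    The selection criterion described above reduces to a simple grid comparison; the sketch below counts "error cells" where the interpolated head exceeds the ground surface. The toy 2×2 grids are placeholders for the ANFIS output and the digital elevation model.

      import numpy as np

      def count_error_cells(interpolated_head: np.ndarray, soil_elevation: np.ndarray) -> int:
          # Cells where the interpolated hydraulic head lies above the ground
          # surface are physically implausible and counted as error cells.
          return int(np.sum(interpolated_head > soil_elevation))

      head = np.array([[10.2,  9.8], [11.5, 12.1]])   # interpolated hydraulic head (m)
      dem  = np.array([[12.0, 11.0], [11.0, 12.0]])   # ground surface elevation (m)
      print(count_error_cells(head, dem))             # -> 2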

  4. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest proportion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
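
    A hedged sketch of the local one-at-a-time sensitivity screen named above: each parameter is perturbed around a reference value while the others are held fixed, and the change in a model output is recorded. The toy response function, the parameter names, and the reference values are illustrative assumptions, not the coupled hydrological-geomechanical model of the study.

      def toy_slope_model(params):
          # Placeholder response surface standing in for the fraction of correctly
          # predicted landslide/non-landslide cells; not the actual model.
          k, phi, c = params["k_sat"], params["friction_angle"], params["cohesion"]
          return (0.70 + 0.05 * (phi - 30.0) / 30.0
                       - 0.08 * (c - 5.0) / 5.0
                       + 0.02 * (k - 1e-5) / 1e-5)

      reference = {"k_sat": 1e-5, "spec_storage": 1e-4, "friction_angle": 30.0, "cohesion": 5.0}
      base = toy_slope_model(reference)

      for name in reference:                     # one-at-a-time +10% perturbations
          perturbed = dict(reference)
          perturbed[name] *= 1.10
          print(f"{name:14s} +10% -> output change {toy_slope_model(perturbed) - base:+.4f}")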

  5. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    Science.gov (United States)

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The most influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the other parameters' interaction effects.
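
    A compact sketch of Morris-style elementary-effects screening, the first of the two methods named above: at each of r random base points every parameter is perturbed by a step and the magnitude of the resulting output change is averaged into mu*, which ranks parameter influence. The toy NPP function and parameter names are placeholders, not BIOME-BGC.

      import numpy as np

      rng = np.random.default_rng(42)

      def toy_npp(x):
          # x[0]: stem-to-leaf carbon allocation, x[1]: leaf C:N ratio, x[2]: other
          return 500 - 120 * x[0] - 80 * x[1] + 5 * x[2] + 60 * x[0] * x[1]

      names = ["stem:leaf C alloc", "leaf C:N", "other param"]
      r, delta = 20, 0.1
      mu_star = np.zeros(3)
      for _ in range(r):
          base = rng.uniform(0, 1, size=3)
          y0 = toy_npp(base)
          for i in range(3):
              step = base.copy()
              step[i] = min(step[i] + delta, 1.0)
              ee = (toy_npp(step) - y0) / (step[i] - base[i] + 1e-12)
              mu_star[i] += abs(ee) / r

      for n, m in zip(names, mu_star):
          print(f"{n:18s} mu* = {m:7.1f}")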

  6. Adaptive Management Plan for Sensitive Plant Species on the Nevada Test Site

    International Nuclear Information System (INIS)

    Wills, C. A.

    2001-01-01

    The Nevada Test Site supports numerous plant species considered sensitive because of their past or present status under the Endangered Species Act and with federal and state agencies. In 1998, the U.S. Department of Energy, Nevada Operations Office (DOE/NV) prepared a Resource Management Plan which commits to protecting and conserving these sensitive plant species and to minimizing cumulative impacts to them. This document presents the procedures of a long-term adaptive management plan which is meant to ensure that these goals are met. It identifies the parameters that are measured for all sensitive plant populations during long-term monitoring and the adaptive management actions which may be taken if significant threats to these populations are detected. This plan does not, however, identify the current list of sensitive plant species known to occur on the Nevada Test Site. The current species list and progress on their monitoring is reported annually by DOE/NV in the Resource Management Plan

  7. Evaluation of multiplex polymerase chain reaction as an alternative to conventional antibiotic sensitivity test

    Directory of Open Access Journals (Sweden)

    K. Rathore

    2018-04-01

    Aim: This study was designed to evaluate the potential of multiplex polymerase chain reaction (PCR) as an alternative to the conventional antibiotic sensitivity test. Materials and Methods: Isolates of Staphylococcus aureus (total = 36) from clinical cases presented to the Teaching Veterinary Clinical Complex of the College of Veterinary and Animal Sciences (CVAS), Navania, Udaipur, were characterized by morphological, cultural, and biochemical methods. The isolates were then subjected to molecular characterization by PCR targeting an S. aureus-specific sequence (107 bp). The phenotypic antibiotic sensitivity pattern was analyzed by the Kirby-Bauer disc diffusion method against 11 antibiotics commonly used in veterinary medicine in and around the Udaipur region. The genotypic antibiotic sensitivity pattern was studied against methicillin, aminoglycosides, and tetracycline by multiplex PCR targeting the genes mecA, aacA-aphD, and tetK. Results: There was 100% correlation between the phenotype and genotype of aminoglycoside resistance, more than 90% correlation for methicillin resistance, and 58.3% in the case of tetracycline resistance. Conclusion: As there is a good correlation between the phenotype and genotype of antibiotic resistance, multiplex PCR can be used as an alternative to conventional antibiotic susceptibility testing, as it can give a rapid and true prediction of the antibiotic sensitivity pattern.

  8. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

    Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally under eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit forms. • Parameter sensitivity is discussed for different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed using a one-dimensional (1D) modeling approach. The dynamic effect of double-layer capacitance on the electrochemical domain and the dynamic effect of thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time constant expressions for the different dynamic phenomena in the SOFC have been given and discussed in detail. A parameter sensitivity study has also been performed and discussed using the statistical Multi-Parameter Sensitivity Analysis (MPSA) method, in order to investigate the impact of the parameters on the modeling accuracy

  9. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  10. Sensitivity and validity of psychometric tests for assessing driving impairment: effects of sleep deprivation.

    Science.gov (United States)

    Jongen, Stefan; Perrier, Joy; Vuurman, Eric F; Ramaekers, Johannes G; Vermeeren, Annemiek

    2015-01-01

    To assess drug-induced driving impairment, initial screening is needed. However, no consensus has been reached about which initial screening tools have to be used. The present study aims to determine the ability of a battery of psychometric tests to detect performance impairing effects of clinically relevant levels of drowsiness as induced by one night of sleep deprivation. Twenty-four healthy volunteers participated in a 2-period crossover study in which the highway driving test was conducted twice: once after normal sleep and once after one night of sleep deprivation. The psychometric tests were conducted on 4 occasions: once after normal sleep (at 11 am) and three times during a single night of sleep deprivation (at 1 am, 5 am, and 11 am). On-the-road driving performance was significantly impaired after sleep deprivation, as measured by an increase in Standard Deviation of Lateral Position (SDLP) of 3.1 cm compared to performance after a normal night of sleep. At 5 am, performance in most psychometric tests showed significant impairment. As expected, largest effect sizes were found on performance in the Psychomotor Vigilance Test (PVT). Large effect sizes were also found in the Divided Attention Test (DAT), the Attention Network Test (ANT), and the test for Useful Field of View (UFOV) at 5 and 11 am during sleep deprivation. Effects of sleep deprivation on SDLP correlated significantly with performance changes in the PVT and the DAT, but not with performance changes in the UFOV. From the psychometric tests used in this study, the PVT and DAT seem most promising for initial evaluation of drug impairment based on sensitivity and correlations with driving impairment. Further studies are needed to assess the sensitivity and validity of these psychometric tests after benchmark sedative drug use.

  11. Sensitivity and validity of psychometric tests for assessing driving impairment: effects of sleep deprivation.

    Directory of Open Access Journals (Sweden)

    Stefan Jongen

    To assess drug-induced driving impairment, initial screening is needed. However, no consensus has been reached about which initial screening tools have to be used. The present study aims to determine the ability of a battery of psychometric tests to detect performance impairing effects of clinically relevant levels of drowsiness as induced by one night of sleep deprivation. Twenty-four healthy volunteers participated in a 2-period crossover study in which the highway driving test was conducted twice: once after normal sleep and once after one night of sleep deprivation. The psychometric tests were conducted on 4 occasions: once after normal sleep (at 11 am) and three times during a single night of sleep deprivation (at 1 am, 5 am, and 11 am). On-the-road driving performance was significantly impaired after sleep deprivation, as measured by an increase in Standard Deviation of Lateral Position (SDLP) of 3.1 cm compared to performance after a normal night of sleep. At 5 am, performance in most psychometric tests showed significant impairment. As expected, largest effect sizes were found on performance in the Psychomotor Vigilance Test (PVT). Large effect sizes were also found in the Divided Attention Test (DAT), the Attention Network Test (ANT), and the test for Useful Field of View (UFOV) at 5 and 11 am during sleep deprivation. Effects of sleep deprivation on SDLP correlated significantly with performance changes in the PVT and the DAT, but not with performance changes in the UFOV. From the psychometric tests used in this study, the PVT and DAT seem most promising for initial evaluation of drug impairment based on sensitivity and correlations with driving impairment. Further studies are needed to assess the sensitivity and validity of these psychometric tests after benchmark sedative drug use.

  12. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...

  13. A model to estimate insulin sensitivity in dairy cows

    OpenAIRE

    Holtenius, Paul; Holtenius, Kjell

    2007-01-01

    Impairment of the insulin regulation of energy metabolism is considered to be a key etiologic component of metabolic disturbances. Methods for studying insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an i...

  14. Touch-sensitive colour graphics enhance monitoring of loss-of-coolant accident tests

    International Nuclear Information System (INIS)

    Snedden, M.D.; Mead, G.L.

    1982-01-01

    A stand-alone computer-based system with an intelligent colour terminal is described for monitoring parameters during loss-of-coolant accident tests. Colour graphic displays and touch-sensitive control have been combined for effective operator interaction. Data collected by the host MODCOMP II minicomputer are dynamically updated on colour pictures generated by the terminal. Experimenters select system functions by touching simulated switches on a transparent touch-sensitive overlay, mounted directly over the face of the colour screen, eliminating the need for a keyboard. Switch labels and colours are changed on the screen by the terminal software as different functions are selected. Interaction is self-prompting and can be learned quickly. System operation for a complete set of 20 tests has demonstrated the convenience of interactive touch-sensitive colour graphics

  15. Analysis of clonogenic human brain tumour cells: preliminary results of tumour sensitivity testing with BCNU

    Energy Technology Data Exchange (ETDEWEB)

    Rosenblum, M L; Dougherty, D A; Deen, D F; Hoshino, T; Wilson, C B [California Univ., San Francisco (USA). Dept. of Neurology

    1980-04-01

    Biopsies from 6 patients with glioblastoma multiforme were disaggregated and single cells were treated in vitro with various concentrations of 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) and plated for cell survival. One patient's cells were sensitive to BCNU in vitro; after a single dose of BCNU her brain scan reverted to normal and she was clinically well. Five tumours demonstrated resistance in vitro. Three of these tumours progressed during the first course of chemotherapy with a nitrosourea and the patients died at 2½, 4 and 8½ months after operation. Two patients who showed dramatic responses to radiation therapy were considered unchanged after the first course of nitrosourea therapy (although one demonstrated tumour enlargement on brain scan). The correlation of in vitro testing of tumour cell sensitivity with actual patient response is encouraging enough to warrant further work to determine whether such tests should weigh in decisions on patient therapy.

  16. Validation of different measures of insulin sensitivity of glucose metabolism in dairy cows using the hyperinsulinemic euglycemic clamp test as the gold standard.

    Science.gov (United States)

    De Koster, J; Hostens, M; Hermans, K; Van den Broeck, W; Opsomer, G

    2016-10-01

    The aim of the present research was to compare different measures of insulin sensitivity in dairy cows at the end of the dry period. To do so, 10 clinically healthy dairy cows with a varying body condition score were selected. By performing hyperinsulinemic euglycemic clamp (HEC) tests, we previously demonstrated a negative association between the insulin sensitivity and insulin responsiveness of glucose metabolism and the body condition score of these animals. In the same animals, other measures of insulin sensitivity were determined and the correlation with the HEC test, which is considered the gold standard, was calculated. Measures derived from the intravenous glucose tolerance test (IVGTT) are based on the disappearance of glucose after an intravenous glucose bolus. Glucose concentrations during the IVGTT were used to calculate the area under the curve of glucose and the clearance rate of glucose. In addition, glucose and insulin data from the IVGTT were fitted to the minimal model to derive the insulin sensitivity parameter, Si. Based on blood samples taken before the start of the IVGTT, basal concentrations of glucose, insulin, NEFA, and β-hydroxybutyrate were determined and used to calculate surrogate indices for insulin sensitivity, such as the homeostasis model of insulin resistance, the quantitative insulin sensitivity check index, the revised quantitative insulin sensitivity check index and the revised quantitative insulin sensitivity check index including β-hydroxybutyrate. Correlation analysis revealed no association between the results obtained by the HEC test and any of the surrogate indices for insulin sensitivity. For the measures derived from the IVGTT, the area under the curve for the first 60 min of the test and the Si derived from the minimal model demonstrated good correlation with the gold standard. Copyright © 2016 Elsevier Inc. All rights reserved.
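
    The surrogate indices named above have widely published closed-form definitions; the sketch below uses those common formulas (note the unit conventions) with illustrative basal values. Both the formulas and the example numbers are assumptions drawn from the general literature, not values taken from this paper.

      import math

      # Unit conventions assumed here: glucose in mmol/L for HOMA-IR and in mg/dL
      # for the QUICKI family; insulin in uU/mL; NEFA and BHB in mmol/L.
      def homa_ir(glucose_mmol_l, insulin_uU_ml):
          return glucose_mmol_l * insulin_uU_ml / 22.5

      def quicki(glucose_mg_dl, insulin_uU_ml):
          return 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uU_ml))

      def rquicki(glucose_mg_dl, insulin_uU_ml, nefa_mmol_l):
          return 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uU_ml)
                        + math.log10(nefa_mmol_l))

      def rquicki_bhb(glucose_mg_dl, insulin_uU_ml, nefa_mmol_l, bhb_mmol_l):
          return 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uU_ml)
                        + math.log10(nefa_mmol_l) + math.log10(bhb_mmol_l))

      # Illustrative basal values for a dry cow (assumed, not measured):
      print(homa_ir(3.5, 12.0), quicki(63.0, 12.0), rquicki(63.0, 12.0, 0.3))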

  17. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  18. In silico modeling predicts drug sensitivity of patient-derived cancer cells.

    Science.gov (United States)

    Pingle, Sandeep C; Sultana, Zeba; Pastorino, Sandra; Jiang, Pengfei; Mukthavaram, Rajesh; Chao, Ying; Bharati, Ila Sri; Nomura, Natsuko; Makale, Milan; Abbasi, Taher; Kapoor, Shweta; Kumar, Ansu; Usmani, Shahabuddin; Agrawal, Ashish; Vali, Shireen; Kesari, Santosh

    2014-05-21

    Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling ("omics") data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.

  19. Sensitivity of the polypropylene to the strain rate: experiments and modeling

    International Nuclear Information System (INIS)

    Abdul-Latif, A.; Aboura, Z.; Mosleh, L.

    2002-01-01

    The main goal of this work is first to evaluate experimentally the strain rate dependent deformation of polypropylene under tensile load, and secondly to propose a model capable of appropriately describing the mechanical behavior of this material and especially its sensitivity to the strain rate. Several experimental tensile tests are performed at different quasi-static strain rates in the range of 10⁻⁵ s⁻¹ to 10⁻¹ s⁻¹. In addition, some relaxation tests are conducted, introducing strain rate jumps during testing. Within the framework of elastoviscoplasticity, a phenomenological model is developed for describing the non-linear mechanical behavior of the material under uniaxial loading paths. Under the small strain assumption, the sensitivity of polypropylene to the strain rate, being of particular interest in this work, is accordingly taken into account. Since this model is based on internal state variables, we thus assume that the material's sensitivity to the strain rate is governed by the kinematic hardening variable, notably its modulus, and by the accumulated viscoplastic strain. As far as the elastic behavior is concerned, it is noticed that such behavior is only slightly influenced by the employed strain rate range. For this reason, the elastic behavior is determined classically, i.e. without coupling with the strain rate dependent deformation. It is obvious that the inelastic behavior of the material is thoroughly dictated by the applied strain rate. Hence, the model parameters are calibrated using several experimental databases for different strain rates (10⁻⁵ s⁻¹ to 10⁻¹ s⁻¹). Among these experimental results, some experiments related to the relaxation phenomenon and to strain rate jumps during testing (increasing or decreasing) are also used in order to further refine the identification of the model parameters. To validate the calibrated model parameters, simulation tests are achieved
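
    For illustration of the strain rate sensitivity discussed above, the sketch below fits a simple power law, sigma = sigma_ref * (rate/rate_ref)^m, over the quasi-static range 10⁻⁵ to 10⁻¹ s⁻¹. This is not the paper's internal-variable elastoviscoplastic model; the flow-stress values are assumed, typical-order numbers for polypropylene.

      import numpy as np

      rates  = np.array([1e-5, 1e-4, 1e-3, 1e-2, 1e-1])      # strain rates (1/s)
      stress = np.array([26.0, 28.1, 30.4, 32.8, 35.5])      # assumed flow stress (MPa)

      # Linear fit in log-log space gives the rate-sensitivity exponent m.
      m, log_sigma_ref = np.polyfit(np.log(rates / 1e-3), np.log(stress), 1)
      print(f"rate-sensitivity exponent m ~ {m:.3f}, "
            f"reference stress ~ {np.exp(log_sigma_ref):.1f} MPa at 1e-3 1/s")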

  20. Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited

    Science.gov (United States)

    Ghergut, Julia; Behrens, Horst; Sauter, Martin

    2017-04-01

    Single-well injection-withdrawal (SWIW) or 'push-then-pull' tracer methods appear attractive for a number of reasons: less uncertainty on design and dimensioning, and lower tracer quantities required than for inter-well tests; stronger tracer signals, enabling easier and cheaper metering, and shorter metering duration required, reaching higher tracer mass recovery than in inter-well tests; and, last but not least, no need for a second well. However, SWIW tracer signal inversion faces a major issue: the 'push-then-pull' design weakens the correlation between tracer residence times and georeservoir transport parameters, inducing insensitivity or ambiguity of tracer signal inversion with respect to some of those georeservoir parameters that are supposed to be the target of tracer tests par excellence: pore velocity, transport-effective porosity, fracture or fissure aperture and spacing or density (where applicable), and fluid/solid or fluid/fluid phase interface density. Hydraulic methods cannot measure the transport-effective values of such parameters, because pressure signals correlate neither with fluid motion, nor with material fluxes through (fluid-rock, or fluid-fluid) phase interfaces. The notorious ambiguity impeding parameter inversion from SWIW test signals has nourished several 'modeling attitudes': (i) regard dispersion as the key process encompassing whatever superposition of underlying transport phenomena, and seek a statistical description of flow-path collectives enabling dispersion to be characterized independently of any other transport parameter, as proposed by Gouze et al. (2008), with Hansen et al. (2016) offering a comprehensive analysis of the various ways dispersion model assumptions interfere with parameter inversion from SWIW tests; (ii) regard diffusion as the key process, and seek a large-time, asymptotically advection-independent regime in the measured tracer signals (Haggerty et al. 2001), enabling a dispersion-independent characterization of multiple

  1. Food Sensitivity in Children with Acute Urticaria in Skin Prick Test: Single Center Experience

    Directory of Open Access Journals (Sweden)

    Hatice Eke Gungor

    2015-11-01

    Aim: Families of children with acute urticaria often think that there is a food allergy behind the urticaria and insist on skin tests. In this study, it was aimed to determine whether skin prick tests are necessary in cases presenting with acute urticaria in whom other causes of acute urticaria are excluded. Material and Method: A test panel involving cow milk, egg white, wheat, hazelnut, peanut, soybean, walnut, sesame, and tuna fish antigens was applied to children presenting with acute urticaria between 1 August 2013 and 1 August 2014, in whom other causes of acute urticaria were excluded and a suspected food allergy was reported by the parents. Results: Overall, 574 children aged 1-14 years were included in the study. Sensitization against at least one food antigen was detected in 22.3% (128/574) of the patients. This rate was 31.9% among those younger than 3 years and 19.3% in those older than 3 years. Overall, sensitization rates against the food allergens in the panel were as follows: egg white, 7.3%; wheat, 3.3%; cow milk, 2.7%; sesame, 2.8%; hazelnut, 2.4%; soybean, 2.3%; peanut, 1.9%; walnut, 1.6%; tuna fish, 1.6%. In general, the patients' histories were not compatible with the food sensitization detected. Discussion: Sensitization to food allergens is infrequent in children presenting with acute urticaria, particularly among those older than 3 years, despite the impressions of parents, and skin prick tests seem to be unnecessary unless a strongly suggestive history is present.

  2. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimates of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that, in addition to vapour pressure, the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes, degradation and uptake into the plant, become more important. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Orientation sensitive deformation in Zr alloys: experimental and modeling studies

    International Nuclear Information System (INIS)

    Srivastava, D.; Keskar, N.; Manikrishna, K.V.; Dey, G.K.; Jha, S.K.; Saibaba, N.

    2016-01-01

    Zirconium alloys are used for fuel cladding and other structural components in pressurised heavy water nuclear reactors (PHWRs). Currently there is considerable interest in developing alloys for structural components for higher temperature reactor operation. There is also a need to develop cladding material with better corrosion resistance and mechanical properties for higher and extended burn-up applications. The performance of the cladding material is primarily influenced by microstructural features such as the constituent phases and their morphology, precipitate characteristics, the nature of defects, etc. Therefore, the microstructure is tailored to the performance requirements through controlled additions of alloying elements and thermo-mechanical treatments. In order to obtain the desired microstructure, it is important to know the deformation behaviour of the material. Orientation-dependent deformation behaviour was studied in Zr using a combination of experimental and modeling (both discrete and atomistic dislocation dynamics) methods. Under conditions of plane strain deformation, it was observed that single-phase Zr showed a significant degree of deformation heterogeneity depending on local orientations. Discrete dislocation dynamics simulations incorporating multiple slip systems captured the orientation-sensitive deformation. Molecular dynamics simulations, on the other hand, brought out the fundamental differences between crystallographic orientations in determining the nucleation stress for dislocations. The deformed structure has been characterized using X-ray, electron and neutron diffraction techniques. The various operating deformation mechanisms will be discussed in this presentation. (author)

  4. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    Energy Technology Data Exchange (ETDEWEB)

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  5. A rapid, sensitive and reliable diagnostic test for scrub typhus in China

    Directory of Open Access Journals (Sweden)

    Zhang Lijuan

    2011-01-01

    Purpose: To evaluate the performance of a gold conjugate-based rapid diagnostic test (RDT) for the detection of IgM and IgG antibodies to Orientia tsutsugamushi (Ot). Materials and Methods: The RDT, which employs a mixture of recombinant 56-kDa proteins of O. tsutsugamushi, and the mIFA assay were performed on 33 patients from Fujian and Yunnan provinces, on 94 positive sera from the convalescent stage of patients with scrub typhus (36 from Hainan province and 58 from Jiangsu province), and on 82 negative sera from healthy farmers from Anhui province and Beijing City, collected in 2009. The RDT and mIFA assay were compared using the χ² test, and a P value of ≤0.05 was considered significant. Results: Among the 94 positive sera from the convalescent stage of the illness and the 82 sera from control farmers, the specificity of the RDT was 100% for both the IgM and IgG tests. In the 33 cases with scrub typhus, 5 cases were detected positive earlier by the RDT than by mIFA for the IgM test, and 2 cases for the IgG test. The sensitivities of the RDT were 93.9% and 90.9% for IgM and IgG, respectively; considering IgM and IgG together, the sensitivity was 100%. The geometric mean titres (GMT) of the IFA and RDT assays in diluted sera from confirmed cases were 1:37 versus 1:113, respectively (P<0.001), for the IgM test and 1:99 versus 1:279, respectively (P<0.016), for IgG. Conclusions: The RDT was more sensitive than the traditional IFA for the early diagnosis of scrub typhus and is particularly suitable for use in rural areas.
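
    A small sketch of the diagnostic-accuracy arithmetic behind the figures above: sensitivity and specificity from true/false positive/negative counts. The counts below are back-calculated assumptions that roughly reproduce the reported IgM result (93.9% sensitivity on 33 acute cases is approximately 31/33) and the 100% specificity on 82 negative sera; they are not tabulated data from the paper.

      def sensitivity(tp: int, fn: int) -> float:
          return tp / (tp + fn)

      def specificity(tn: int, fp: int) -> float:
          return tn / (tn + fp)

      print(f"sensitivity = {sensitivity(tp=31, fn=2):.1%}")   # ~93.9%
      print(f"specificity = {specificity(tn=82, fp=0):.1%}")   # 100%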

  6. Sex and smoking sensitive model of radon induced lung cancer

    International Nuclear Information System (INIS)

    Zhukovsky, M.; Yarmoshenko, I.

    2006-01-01

    Radon and radon progeny inhalation exposure is recognized to cause lung cancer. The only strong evidence of radon exposure health effects comes from epidemiological studies among underground miners; no single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were developed exclusively by extrapolation of the miners data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained after systematic analysis of pooled data from single residential radon studies. Two such analyses have recently been published. The available new and previous data from epidemiological studies of workers and of the general population exposed to radon and to other sources of ionizing radiation allow gaps in knowledge of the association of lung cancer with indoor radon exposure to be filled. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on data on external radiation exposure in Hiroshima and Nagasaki, and on data on external exposure of Mayak nuclear facility workers. For the non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for the non-corrected data, obtained from an independent source, is in very good agreement with the L.S.S. study and the Mayak plutonium workers data. Analysis of the corrected data of the pooled studies showed little influence of sex on the E.R.R. value. The most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  7. Sex and smoking sensitive model of radon induced lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhukovsky, M.; Yarmoshenko, I. [Institute of Industrial Ecology of Ural Branch of Russian Academy of Sciences, Yekaterinburg (Russian Federation)

    2006-07-01

    Radon and radon progeny inhalation exposure is recognized to cause lung cancer. The only strong evidence of radon exposure health effects comes from epidemiological studies among underground miners; no single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were developed exclusively by extrapolation of the miners data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained after systematic analysis of pooled data from single residential radon studies. Two such analyses have recently been published. The available new and previous data from epidemiological studies of workers and of the general population exposed to radon and to other sources of ionizing radiation allow gaps in knowledge of the association of lung cancer with indoor radon exposure to be filled. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on data on external radiation exposure in Hiroshima and Nagasaki, and on data on external exposure of Mayak nuclear facility workers. For the non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for the non-corrected data, obtained from an independent source, is in very good agreement with the L.S.S. study and the Mayak plutonium workers data. Analysis of the corrected data of the pooled studies showed little influence of sex on the E.R.R. value. The most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  8. Pain sensitivity of children with Down syndrome and their siblings: quantitative sensory testing versus parental reports.

    Science.gov (United States)

    Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique

    2015-11-01

    The aim of this study was to compare thermal detection and pain thresholds in children with Down syndrome with those of their siblings. Sensory detection and pain thresholds were assessed in children with Down syndrome and their siblings using quantitative testing methods. Parental questionnaires addressing developmental age, pain coping, pain behaviour, and chronic pain were also utilized. Forty-two children with Down syndrome (mean age 12y 10mo) and 24 siblings (mean age 15y) participated in this observational study. The different sensory tests proved feasible in 13 to 29 (33-88%) of the children with Down syndrome. These children were less sensitive to cold and warmth than their siblings, but only when measured with a reaction time-dependent method, and not with a reaction time-independent method. Children with Down syndrome were more sensitive to heat pain, and only 6 (14%) of them were able to adequately self-report pain, compared with 22 (92%) of siblings. Children with Down syndrome will remain dependent on pain assessment by proxy, since self-reporting is not adequate. Parents believe that their children with Down syndrome are less sensitive to pain than their siblings, but this was not confirmed by quantitative sensory testing. © 2015 Mac Keith Press.

  9. Strain rate sensitivity of the tensile strength of two silicon carbides: experimental evidence and micromechanical modelling.

    Science.gov (United States)

    Zinszner, Jean-Luc; Erzar, Benjamin; Forquin, Pascal

    2017-01-28

    Ceramic materials are commonly used to design multi-layer armour systems thanks to their favourable physical and mechanical properties. However, during an impact event, fragmentation of the ceramic plate inevitably occurs due to its inherent brittleness under tensile loading. Consequently, an accurate model of the fragmentation process is necessary in order to achieve an optimum design for a desired armour configuration. In this work, shockless spalling tests have been performed on two silicon carbide grades at strain rates ranging from 10³ to 10⁴ s⁻¹ using a high-pulsed power generator. These spalling tests characterize the tensile strength strain rate sensitivity of each ceramic grade. The microstructural properties of the ceramics appear to play an important role on the strain rate sensitivity and on the dynamic tensile strength. Moreover, this experimental configuration allows for recovering damaged, but unbroken specimens, giving unique insight on the fragmentation process initiated in the ceramics. All the collected data have been compared with corresponding results of numerical simulations performed using the Denoual-Forquin-Hild anisotropic damage model. Good agreement is observed between numerical simulations and experimental data in terms of free surface velocity, size and location of the damaged zones along with crack density in these damaged zones. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'. © 2016 The Author(s).

  10. Strain rate sensitivity of the tensile strength of two silicon carbides: experimental evidence and micromechanical modelling

    Science.gov (United States)

    Erzar, Benjamin

    2017-01-01

    Ceramic materials are commonly used to design multi-layer armour systems thanks to their favourable physical and mechanical properties. However, during an impact event, fragmentation of the ceramic plate inevitably occurs due to its inherent brittleness under tensile loading. Consequently, an accurate model of the fragmentation process is necessary in order to achieve an optimum design for a desired armour configuration. In this work, shockless spalling tests have been performed on two silicon carbide grades at strain rates ranging from 10³ to 10⁴ s⁻¹ using a high-pulsed power generator. These spalling tests characterize the tensile strength strain rate sensitivity of each ceramic grade. The microstructural properties of the ceramics appear to play an important role on the strain rate sensitivity and on the dynamic tensile strength. Moreover, this experimental configuration allows for recovering damaged, but unbroken specimens, giving unique insight on the fragmentation process initiated in the ceramics. All the collected data have been compared with corresponding results of numerical simulations performed using the Denoual–Forquin–Hild anisotropic damage model. Good agreement is observed between numerical simulations and experimental data in terms of free surface velocity, size and location of the damaged zones along with crack density in these damaged zones. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956504

  11. Strain rate sensitivity of the tensile strength of two silicon carbides: experimental evidence and micromechanical modelling

    Science.gov (United States)

    Zinszner, Jean-Luc; Erzar, Benjamin; Forquin, Pascal

    2017-01-01

    Ceramic materials are commonly used to design multi-layer armour systems thanks to their favourable physical and mechanical properties. However, during an impact event, fragmentation of the ceramic plate inevitably occurs due to its inherent brittleness under tensile loading. Consequently, an accurate model of the fragmentation process is necessary in order to achieve an optimum design for a desired armour configuration. In this work, shockless spalling tests have been performed on two silicon carbide grades at strain rates ranging from 10³ to 10⁴ s⁻¹ using a high-pulsed power generator. These spalling tests characterize the tensile strength strain rate sensitivity of each ceramic grade. The microstructural properties of the ceramics appear to play an important role on the strain rate sensitivity and on the dynamic tensile strength. Moreover, this experimental configuration allows for recovering damaged, but unbroken specimens, giving unique insight on the fragmentation process initiated in the ceramics. All the collected data have been compared with corresponding results of numerical simulations performed using the Denoual-Forquin-Hild anisotropic damage model. Good agreement is observed between numerical simulations and experimental data in terms of free surface velocity, size and location of the damaged zones along with crack density in these damaged zones. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  12. Multivariate Models for Prediction of Skin Sensitization Hazard in Humans

    Science.gov (United States)

    One of ICCVAM’s highest priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary for a substance to elicit a skin sensitization reaction suggests that no single alternative me...

  13. Laboratory measurements and model sensitivity studies of dust deposition ice nucleation

    Directory of Open Access Journals (Sweden)

    G. Kulkarni

    2012-08-01

    Full Text Available We investigated the ice nucleating properties of mineral dust particles to understand the sensitivity of simulated cloud properties to two different representations of contact angle in the Classical Nucleation Theory (CNT). These contact angle representations are based on two sets of laboratory deposition ice nucleation measurements: Arizona Test Dust (ATD) particles of 100, 300 and 500 nm sizes were tested at three different temperatures (−25, −30 and −35 °C), and 400 nm ATD and kaolinite dust species were tested at two different temperatures (−30 and −35 °C). These measurements were used to derive the onset relative humidity with respect to ice (RHice) required to activate 1% of dust particles as ice nuclei, from which the onset single contact angles were then calculated based on CNT. For the probability density function (PDF) representation, parameters of the log-normal contact angle distribution were determined by fitting CNT-predicted activated fraction to the measurements at different RHice. Results show that onset single contact angles vary from ~18 to 24 degrees, while the PDF parameters are sensitive to the measurement conditions (i.e. temperature and dust size). Cloud modeling simulations were performed to understand the sensitivity of cloud properties (i.e. ice number concentration, ice water content, and cloud initiation times) to the representation of contact angle and PDF distribution parameters. The model simulations show that cloud properties are sensitive to onset single contact angles and PDF distribution parameters. The comparison of our experimental results with other studies shows that under similar measurement conditions the onset single contact angles are consistent within ±2.0 degrees, while our derived PDF parameters have larger discrepancies.
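
    The PDF representation averages the CNT-predicted activation over a log-normal distribution of contact angles. A minimal sketch of that averaging step is shown below; the geometric compatibility factor f(θ) is the standard CNT expression, but the single-angle activation rule is a toy placeholder (a sharp switch at a critical angle), not the full rate expression fitted in the study, and all parameter values are assumptions.

      import numpy as np

      def f_compat(theta):
          # CNT geometric compatibility factor for heterogeneous nucleation
          c = np.cos(theta)
          return (2.0 + c) * (1.0 - c) ** 2 / 4.0

      def activated_fraction(median_deg, sigma_log, p_single, n=2000):
          # PDF-weighted activated fraction for a log-normal contact-angle
          # distribution; p_single(theta) is the single-angle activation
          # probability at the given RHice, temperature and exposure time
          theta = np.linspace(1e-3, np.pi - 1e-3, n)
          mu = np.log(np.deg2rad(median_deg))
          pdf = (np.exp(-(np.log(theta) - mu) ** 2 / (2 * sigma_log ** 2))
                 / (theta * sigma_log * np.sqrt(2 * np.pi)))
          pdf /= np.trapz(pdf, theta)        # renormalise on the truncated support
          return np.trapz(pdf * p_single(theta), theta)

      # toy rule: every particle with contact angle below ~30 degrees activates
      toy_rule = lambda th: (th < np.deg2rad(30.0)).astype(float)
      print(activated_fraction(median_deg=20.0, sigma_log=0.3, p_single=toy_rule))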

  14. Sensitivity test of tumor cell to anticancer drug using diffusion chamber

    Energy Technology Data Exchange (ETDEWEB)

    Soejima, S [Hirosaki Univ., Aomori (Japan). School of Medicine

    1978-11-01

    The diffusion chamber method and xenogeneic transplantation of human cancer cells in rats were studied clinically to test the sensitivity of these cells to anticancer drugs. The growth of Hirosaki sarcoma in a diffusion chamber inserted into Wistar rats was influenced by the difference in tumor cell counts in the chamber. The growth rate in a chamber inserted into the subcutaneous tissue was more constant than in the abdominal cavity, but the degree of proliferation of tumor cells was greater in the abdominal cavity than in the subcutaneous tissue. Sarcoma and solid type sarcoma were affected by mitomycin C (MMC). The effect was greater in dd-mice than in Donryu rats. Solid type Yoshida sarcoma inserted into the subcutaneous tissue of Donryu rats was not affected by MMC. The degree of sensitivity to MMC of methylcholanthrene-induced tumor cells inserted into the subcutaneous tissue of Donryu rats differed according to the various conditions of the hosts. Clinically, the influences of anticancer drugs on human cancer cells inserted into the subcutaneous tissue of ⁶⁰Co-irradiated Donryu rats were observed. There were various grades of sensitivity of gastric cancer cells to anticancer drugs. MMC was effective in 53% of the cases, cyclophosphamide in 40%, 5-FU in 54%, cytosine arabinoside in 32%, and FT-207 in 57%. Twenty-seven percent were not affected by anticancer drugs. On histological examination, tubular adenocarcinoma cells had a high sensitivity to anticancer drugs, while poorly differentiated adenocarcinoma cells had a low sensitivity. Anticancer drugs selected according to the sensitivity of human cancer cells had a marked effect on advanced cancer cells. The diffusion chamber method was useful in determining the degree of bone marrow toxicity of anticancer drugs.

  15. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.; NASA/Fermilab Astrophysics Center, Batavia, IL)

    1987-01-01

    Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings, is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references

  16. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations

  17. Kinematic tests of exotic flat cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations.

  18. Validation of an iPad test of letter contrast sensitivity.

    Science.gov (United States)

    Kollbaum, Pete S; Jansen, Meredith E; Kollbaum, Elli J; Bullimore, Mark A

    2014-03-01

    An iPad-based letter contrast sensitivity test was developed (ridgevue.com) consisting of two letters on each page of an iBook. The contrast decreases from 80% (logCS = 0.1) to 0.5% (logCS = 2.3) by 0.1 log units per page. The test was compared to the Pelli-Robson Test and the Freiburg Acuity and Contrast Test. Twenty normally sighted subjects and 20 low-vision subjects were tested monocularly at 1 m using each test wearing their habitual correction. After a 5-minute break, subjects were retested with each test in reverse order. Two different letter charts were used for both the Pelli-Robson and iPad tests, and the order of testing was varied systematically. For the Freiburg test, the target was a variable contrast Landolt C presented at eight possible orientations and used a 30-trial Best PEST procedure. Repeatability and agreement were assessed by determining the 95% limits of agreement (LoA) ± 1.96 SD of the differences between administrations or tests. All three tests showed good repeatability in terms of the 95% LoA: iPad = ± 0.19, Pelli-Robson = ± 0.19, and Freiburg = ± 0.15. The iPad test showed good agreement with the Freiburg test with similar mean (± SD) logCS (iPad = 1.98 ± 0.11, Freiburg = 1.96 ± 0.06) and with narrow 95% LoA (± 0.24), but the Pelli-Robson test gave significantly lower values (1.65 ± 0.04). Low-vision subjects had slightly poorer repeatability (iPad = ± 0.24, Pelli-Robson = ± 0.23, Freiburg = ± 0.21). Agreement between the iPad and Freiburg tests was good (iPad = 1.45 ± 0.40, Freiburg = 1.54 ± 0.37), but the Pelli-Robson test gave significantly lower values (1.30 ± 0.30). The iPad test showed similar repeatability and may be a rapid and convenient alternative to some existing measures. The Pelli-Robson test gave lower values than the other tests.
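
    As a small worked illustration of the two quantities reported above, the sketch below converts letter contrast to logCS (logCS = log10(100 / contrast in %)) and computes Bland-Altman style 95% limits of agreement as ±1.96 SD of the test-retest differences; the sample values are made up, not the study data.

      import numpy as np

      def log_cs(contrast_percent):
          # letter contrast (%) -> log contrast sensitivity
          return np.log10(100.0 / contrast_percent)

      def limits_of_agreement(test, retest):
          # mean difference and half-width of the 95% limits of agreement
          d = np.asarray(test) - np.asarray(retest)
          return d.mean(), 1.96 * d.std(ddof=1)

      print(round(log_cs(80.0), 2), round(log_cs(0.5), 2))   # 0.1 and 2.3
      bias, half_width = limits_of_agreement([1.95, 2.00, 1.90], [1.90, 2.05, 1.95])
      print(round(bias, 3), round(half_width, 3))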

  19. Two-Dimensional Modeling of Heat and Moisture Dynamics in Swedish Roads: Model Set up and Parameter Sensitivity

    Science.gov (United States)

    Rasul, H.; Wu, M.; Olofsson, B.

    2017-12-01

    Modelling moisture and heat changes in road layers is essential for understanding road hydrology and for constructing and maintaining roads in a sustainable manner. In cold regions, freezing and thawing in the partially saturated road material make the modelling task more complicated than a simple model of flow through porous media without freeze/thaw considerations. This study presents a 2-D simulation of a highway section that accounts for freezing/thawing and vapor changes. Partial differential equations (PDEs) are used to formulate the model. Parameters are optimized from the modelling results based on measured data from a test station on the E18 highway near Stockholm. The impact of including phase change in the modelling is assessed by comparing the modeled soil moisture with TDR-measured data. The results show that the model can be used to predict water and ice content in different layers of the road and in different seasons. Parameter sensitivities are analyzed by implementing a calibration strategy. In addition, the phase change consideration is evaluated by comparing the PDE model with another model that does not consider freezing/thawing in roads. The PDE model shows high potential for understanding moisture dynamics in the road system.
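
    To make the phase-change consideration concrete, the sketch below advances a 1-D heat-conduction profile through a road layer with freezing handled by an apparent heat capacity over a small freezing interval. It is a minimal illustration of the mechanism only; the material values, boundary temperatures and water content are placeholders, not parameters of the E18 test-section model.

      import numpy as np

      nz, dz, dt, nsteps = 100, 0.02, 60.0, 24 * 60   # 2 m profile, 1-min steps, 1 day
      k = 1.5               # thermal conductivity [W/m/K], illustrative
      C_unfrozen = 2.0e6    # volumetric heat capacity [J/m^3/K], illustrative
      L_vol = 3.34e8 * 0.2  # latent heat x volumetric water content [J/m^3]
      dT_freeze = 0.5       # width of the freezing interval [K]

      T = np.full(nz, 2.0)  # initial temperature profile [deg C]
      for step in range(nsteps):
          T[0], T[-1] = -5.0, 4.0      # surface frost / deep boundary, illustrative
          # apparent heat capacity: latent heat released over the freezing interval
          freezing = (T > -dT_freeze) & (T < 0.0)
          C_app = C_unfrozen + (L_vol / dT_freeze) * freezing
          lap = np.zeros_like(T)
          lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz ** 2
          T = T + dt * k * lap / C_app
      print("first node above 0 deg C (frost depth index):", int(np.argmax(T > 0.0)))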

  20. Verification of intraspecimen method using constant stress tension test of sensitized alloy 600

    International Nuclear Information System (INIS)

    Lee, Seung Ki; Choi, Hoi Su; Hwang, Il Soon

    2005-01-01

    Stress corrosion cracking (SCC) in the Ni-base Alloy 600 used in nuclear power plant SG tubes and CRDM penetration nozzles has been reported after long-term operation in a harsh environment. The intraspecimen method was developed to predict SCC initiation time statistically [1]. By dividing a test area into a number of smaller regions (intraspecimens) with homogeneous physical and chemical conditions, SCC initiation in each intraspecimen can be counted as an independent outcome, providing a sufficient number of statistical data. Earlier work on the intraspecimen method had several problems in the test method and did not agree with Weibull statistics, which is the theoretical basis of the intraspecimen method. The test method is improved in this intraspecimen test. To find the root causes of the problems in the earlier work and to improve the accuracy of the intraspecimen method, two materials are introduced that differ in grain size but have the same chemical composition: Ni-base Alloy 600 heats J313 and J323 are used as the test materials. Specimens of sensitized Alloy 600 are tested under constant tensile stress in a well-defined chemical environment, so typical intergranular stress corrosion cracking (IGSCC) can readily be observed. The material with the finer grain (J323) showed the area dependence in agreement with the theoretical prediction, but the material with the coarser grain (J313) did not show any significant area dependence. While SCC initiates earlier at grain boundaries that are oriented close to normal to the stress axis, crack initiation time showed no correlation with grain boundary misorientation estimated by Electron Back-Scattered Diffraction (EBSD). From the SCC initiation tests with the two test materials, it is concluded that the number of grains in an intraspecimen, the degree of sensitization and a uniform stress distribution are important parameters for meeting Weibull statistics.
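
    Because each intraspecimen supplies an independent initiation outcome, the pooled initiation times can be fitted with a two-parameter Weibull distribution. A minimal median-rank regression sketch is shown below; the initiation times are made-up numbers used only to illustrate the fit, not data from this test.

      import numpy as np

      def weibull_fit_median_rank(times):
          # ln(-ln(1 - F)) = m*ln(t) - m*ln(eta): the slope gives the Weibull
          # shape m, the intercept gives the characteristic life eta
          t = np.sort(np.asarray(times, dtype=float))
          n = t.size
          F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
          m, c = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
          return m, np.exp(-c / m)

      shape, eta = weibull_fit_median_rank([120, 180, 210, 260, 300, 420, 510])
      print(f"Weibull shape m = {shape:.2f}, characteristic life eta = {eta:.0f} h")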

  1. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  2. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly, he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  3. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  4. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing and implementing abstractions will improve the applicability of model checking in practice.

  5. Parameter sensitivity study of a Field II multilayer transducer model on a convex transducer

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2009-01-01

    A multilayer transducer model for predicting a transducer impulse response has in earlier works been developed and combined with the Field II software. This development was tested on current, voltage, and intensity measurements on piezoceramic discs (Bæk et al. IUS 2008) and a convex 128-element ultrasound imaging transducer (Bæk et al. ICU 2009). The model benefits from its 1D simplicity and has been shown to give an amplitude error around 1.7-2 dB. However, any prediction of amplitude, phase, and attenuation of pulses relies on the accuracy of manufacturer-supplied material characteristics, which may … is a quantitative calibrated model for a complete ultrasound system. This includes a sensitivity study as presented here. Statement of Contribution/Methods: The study alters 35 different model parameters which describe a 128-element convex transducer from BK Medical ApS. The changes are within ±20% of the values...

  6. The sensitivity and the specifity of rapid antigen test in streptococcal upper respiratory tract infections.

    Science.gov (United States)

    Gurol, Yesim; Akan, Hulya; Izbirak, Guldal; Tekkanat, Zuhal Tazegun; Gunduz, Tehlile Silem; Hayran, Osman; Yilmaz, Gulden

    2010-06-01

    The aim was to determine the sensitivity and specificity of rapid antigen detection of group A beta-hemolytic streptococci from throat specimens compared with throat culture. A further goal of the study was to help guide clinical decisions in upper respiratory tract infections according to age group, using the sensitivity and positive predictive values of the rapid tests and throat cultures. Rapid antigen detection and throat culture results for group A beta-hemolytic streptococci from outpatients attending our university hospital between 1 November 2005 and 31 December 2008 were evaluated retrospectively. Throat samples were obtained by swab, transported in Stuart medium and tested with the Quickvue Strep A [Quidel, San Diego, USA] cassette test; for culture, specimens were inoculated on 5% sheep blood agar and identified from beta-hemolytic colonies according to bacitracin and trimethoprim-sulphamethoxazole susceptibility. In total, both rapid antigen detection and throat culture were evaluated for 453 patients. Rapid antigen detection sensitivity and specificity were found to be 64.6% and 96.79%, respectively. The positive predictive value was 80.95%, whereas the negative predictive value was 92.82%. The kappa index was 0.91. When the results were evaluated according to age group, the sensitivity and positive predictive value of rapid antigen detection were 70% and 90.3% in children and 59.4% and 70.4% in adults. When bacterial infection is a concern, the rapid streptococcal antigen test (RSAT) is a reliable method for beginning immediate treatment and preventing unnecessary antibiotic use. To achieve the maximum sensitivity of the RSAT, the specimen collection technique used and the education of health care workers are important. When making clinical decisions, it must be taken into consideration that the sensitivity and the positive predictive value of the RSAT is quite
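
    For reference, the reported figures are all derived from a 2x2 table of rapid-test results against the throat-culture reference standard. The sketch below computes them from such a table; the counts used here are invented round numbers, not the study's data.

      def diagnostic_metrics(tp, fp, fn, tn):
          # rapid antigen test vs. throat culture as the reference standard
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),   # positive predictive value
              "npv": tn / (tn + fn),   # negative predictive value
          }

      # illustrative counts only (the abstract reports percentages, not the table)
      print(diagnostic_metrics(tp=80, fp=20, fn=40, tn=360))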

  7. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
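
    A minimal sketch of the basis-set-expansion / meta-model / Sobol' indices chain is given below. It assumes the SALib package for the Sobol' estimators, uses gradient boosting as a stand-in for the projection pursuit regression mentioned above, and runs on synthetic data with hypothetical parameter names; it only illustrates the structure of the workflow, not the La Frasse application.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor   # stand-in surrogate
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # a few tens of long-running simulations: parameters X and displacement
      # time series Y (synthetic placeholders)
      rng = np.random.default_rng(0)
      X = rng.uniform(size=(40, 3))
      Y = np.cumsum(rng.normal(size=(40, 200)) + X[:, :1], axis=1)

      # 1. basis set expansion (PCA via SVD): keep the first few modes
      U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
      n_modes = 2
      scores = U[:, :n_modes] * s[:n_modes]        # one scalar score per mode per run

      # 2. meta-model: one cheap surrogate per retained mode
      surrogates = [GradientBoostingRegressor().fit(X, scores[:, k])
                    for k in range(n_modes)]

      # 3. Sobol' indices of each mode, evaluated on the surrogate
      problem = {"num_vars": 3,
                 "names": ["friction", "cohesion", "stiffness"],   # hypothetical
                 "bounds": [[0.0, 1.0]] * 3}
      X_sobol = saltelli.sample(problem, 1024)
      for k, sm in enumerate(surrogates):
          Si = sobol.analyze(problem, sm.predict(X_sobol))
          print(f"mode {k}: first-order indices {np.round(Si['S1'], 2)}")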

  8. Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2

    International Nuclear Information System (INIS)

    Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.

    1985-01-01

    This paper summarizes a demonstration uncertainty/sensitivity analysis performed on the reactor accident consequence model CRAC2. The study was performed with uncertainty/sensitivity analysis techniques compiled as part of the MELCOR program. The principal objectives of the study were: 1) to demonstrate the use of the uncertainty/sensitivity analysis techniques on a health and economic consequence model, 2) to test the computer models which implement the techniques, 3) to identify possible difficulties in performing such an analysis, and 4) to explore alternative means of analyzing, displaying, and describing the results. Demonstration of the applicability of the techniques was the motivation for performing this study; thus, the results should not be taken as a definitive uncertainty analysis of health and economic consequences. Nevertheless, significant insights on health and economic consequence analysis can be drawn from the results of this type of study. Latin hypercube sampling (LHS), a modified Monte Carlo technique, was used in this study. LHS generates a multivariate input structure in which all the variables of interest are varied simultaneously and desired correlations between variables are preserved. LHS has been shown to produce estimates of output distribution functions that are comparable with results of larger random samples
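
    As an illustration of the sampling step described above, the sketch below draws a Latin hypercube design in which each input is stratified and all inputs are varied simultaneously (SciPy's qmc module is assumed); the variable ranges are invented placeholders, not actual CRAC2 inputs.

      from scipy.stats import qmc

      sampler = qmc.LatinHypercube(d=3, seed=42)      # three uncertain inputs
      unit_design = sampler.random(n=100)             # 100 runs, values in [0, 1)

      # scale each column to a physical range (illustrative bounds only)
      lower = [0.1, 1.0e-4, 0.5]
      upper = [3.0, 1.0e-2, 1.0]
      design = qmc.scale(unit_design, lower, upper)   # one consequence run per row
      print(design.shape)                             # (100, 3)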

  9. Pile Model Tests Using Strain Gauge Technology

    Science.gov (United States)

    Krasiński, Adam; Kusio, Tomasz

    2015-09-01

    Ordinary pile bearing capacity tests are usually carried out to determine the relationship between load and displacement of the pile head. The measurement system required in such tests consists of a force transducer and three or four displacement gauges. The whole system is installed at the pile head above the ground level. This approach, however, does not give complete information about the pile-soil interaction. We can only determine the total bearing capacity of the pile, without knowing its distribution into shaft and base resistances. Much more information can be obtained by testing an instrumented pile equipped with a system for measuring the distribution of axial force along its core. In the case of pile model tests the use of such measurement is difficult due to the small scale of the model. To find a suitable solution for axial force measurement that could be applied to small-scale model piles, we had to take into account the following requirements: - a linear and stable relationship between measured and physical values, - a force measurement accuracy of about 0.1 kN, - a range of measured forces up to 30 kN, - resistance of the measuring gauges to the aggressive action of concrete mortar and to moisture, - insensitivity to pile bending, - economy. These requirements can be fulfilled by strain gauge sensors if an appropriate methodology is used for test preparation (Hoffmann [1]). In this paper, we focus on some aspects of the application of strain gauge sensors for model pile tests. The efficiency of the method is demonstrated on examples of static load tests carried out on SDP model piles acting as single piles and in a group.
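
    The purpose of measuring axial force at several levels is to separate shaft and base resistance. The sketch below converts gauge strains to axial forces (F = E·A·ε) and takes differences between levels to obtain the load shed to the soil; all numbers are illustrative, not measurements from the SDP model-pile tests.

      E = 30e9       # Young's modulus of the pile core [Pa], illustrative
      A = 0.005      # cross-sectional area [m^2], illustrative
      strains = [200e-6, 150e-6, 90e-6, 30e-6]         # axial strains at gauge levels

      forces = [E * A * eps for eps in strains]        # axial force at each level [N]
      shaft = [f1 - f2 for f1, f2 in zip(forces, forces[1:])]   # shed to the soil
      base = forces[-1]                                # force reaching the pile base

      print([round(f / 1e3, 1) for f in forces])       # [30.0, 22.5, 13.5, 4.5] kN
      print([round(q / 1e3, 1) for q in shaft], round(base / 1e3, 1))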

  10. Non-animal assessment of skin sensitization hazard: Is an integrated testing strategy needed, and if so what should be integrated?

    Science.gov (United States)

    Roberts, David W; Patlewicz, Grace

    2018-01-01

    There is an expectation that, to meet regulatory requirements and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted for regulatory use: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment, with the "two out of three" approach attracting much attention. Here a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and the h-CLAT as follows: (1) perform the DPRA - if positive, classify as sensitizing; (2) if negative, perform the h-CLAT - a positive outcome denotes a sensitizer, a negative a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
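
    The two decision rules compared above can be written down directly; the sketch below is a minimal rendering of the sequential DPRA then h-CLAT strategy and the "two out of three" majority call, with booleans standing in for the individual assay outcomes (an illustration, not the authors' implementation).

      def sequential_dpra_hclat(dpra_positive, hclat_positive):
          # run the DPRA first; if positive, classify as a sensitizer,
          # otherwise let the h-CLAT decide
          return True if dpra_positive else hclat_positive

      def two_out_of_three(dpra_positive, keratinosens_positive, hclat_positive):
          # majority call across the three validated assays
          return (dpra_positive + keratinosens_positive + hclat_positive) >= 2

      # example: DPRA negative, KeratinoSens positive, h-CLAT positive
      print(sequential_dpra_hclat(False, True))        # True (h-CLAT decides)
      print(two_out_of_three(False, True, True))       # True (majority positive)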

  11. Accuracy tests of the tessellated SLBM model

    International Nuclear Information System (INIS)

    Ramirez, A L; Myers, S C

    2007-01-01

    We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done layer by layer, for both layer depths and layer velocities. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in the file "unified( ) iasp.grid". This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers and the upper mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust and upper mantle) is shown in a separate figure. The upper mantle P velocity and gradient distributions are shown in Figures 7 and 8. The left and center images in the top row of each figure are the renderings of depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings. The left and center images in the bottom row of the figures are

  12. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
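
    To make the distinction concrete, the sketch below shows the two kinds of tests discussed: a conventional unit test with an exact expectation, and a model validation test that checks a simulated quantity against an experimentally plausible range. The model function and the numeric bounds are hypothetical placeholders, not OpenWorm code.

      import pytest

      def membrane_time_constant(r_m, c_m):
          # unit under test: passive membrane time constant tau = R_m * C_m
          return r_m * c_m

      def run_neuron_model():
          # stub standing in for a full simulation, so the sketch is runnable
          return -62.0   # simulated resting potential [mV]

      def test_membrane_time_constant():
          # unit test: exact, deterministic expectation
          assert membrane_time_constant(r_m=1e8, c_m=1e-10) == pytest.approx(0.01)

      def test_resting_potential_in_physiological_range():
          # model validation test: the simulated value must fall inside an
          # experimentally reported range rather than match one exact number
          assert -80.0 <= run_neuron_model() <= -40.0   # mV, illustrative bounds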

  13. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01