WorldWideScience

Sample records for model sensitivity tests

  1. Smart licensing and environmental flows: Modeling framework and sensitivity testing

    Science.gov (United States)

    Wilby, R. L.; Fenn, C. R.; Wood, P. J.; Timlett, R.; Lequesne, T.

    2011-12-01

Adapting to climate change is just one among many challenges facing river managers. The response will involve balancing the long-term water demands of society with the changing needs of the environment in sustainable and cost-effective ways. This paper describes a modeling framework for evaluating the sensitivity of low river flows to different configurations of abstraction licensing under both historical climate variability and expected climate change. A rainfall-runoff model is used to quantify trade-offs among environmental flow (e-flow) requirements, potential surface and groundwater abstraction volumes, and the frequency of harmful low-flow conditions. Using the River Itchen in southern England as a case study, it is shown that the abstraction volume is more sensitive to uncertainty in the regional climate change projection than to the e-flow target. It is also found that "smarter" licensing arrangements (involving a mix of hands-off flows and "rising block" abstraction rules) could achieve e-flow targets more frequently than conventional seasonal abstraction limits, with only modest reductions in average annual yield, even under a hotter, drier climate change scenario.

  2. NEW ANTIMICROBIAL SENSITIVITY TESTS OF BIOFILM OF STREPTOCOCCUS MUTANS IN ARTIFICIAL MOUTH MODEL

    Institute of Scientific and Technical Information of China (English)

    李鸣宇; 汪俊; 刘正; 朱彩莲

    2004-01-01

Objective To develop a new in vitro antimicrobial sensitivity test model for oral products. Methods A biofilm artificial mouth model for antimicrobial sensitivity tests was established by modifying the LKI chromatography chamber. Sensitivity tests were studied using sodium fluoride and tea polyphenols as antimicrobial agents and Streptococcus mutans as the target organism. Results The biofilm assay yielded an MIC of 1.28 mg/ml for fluoride against S. mutans, 32 times the MIC obtained by the broth macro-dilution method. The increased resistance of the bacterial biofilm to antimicrobial agents relative to planktonic cells was also demonstrated. Conclusion The biofilm artificial mouth model may be useful for testing oral products.

  3. Test and Sensitivity Analysis of Hydrological Modeling in the Coupled WRF-Urban Modeling System

    Science.gov (United States)

    Wang, Z.; yang, J.

    2013-12-01

Rapid urbanization has emerged as the source of many adverse effects that challenge the environmental sustainability of cities under changing climatic patterns. One essential key to addressing these challenges is to physically resolve the dynamics of urban-land-atmospheric interactions. To investigate the impact of urbanization on regional climate, a physically based single-layer urban canopy model (SLUCM) has been developed and implemented in the Weather Research and Forecasting (WRF) platform. However, due to the lack of a realistic representation of urban hydrological processes, simulation of urban climatology by the current coupled WRF-SLUCM is inevitably inadequate. Aiming at improving the accuracy of simulations, we recently implemented urban hydrological processes in the model, including (1) anthropogenic latent heat, (2) urban irrigation, (3) evaporation over impervious surfaces, and (4) the urban oasis effect. In addition, we couple a green roof system into the model to assess its capacity to alleviate the urban heat island effect at the regional scale. Driven by different meteorological forcings, offline tests show that the enhanced model is more accurate in predicting turbulent fluxes arising from built terrain. Though the coupled WRF-SLUCM has been extensively tested against various field measurement datasets, an accurate input parameter space needs to be specified for good model performance. As realistic measurements of all input parameters to the modeling framework are rarely possible, understanding the model sensitivity to individual parameters is essential to determine the relative importance of parameter uncertainty to model performance. Thus we further use an advanced Monte Carlo approach to quantify the relative sensitivity of input parameters of the hydrological model. In particular, the performance of two widely used soil hydraulic models, namely the van Genuchten model (based on generic soil physics) and an empirical model (viz. the CHC model currently adopted in WRF…
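A minimal variant of Monte Carlo parameter sensitivity can be sketched as follows: sample each input uniformly within plausible bounds and rank parameters by the magnitude of their correlation with the model output. The retention-curve stand-in, parameter names, and bounds below are illustrative assumptions, not the authors' actual model or method:

```python
import random
import math

def model(theta_s, alpha, n):
    # Hypothetical stand-in: effective water content at a fixed suction
    # head h, using a van Genuchten-type form for illustration only.
    h = 100.0  # cm suction, fixed for this sketch
    m = 1.0 - 1.0 / n
    return theta_s * (1.0 + (alpha * h) ** n) ** (-m)

def monte_carlo_sensitivity(n_samples=2000, seed=42):
    """Rank parameters by |Pearson correlation| between input and output."""
    rng = random.Random(seed)
    names = ["theta_s", "alpha", "n"]
    bounds = {"theta_s": (0.3, 0.5), "alpha": (0.005, 0.05), "n": (1.2, 2.5)}
    samples = {k: [] for k in names}
    outputs = []
    for _ in range(n_samples):
        p = {k: rng.uniform(*bounds[k]) for k in names}
        for k in names:
            samples[k].append(p[k])
        outputs.append(model(**p))
    def corr(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        vy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (vx * vy)
    return {k: abs(corr(samples[k], outputs)) for k in names}
```

Correlation-based ranking only captures approximately linear effects; variance-based measures are the usual next step when interactions matter.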

  4. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    Science.gov (United States)

The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  5. Maternal sensitivity and language in early childhood: a test of the transactional model.

    Science.gov (United States)

    Leigh, Patricia; Nievar, M Angela; Nathans, Laura

    2011-08-01

This study examined the relation between mothers' sensitive responsiveness to their children and the children's expressive language skills during early childhood. Reciprocal effects were tested with dyads of mothers and their children participating in the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care and Youth Development. Sensitive maternal interactions positively affected children's later expressive language in the second and third years of life. Although maternal sensitivity predicted later language skills in children, children's language did not affect later maternal sensitivity, as indicated in a structural equation model. These results do not support Sameroff and Chandler's (1975) transactional model of child development. A consistent pattern of sensitivity throughout infancy and early childhood indicates the importance of fostering maternal sensitivity in infancy for prevention or remediation of expressive language problems in young children.

  6. Application of Unidimensional Item Response Models to Tests with Items Sensitive to Secondary Dimensions

    Science.gov (United States)

    Zhang, Bo

    2008-01-01

    In this research, the author addresses whether the application of unidimensional item response models provides valid interpretation of test results when administering items sensitive to multiple latent dimensions. Overall, the present study found that unidimensional models are quite robust to the violation of the unidimensionality assumption due…

  7. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is…

  8. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2011-06-01

Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1, and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. We expect this latter number may be an underestimate due to the low high-latitude inundated area captured by satellites and unrealistically low high-latitude productivity and soil carbon predicted by CLM4. Sensitivity analysis showed a large range (150–346 Tg CH4 yr−1) in…

  9. Results From a Pressure Sensitive Paint Test Conducted at the National Transonic Facility on Test 197: The Common Research Model

    Science.gov (United States)

    Watkins, A. Neal; Lipford, William E.; Leighty, Bradley D.; Goodman, Kyle Z.; Goad, William K.; Goad, Linda R.

    2011-01-01

This report presents results of a test of the pressure-sensitive paint (PSP) technique on the Common Research Model (CRM). The test was conducted at the National Transonic Facility (NTF) at NASA Langley Research Center. PSP data were collected on several surfaces with the tunnel operating in both cryogenic mode and standard air mode. The report also outlines lessons learned from the test, as well as possible approaches to challenges faced during the test that can be applied to later entries.

  10. Sensitivity testing practice on pre-processing parameters in hard and soft coupled modeling

    Directory of Open Access Journals (Sweden)

    Z. Ignaszak

    2010-01-01

Full Text Available This paper addresses the practical applicability of coupled modeling using both hard and soft model types, and the need for a database adapted to those models. Database test results are presented for a cylindrical casting, 30 mm in diameter, made of AlSi7Mg alloy. The simulation tests used the Calcosoft system with the CAFE (Cellular Automaton Finite Element) module. This module, which belongs to the class of "multiphysics" models, enables prediction of the structure of a complete casting, distinguishing the columnar and equiaxed crystal zones of the α-phase. Sensitivity tests of the coupled model to changes in particular parameter values were performed, and on this basis the influence of these parameters on the position of the CET (columnar-to-equiaxed transition) zone was determined. An example of virtual structure validation against a real structure, based on CET zone location and grain size, is shown.

  11. Local Sensitivity and Diagnostic Tests

    NARCIS (Netherlands)

    Magnus, J.R.; Vasnev, A.L.

    2004-01-01

In this paper we confront sensitivity analysis with diagnostic testing. Every model is misspecified, but a model is useful if the parameters of interest (the focus) are not sensitive to small perturbations in the underlying assumptions. The study of the effect of these violations on the focus is called…

  12. Dual luminophor pressure-sensitive paint: III. Application to automotive model testing

    Science.gov (United States)

    Gouterman, Martin; Callis, James; Dalton, Larry; Khalil, Gamal; Mébarki, Youssef; Cooper, Kevin R.; Grenier, Michel

    2004-10-01

Porphyrins play key roles in natural energy conversion systems, including photosynthesis and oxygen transport. Because of their chemical stability, unique optical properties, and synthetic versatility, porphyrins are well suited as chemical sensors. One successful application is the use of platinum porphyrin (PtP) in pressure-sensitive paint (PSP). Oxygen in the film quenches luminescence, and oxygen pressure was initially monitored by measuring the ratio I(wind-off)/I(wind-on). But this ratio is compromised if there is model motion or if the paint layer is inhomogeneous; furthermore, it requires careful monitoring and placement of light sources, and the method is seriously affected by temperature. The errors caused by model motion and temperature sensitivity are eliminated or greatly reduced using a dual-luminophor paint. This paper illustrates a successful application of a dual-luminophor PSP in automotive model testing. The PSP is made from an oxygen-sensitive luminophor, Pt tetra(pentafluorophenyl)porpholactone, which provides the pressure-sensitive signal (Isen), and Mg tetra(pentafluorophenyl)porphine, which serves as the temperature-sensitive, pressure-independent reference (TSP). The PSP/TSP ratio in the FIB polymer produced ideal PSP measurements with a very low temperature dependence of −0.1% °C−1.
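The ratio measurement reduces to inverting a Stern-Volmer-type calibration of luminescence intensity against pressure. A minimal sketch of that round trip, where the coefficients A, B, and the reference pressure are illustrative assumptions rather than the paper's calibration values:

```python
def ratio_at(p, ratio_ref, A=0.18, B=0.82, p_ref=101.3):
    """Synthesize the PSP/TSP intensity ratio at pressure p (kPa) from a
    Stern-Volmer-type law: ratio_ref / ratio = A + B * (p / p_ref)."""
    return ratio_ref / (A + B * p / p_ref)

def stern_volmer_pressure(ratio_ref, ratio, A=0.18, B=0.82, p_ref=101.3):
    """Invert the calibration to recover pressure (kPa) from the measured
    PSP/TSP ratio and the ratio at the reference pressure p_ref."""
    return p_ref * (ratio_ref / ratio - A) / B
```

In practice A and B are fitted per paint batch (and weakly per temperature); the dual-luminophor ratio removes most of the motion and illumination dependence before this inversion is applied.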

  13. Lagrangian model of zooplankton dispersion: numerical schemes comparisons and parameter sensitivity tests

    Institute of Scientific and Technical Information of China (English)

    QIU Zhongfeng; Andrea M. DOGLIOLI; HE Yijun; Francois CARLOTTI

    2011-01-01

This paper presents two comparisons, or tests, for a Lagrangian model of zooplankton dispersion: numerical schemes and time steps. First, we compared three numerical schemes using idealized circulations. Results show that the precision of the advanced Adams-Bashforth-Moulton (ABM) method and that of the Runge-Kutta (RK) method were of the same order, and both were much higher than that of the Euler method. Furthermore, the advanced ABM method is more efficient than the RK method in computational memory requirements and time consumption. We therefore chose the advanced ABM method as the Lagrangian particle-tracking algorithm. Second, we performed a sensitivity test on time steps, using outputs of the hydrodynamic model Symphonie. Results show that the choice of time step depends on the fluid response time, which is related to the spatial resolution of the velocity fields. The method introduced by Oliveira et al. in 2002 is suitable for choosing time steps of Lagrangian particle-tracking models, at least when only advection is considered.
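The scheme comparison can be illustrated on an idealized flow with a known exact trajectory. The sketch below (an assumption-laden stand-in, not the paper's setup) compares Euler and classical fourth-order Runge-Kutta on solid-body rotation, where a particle should return exactly to its starting point after one revolution; the multistep ABM scheme is omitted for brevity:

```python
import math

def velocity(x, y, omega=1.0):
    """Idealized solid-body rotation; exact trajectories are circles."""
    return -omega * y, omega * x

def advect(x, y, dt, steps, scheme):
    for _ in range(steps):
        if scheme == "euler":
            u, v = velocity(x, y)
            x, y = x + dt * u, y + dt * v
        elif scheme == "rk4":
            k1 = velocity(x, y)
            k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
            k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
            k4 = velocity(x + dt * k3[0], y + dt * k3[1])
            x += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            y += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        else:
            raise ValueError(scheme)
    return x, y

def position_error(scheme, steps=628):
    """Distance from the exact end point after one full revolution."""
    dt = 2.0 * math.pi / steps
    x, y = advect(1.0, 0.0, dt, steps, scheme)
    return math.hypot(x - 1.0, y)
```

With this flow the Euler particle spirals outward (its radius grows every step), while the RK4 error is several orders of magnitude smaller at the same time step, mirroring the order-of-precision gap the paper reports.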

  14. Testing the Nanoparticle-Allostatic Cross Adaptation-Sensitization Model for Homeopathic Remedy Effects

    OpenAIRE

    Bell, Iris R.; Koithan, Mary; Brooks, Audrey J.

    2013-01-01

    Key concepts of the Nanoparticle-Allostatic Cross-Adaptation-Sensitization (NPCAS) Model for the action of homeopathic remedies in living systems include source nanoparticles as low level environmental stressors, heterotypic hormesis, cross-adaptation, allostasis (stress response network), time-dependent sensitization with endogenous amplification and bidirectional change, and self-organizing complex adaptive systems.

  15. Plot-scale testing and sensitivity analysis of Be7 based soil erosion conversion models

    Science.gov (United States)

    Taylor, Alex; Abdelli, Wahid; Barri, Bashar Al; Iurian, Andra; Gaspar, Leticia; Mabit, Lionel; Millward, Geoff; Ryken, Nick; Blake, Will

    2016-04-01

… an estimated amount of sediment delivered from the plot for comparison with the true mass captured. Sensitivity analysis was undertaken to evaluate the influence on model output of (1) variability in the Be-7 depth distribution, (2) selection of particle size correction factors, and (3) potential loss of Be-7 in overland flow after SOF initiation. Order-of-magnitude differences in sediment export estimates across the tested scenarios underpin the critical need to adequately address sources of uncertainty in experimental design and sampling programmes. Recommendations are made to improve methodological accuracy and confidence in model outputs.

  16. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error-based weighting and one objective function

    Science.gov (United States)

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall-runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error-based weighting of observation and prior information data, local sensitivity analysis, and single-objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
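Composite scaled sensitivities can be approximated by finite differences around the parameter values. The sketch below uses a hypothetical two-parameter recession model rather than TOPKAPI; the weights stand in for the error-based observation weighting described above:

```python
import math

def simulate(params, times):
    """Hypothetical recession-type model: q(t) = a * exp(-k * t)."""
    a, k = params
    return [a * math.exp(-k * t) for t in times]

def composite_scaled_sensitivity(params, times, weights=None, rel_step=1e-6):
    """CSS_j = sqrt( (1/ND) * sum_i ( (dy_i/db_j) * b_j * sqrt(w_i) )^2 ),
    with derivatives approximated by forward finite differences."""
    nd = len(times)
    weights = weights or [1.0] * nd
    base = simulate(params, times)
    css = []
    for j, b in enumerate(params):
        db = rel_step * abs(b)
        bumped = list(params)
        bumped[j] = b + db
        pert = simulate(bumped, times)
        ssq = sum(((pert[i] - base[i]) / db * b * math.sqrt(weights[i])) ** 2
                  for i in range(nd))
        css.append(math.sqrt(ssq / nd))
    return css
```

A parameter with a small CSS relative to the others is one the available data cannot constrain well, which is exactly the screening role CSS plays before regression.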

  17. Parametric sensitivity analysis of a test cell thermal model using spectral analysis

    CERN Document Server

    Mara, Thierry Alex; Garde, François

    2012-01-01

The paper deals with empirical validation of a building thermal model. We put the emphasis on sensitivity analysis and on the search for input/residual correlations to improve our model. In this article, we apply a sensitivity analysis technique in the frequency domain to identify the most important parameters of the model. We then compare measured and predicted indoor dry-air temperature. When the model is not accurate enough, recourse to time-frequency analysis is of great help in identifying the inputs responsible for the major part of the error. Our approach requires two samples of experimental data: the first is used to calibrate the model, the second to validate the optimized model.

  18. Increasing the Depth of Current Understanding: Sensitivity Testing of Deep-Sea Larval Dispersal Models for Ecologists.

    Science.gov (United States)

    Ross, Rebecca E; Nimmo-Smith, W Alex M; Howell, Kerry L

    2016-01-01

    Larval dispersal is an important ecological process of great interest to conservation and the establishment of marine protected areas. Increasing numbers of studies are turning to biophysical models to simulate dispersal patterns, including in the deep-sea, but for many ecologists unassisted by a physical oceanographer, a model can present as a black box. Sensitivity testing offers a means to test the models' abilities and limitations and is a starting point for all modelling efforts. The aim of this study is to illustrate a sensitivity testing process for the unassisted ecologist, through a deep-sea case study example, and demonstrate how sensitivity testing can be used to determine optimal model settings, assess model adequacy, and inform ecological interpretation of model outputs. Five input parameters are tested (timestep of particle simulator (TS), horizontal (HS) and vertical separation (VS) of release points, release frequency (RF), and temporal range (TR) of simulations) using a commonly employed pairing of models. The procedures used are relevant to all marine larval dispersal models. It is shown how the results of these tests can inform the future set up and interpretation of ecological studies in this area. For example, an optimal arrangement of release locations spanning a release area could be deduced; the increased depth range spanned in deep-sea studies may necessitate the stratification of dispersal simulations with different numbers of release locations at different depths; no fewer than 52 releases per year should be used unless biologically informed; three years of simulations chosen based on climatic extremes may provide results with 90% similarity to five years of simulation; and this model setup is not appropriate for simulating rare dispersal events. A step-by-step process, summarising advice on the sensitivity testing procedure, is provided to inform all future unassisted ecologists looking to run a larval dispersal simulation.

  19. Increasing the Depth of Current Understanding: Sensitivity Testing of Deep-Sea Larval Dispersal Models for Ecologists

    Science.gov (United States)

    Nimmo-Smith, W. Alex M.; Howell, Kerry L.

    2016-01-01

Larval dispersal is an important ecological process of great interest to conservation and the establishment of marine protected areas. Increasing numbers of studies are turning to biophysical models to simulate dispersal patterns, including in the deep-sea, but for many ecologists unassisted by a physical oceanographer, a model can present as a black box. Sensitivity testing offers a means to test the models’ abilities and limitations and is a starting point for all modelling efforts. The aim of this study is to illustrate a sensitivity testing process for the unassisted ecologist, through a deep-sea case study example, and demonstrate how sensitivity testing can be used to determine optimal model settings, assess model adequacy, and inform ecological interpretation of model outputs. Five input parameters are tested (timestep of particle simulator (TS), horizontal (HS) and vertical separation (VS) of release points, release frequency (RF), and temporal range (TR) of simulations) using a commonly employed pairing of models. The procedures used are relevant to all marine larval dispersal models. It is shown how the results of these tests can inform the future set up and interpretation of ecological studies in this area. For example, an optimal arrangement of release locations spanning a release area could be deduced; the increased depth range spanned in deep-sea studies may necessitate the stratification of dispersal simulations with different numbers of release locations at different depths; no fewer than 52 releases per year should be used unless biologically informed; three years of simulations chosen based on climatic extremes may provide results with 90% similarity to five years of simulation; and this model setup is not appropriate for simulating rare dispersal events. A step-by-step process, summarising advice on the sensitivity testing procedure, is provided to inform all future unassisted ecologists looking to run a larval dispersal simulation.

  20. Testing the role of reward and punishment sensitivity in avoidance behavior: a computational modeling approach.

    Science.gov (United States)

    Sheynin, Jony; Moustafa, Ahmed A; Beck, Kevin D; Servatius, Richard J; Myers, Catherine E

    2015-04-15

    Exaggerated avoidance behavior is a predominant symptom in all anxiety disorders and its degree often parallels the development and persistence of these conditions. Both human and non-human animal studies suggest that individual differences as well as various contextual cues may impact avoidance behavior. Specifically, we have recently shown that female sex and inhibited temperament, two anxiety vulnerability factors, are associated with greater duration and rate of the avoidance behavior, as demonstrated on a computer-based task closely related to common rodent avoidance paradigms. We have also demonstrated that avoidance is attenuated by the administration of explicit visual signals during "non-threat" periods (i.e., safety signals). Here, we use a reinforcement-learning network model to investigate the underlying mechanisms of these empirical findings, with a special focus on distinct reward and punishment sensitivities. Model simulations suggest that sex and inhibited temperament are associated with specific aspects of these sensitivities. Specifically, differences in relative sensitivity to reward and punishment might underlie the longer avoidance duration demonstrated by females, whereas higher sensitivity to punishment might underlie the higher avoidance rate demonstrated by inhibited individuals. Simulations also suggest that safety signals attenuate avoidance behavior by strengthening the competing approach response. Lastly, several predictions generated by the model suggest that extinction-based cognitive-behavioral therapies might benefit from the use of safety signals, especially if given to individuals with high reward sensitivity and during longer safe periods. Overall, this study is the first to suggest cognitive mechanisms underlying the greater avoidance behavior observed in healthy individuals with different anxiety vulnerabilities.
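The core idea of sensitivity-weighted reinforcement learning can be sketched with a far simpler agent than the paper's network model: separate gains scale rewarding and punishing outcomes before a delta-rule value update, so higher punishment sensitivity shifts softmax choice toward the avoidance response. The task structure, payoffs, and parameters below are illustrative assumptions, not the authors' simulation:

```python
import math
import random

def run_agent(reward_sens, punish_sens, trials=2000, seed=0):
    """Two-action task: 'approach' pays +1 with p=0.7 and -1 with p=0.3;
    'avoid' always pays 0. Returns the fraction of trials avoided."""
    rng = random.Random(seed)
    q = {"approach": 0.0, "avoid": 0.0}
    alpha, beta = 0.1, 3.0  # learning rate, softmax inverse temperature
    avoided = 0
    for _ in range(trials):
        # Softmax action selection over the two learned values.
        w = math.exp(beta * q["approach"])
        p_approach = w / (w + math.exp(beta * q["avoid"]))
        action = "approach" if rng.random() < p_approach else "avoid"
        if action == "avoid":
            avoided += 1
            outcome = 0.0
        else:
            outcome = 1.0 if rng.random() < 0.7 else -1.0
        # Weight the outcome by the matching sensitivity, then apply a
        # standard delta-rule update to the chosen action's value.
        scaled = outcome * (reward_sens if outcome > 0 else punish_sens)
        q[action] += alpha * (scaled - q[action])
    return avoided / trials
```

With equal sensitivities the expected scaled payoff of approaching is positive and avoidance stays low; raising the punishment gain flips that expectation and the agent settles into avoidance, the qualitative pattern attributed to inhibited individuals.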

  1. Test of the notch technique for determining the radial sensitivity of the optical model potential

    CERN Document Server

    Yang, Lei; Jia, Hui-ming; Xu, Xin-Xing; Ma, Nan-Ru; Sun, Li-Jie; Yang, Feng; Zhang, Huan-Qiao; Li, Zu-Hua; Wang, Dong-Xi

    2015-01-01

Detailed investigations of the notch technique are performed on ideal data generated from the optical model potential parameters extracted for the 16O+208Pb system at a laboratory energy of 129.5 MeV, to study the sensitivity of this technique to the model parameters as well as to the experimental data. It is found that, for the perturbation parameters, a sufficiently large reduction fraction and an appropriately small perturbation width are necessary to determine the radial sensitivity accurately, while for the potential parameters almost no dependence is observed. For the experimental measurements, the number of data points has little influence for a heavy target system, and information on the inner part of the nuclear potential can be derived when the measurements extend to lower cross sections.

  2. Willingness to pay and size of health benefit: an integrated model to test for 'sensitivity to scale'.

    Science.gov (United States)

    Yeung, Raymond Y T; Smith, Richard D; McGhee, Sarah M

    2003-09-01

A key theoretical prediction concerning willingness to pay (WTP) is that it is positively correlated with benefit size; this is assessed by testing 'sensitivity to scale (scope)'. 'External' (between-sample) sensitivity tests are usually regarded as less powerful than 'internal' (within-subject) tests; however, the latter may suffer from 'anchoring' effects. This paper studies the statistical power of these tests by questioning the distributional assumption of the empirical data. We present an integrated model to capture both internal and external variations, while controlling for sample heterogeneity, applied to data from a survey estimating the value of reducing symptom-days. Results indicate that once the data are properly transformed, WTP becomes 'scale sensitive' and consistent with diminishing marginal utility theory.

  3. An analysis of sensitivity tests

    Energy Technology Data Exchange (ETDEWEB)

    Neyer, B.T.

    1992-03-06

    A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, {mu}, and the standard deviation, {sigma}) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
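The likelihood-ratio construction can be sketched for a probit response model, where the probability of a response at stimulus level x is Φ((x−μ)/σ): a grid search finds the maximum-likelihood (μ, σ) and collects every grid point whose log-likelihood lies within half the χ² critical value of the maximum. The go/no-go data and grid bounds below are hypothetical, and the brute-force grid stands in for the paper's more careful computation:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(mu, sigma, data):
    """Bernoulli log-likelihood with P(response at level x) = Phi((x-mu)/sigma)."""
    ll = 0.0
    for x, y in data:
        p = min(max(norm_cdf((x - mu) / sigma), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

def lr_confidence_region(data, chi2_crit=5.991):
    """Grid-search the MLE and return the joint likelihood-ratio region:
    all (mu, sigma) with 2*(ll_max - ll) <= chi2_crit (95%, 2 dof)."""
    grid = [(mu / 10.0, sig / 10.0)
            for mu in range(0, 101)      # mu in [0, 10]
            for sig in range(1, 51)]     # sigma in [0.1, 5.0]
    lls = {p: log_likelihood(p[0], p[1], data) for p in grid}
    mle = max(lls, key=lls.get)
    region = [p for p in grid if 2.0 * (lls[mle] - lls[p]) <= chi2_crit]
    return mle, region

# Hypothetical go/no-go outcomes: (stimulus level, 1 = response)
data = [(3, 0), (4, 0), (4, 1), (5, 0), (5, 1), (6, 1), (6, 1), (7, 1), (8, 1)]
```

The computational cost the abstract mentions is visible even here: the region requires evaluating the likelihood over the whole (μ, σ) grid rather than a single asymptotic formula.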

  4. Tests of methods and software for set-valued model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Janssen PHM; Sanders R; CWM

    1995-01-01

Tests are discussed that were performed on methods and software for calibration by means of 'rotated random scanning', and for sensitivity analysis based on 'dominant direction analysis' and 'generalized sensitivity analysis'. These techniques were recently…

  5. Sensitivity Tests of a Surface-Layer Windflow Model to Effects of Stability and Vegetation.

    Science.gov (United States)

    1985-10-25

Air Force Geophysics Laboratory. … reasonably accurate wind speeds and directions between 1 and 2 km above ground level (AGL), incorporating effects of sur… There are several reasons why a variational approach is appropriate for this modeling application. Ball and Johnson stated that the advection terms of the…

  6. Multilayer Cloud Detection Using MODIS: Sensitivity Tests Using a Forward Model

    Science.gov (United States)

    Wind, G.; Platnick, S.; King, M. D.

    2008-05-01

The most recent processing effort for the MODIS Atmosphere Team, referred to as the Collection 5 stream, includes a research-level multilayer cloud detection algorithm that uses thermodynamic phase information derived from a combination of solar and thermal emission bands to discriminate layers of different phases, as well as true layer-separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at a 1 km resolution. To investigate further the performance of the multilayer cloud detection algorithm, we have run a set of forward models of multilayer clouds of varying layer separation, thermodynamic phase, and optical and microphysical properties, under varying surface and atmospheric conditions, using the DISORT radiative transfer code. The model output, in the form of equivalent reflectances in the MODIS bands, is then used as input to the operational MODIS cloud optical and microphysical properties retrieval algorithm, and results are compared to the known truth of the DISORT input. We present the results of this investigation with an emphasis on the applicability and skill of the MODIS multilayer cloud detection algorithm.

  7. Predictive modeling for diagnostic tests with high specificity, but low sensitivity: a study of the glycerol test in patients with suspected Meniere's disease.

    Directory of Open Access Journals (Sweden)

    Bernd Lütkenhöner

    Full Text Available A high specificity does not ensure that the expected benefit of a diagnostic test outweighs its cost. Problems arise, in particular, when the investigation is expensive, the prevalence of a positive test result is relatively small for the candidate patients, and the sensitivity of the test is low so that the information provided by a negative result is virtually negligible. The consequence may be that a potentially useful test does not gain broader acceptance. Here we show how predictive modeling can help to identify patients for whom the ratio of expected benefit and cost reaches an acceptable level so that testing these patients is reasonable even though testing all patients might be considered wasteful. Our application example is based on a retrospective study of the glycerol test, which is used to corroborate a suspected diagnosis of Menière's disease. Using the pretest hearing thresholds at up to 10 frequencies, predictions were made by K-nearest neighbor classification or logistic regression. Both methods estimate, based on results from previous patients, the posterior probability that performing the considered test in a new patient will have a positive outcome. The quality of the prediction was evaluated using leave-one-out cross-validation, making various assumptions about the costs and benefits of testing. With reference to all 356 cases, the probability of a positive test result was almost 0.4. For subpopulations selected by K-nearest neighbor classification, which was clearly superior to logistic regression, this probability could be increased up to about 0.6. Thus, the odds of a positive test result were more than doubled.
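The record's approach — estimating each patient's posterior probability of a positive test with K-nearest-neighbour classification evaluated by leave-one-out cross-validation — can be sketched as follows. The data here are randomly generated stand-ins (the real predictors are pretest hearing thresholds at up to 10 frequencies), and K = 15 and the 0.5 decision threshold are arbitrary assumptions, not values from the study:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(50, 15, size=(356, 10))   # stand-in thresholds at 10 frequencies (dB HL)
y = rng.random(356) < 0.4                # ~0.4 prevalence of a positive glycerol test

knn = KNeighborsClassifier(n_neighbors=15)
# Leave-one-out posterior probabilities P(positive outcome | thresholds),
# each prediction made from a model fit on the other 355 patients
proba = cross_val_predict(knn, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]

# Test only patients whose predicted probability clears a cost/benefit threshold
selected = proba > 0.5
print(selected.sum(), proba.mean())
```

On real data, patients in the `selected` subpopulation would be the ones for whom the expected benefit of testing justifies its cost.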

  8. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    Directory of Open Access Journals (Sweden)

    A. Nickless

    2014-05-01

Full Text Available This is the second part of a two-part paper considering network design based on a Lagrangian stochastic particle dispersion model (LPDM), aimed at reducing the uncertainty of the flux estimates achievable for the region of interest through the continuous observation of atmospheric CO2 concentrations at fixed monitoring stations. The LPDM, which can be used to derive the sensitivity matrix used in an inversion, was run for each potential site for the months of July (representative of the Southern Hemisphere winter) and January (summer). The magnitude of the boundary contributions to each potential observation site was tested to determine its inclusion in the network design, but was found to be minimal. Through the use of the Bayesian inverse modelling technique, the sensitivity matrix, together with the prior estimates for the covariance matrices of the observations and surface fluxes, was used to calculate the posterior covariance matrix of the estimated fluxes, which in turn was used to calculate the cost function of the optimisation procedure. The optimisation procedure was carried out for South Africa under a standard set of conditions, similar to those applied to the Australian test case in Part 1, for both months and for the two months combined. The conditions were then changed subtly, one at a time, the optimisation routine was re-run under each set of modified conditions, and the result was compared to the original optimal network design. The results showed that changing the height of the surface grid cells, including an uncertainty estimate for the oceans, or increasing the night-time observational uncertainty did not result in any major changes in the positioning of the stations relative to the basic design, but changing the covariance matrix or increasing the spatial resolution did. The genetic algorithm was able to find a slightly better solution than the incremental optimisation procedure, but did not drastically alter the solution compared to the standard case.
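The Bayesian inversion step described above, in which the sensitivity matrix and the prior covariances yield the posterior covariance of the fluxes, can be sketched numerically. All matrices below are random or diagonal stand-ins; in the study the sensitivity (Jacobian) matrix H would come from the LPDM runs:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_flux = 20, 8
H = rng.normal(size=(n_obs, n_flux))      # stand-in sensitivity matrix from the LPDM
C_obs = np.diag(np.full(n_obs, 0.5**2))   # observation error covariance
C_prior = np.diag(np.full(n_flux, 2.0**2))  # prior flux error covariance

# Posterior covariance of the estimated fluxes:
#   C_post = (H^T C_obs^-1 H + C_prior^-1)^-1
C_post = np.linalg.inv(H.T @ np.linalg.inv(C_obs) @ H + np.linalg.inv(C_prior))

# A typical network-design cost function: total posterior flux uncertainty,
# to be minimised over candidate station sets
cost = np.trace(C_post)
print(cost)
```

Network design then amounts to choosing the station set (i.e. the rows of H) that minimises this cost, e.g. incrementally or with a genetic algorithm as in the paper.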

  9. An in vitro method for detecting chemical sensitization using human reconstructed skin models and its applicability to cosmetic, pharmaceutical, and medical device safety testing.

    Science.gov (United States)

    McKim, James M; Keller, Donald J; Gorski, Joel R

    2012-12-01

Chemical sensitization is a serious condition caused by small reactive molecules and is characterized by a delayed-type hypersensitivity known as allergic contact dermatitis (ACD). Contact with these molecules via dermal exposure represents a significant concern for chemical manufacturers. Recent legislation in the EU has created the need to develop non-animal alternative methods for many routine safety studies, including sensitization. Although most of the alternative research has focused on pure chemicals that possess reasonable solubility properties, it is important for any successful in vitro method to have the ability to test compounds with low aqueous solubility. This is especially true for the medical device industry, where device extracts must be prepared in both polar and non-polar vehicles in order to evaluate chemical sensitization. The aim of this research was to demonstrate the functionality and applicability of human reconstituted skin models (MatTek Epiderm(®) and SkinEthic RHE) as a test system for the evaluation of chemical sensitization and their potential use for medical device testing. In addition, the development of the human 3D skin model should allow the in vitro sensitization assay to be used for finished product testing in the personal care, cosmetics, and pharmaceutical industries. This approach combines solubility, chemical reactivity, cytotoxicity, and activation of the Nrf2/ARE expression pathway to identify and categorize chemical sensitizers. Known chemical sensitizers representing extreme/strong-, moderate-, weak-, and non-sensitizing potency categories were first evaluated in the skin models at six exposure concentrations ranging from 0.1 to 2500 µM for 24 h. The expression of eight Nrf2/ARE-, one AhR/XRE- and two Nrf1/MRE-controlled genes was measured by qRT-PCR. The fold-induction at each exposure concentration was combined with reactivity and cytotoxicity data to determine the sensitization potential. The results demonstrated that...

  10. Using High-Resolution Data to Test Parameter Sensitivity of the Distributed Hydrological Model HydroGeoSphere

    Directory of Open Access Journals (Sweden)

    Thomas Cornelissen

    2016-05-01

Full Text Available Parameterization of physically based and distributed hydrological models for mesoscale catchments remains challenging because the commonly available data base is insufficient for calibration. In this paper, we parameterize a mesoscale catchment for the distributed model HydroGeoSphere by transferring evapotranspiration parameters calibrated at a highly equipped headwater catchment, supplemented with literature data. Based on this parameterization, a sensitivity analysis was conducted of the mesoscale catchment to spatial variability in land use, potential evapotranspiration and precipitation, and of the headwater catchment to mesoscale soil and land use data. Simulations of the mesoscale catchment with transferred parameters reproduced daily discharge dynamics and monthly evapotranspiration of grassland, deciduous and coniferous vegetation in a satisfactory manner. Precipitation was the most sensitive input data with respect to total runoff and peak flow rates, while simulated evapotranspiration components and patterns were most sensitive to the spatially distributed land use parameterization. At the headwater catchment, coarse soil data resulted in a change in runoff-generating processes, based on the interplay between higher wetness prior to a rainfall event, enhanced groundwater level rise and, accordingly, lower transpiration rates. Our results indicate that the direct transfer of parameters is a promising method for benefiting from highly equipped headwater catchment simulations.

  11. Testing a river basin model with sensitivity analysis and autocalibration for an agricultural catchment in SW Finland

    Directory of Open Access Journals (Sweden)

    S. TATTARI

    2008-12-01

Full Text Available Modeling tools are needed to assess (i) the amounts of loading from agricultural sources to water bodies and (ii) the alternative management options in varying climatic conditions. These days, the implementation of the Water Framework Directive (WFD) has also placed entirely new requirements on modeling approaches. The physically based models are commonly not operational, and thus the usability of these models is restricted to a few selected catchments. But the rewarding feature of these process-based models is the option to study the effect of protection measures on a catchment scale and, up to a certain point, the possibility of upscaling the results. In this study, the parameterization of the SWAT model was developed in terms of discharge dynamics and nutrient loads, and a sensitivity analysis regarding discharge and sediment concentration was made. The SWAT modeling exercise was carried out for a 2nd-order catchment (Yläneenjoki, 233 km2) of the Eurajoki river basin in southwestern Finland. The Yläneenjoki catchment has been intensively monitored during the last 14 years. Hence, there was enough background information available for both parameter setup and calibration. In addition to load estimates, SWAT also offers the possibility to assess the effects of various agricultural management actions like fertilization, tillage practices, choice of cultivated plants, buffer strips, sedimentation ponds and constructed wetlands (CWs) on loading. Moreover, information on local agricultural practices and the implemented and planned protective measures was readily available thanks to aware farmers and active authorities. Here, we studied how CWs can reduce the nutrient load at the outlet of the Yläneenjoki river basin. The results suggested that the sensitivity analysis and autocalibration tools incorporated in the model are useful in pointing out the most influential parameters, and that flow dynamics and annual loading values can be modeled with reasonable...

  12. (abstract) Using TOPEX/Poseidon Sea Level Observations to Test the Sensitivity of an Ocean Model to Wind Forcing

    Science.gov (United States)

    Fu, Lee-Lueng; Chao, Yi

    1996-01-01

    It has been demonstrated that current-generation global ocean general circulation models (OGCM) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 Scatterometer and the NMC operational analysis were compared to the observations made by the TOPEX/Poseidon (T/P) radar altimeter for a two-year period. The result of the study has demonstrated the sensitivity of the OGCM to the quality of wind forcing, as well as the synergistic use of two spaceborne sensors in advancing the study of wind-driven ocean dynamics.

  13. Comparison of the sensitivities of the Buehler test and the guinea pig maximization test for predictive testing of contact allergy

    DEFF Research Database (Denmark)

    Frankild, S; Vølund, A; Wahlberg, J E;

    2001-01-01

    dose-response model. To compare the sensitivity of the 2 test procedures the test conditions were kept identical and the following chemicals with a range of sensitization potentials were tested: chloraniline, chlorhexidine, eugenol, formaldehyde, mercaptobenzothiazole and neomycin sulphate....... Formaldehyde and neomycin sulphate were strong sensitizers in both tests. Mercaptobenzothiazole, eugenol and chloraniline were all strong sensitizers in the GPMT, eugenol and mercaptobenzothiazole were negative in the Buehler test and equivocal results were obtained with chloraniline. Chlorhexidine...

  14. Sensitive neutralization test for rubella antibody.

    Science.gov (United States)

    Sato, H; Albrecht, P; Krugman, S; Ennis, F A

    1979-01-01

A modified rubella virus plaque neutralization test for measuring rubella antibody was developed based on the potentiation of the virus-antibody complex by heterologous anti-immunoglobulin. The test is highly sensitive, yielding titers on average 50 to 100 times higher than the hemagglutination inhibition test or the conventional plaque neutralization test. The sensitivity of this enhanced neutralization test is somewhat limited by the existence of a prozone phenomenon, which precludes testing of low-titered sera below a dilution of 1:16. No prozone effect was observed with cerebrospinal fluids. The specificity of the enhanced neutralization test was determined by seroconversion of individuals receiving rubella vaccine. Although the rubella hemagglutination inhibition test remains the test of choice in routine diagnostic and surveillance work, the enhanced rubella neutralization test is particularly useful in monitoring low-level antibody in the cerebrospinal fluid of patients with neurological disorders and in certain instances of vaccine failure. PMID:107192

  15. Sensitivity testing of the model set-up used for calculation of photochemical ozone creation potentials (POCP) under European conditions

    Energy Technology Data Exchange (ETDEWEB)

    Altenstedt, J.; Pleijel, K.

    1998-02-01

Photochemical ozone creation potential (POCP) is a method of ranking VOCs, relative to other VOCs, according to their ability to produce ground-level ozone. To obtain POCP values valid under European conditions, a critical analysis of the POCP concept was performed using the IVL photochemical trajectory model. The critical analysis concentrated on three VOCs (ethene, n-butane and o-xylene) and analysed the effect on their POCP values when different model parameters were varied. The three species were chosen because of their different degradation mechanisms in the atmosphere and thus their different abilities to produce ozone. The model parameters tested include background emissions, initial concentrations, dry deposition velocities, the features of the added point source and meteorological parameters. The critical analysis shows that the background emissions of NOx and VOC have a critical impact on the POCP values. The hour of the day of the point source emission also has a large influence on the POCP values. The other model parameters studied did not show such a large influence on the POCP values. Based on the critical analysis, a model set-up for the calculation of POCP is defined. The variations in POCP values due to changes in the background emissions of NOx and VOC are so large that they cannot be disregarded in the calculation of POCP. It is recommended to calculate POCP ranges based on the extremes in POCP values instead of calculating site-specific POCP values. Four individual emission scenarios which produced the extremes in POCP values in the analysis were selected for future calculation of POCP ranges. The scenarios are constructed based on the emissions in Europe, and the resulting POCP ranges are thus intended to be applicable within Europe. 67 refs, 61 figs, 16 tabs
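The POCP itself is conventionally defined as the ozone increment produced per unit mass of emitted VOC, expressed relative to ethene (set to 100). A minimal sketch of that ratio, with invented ozone increments standing in for the trajectory-model output:

```python
# Illustrative POCP computation; in the study the ozone increments come from
# the IVL photochemical trajectory model, not from these made-up numbers.
def pocp(delta_o3_voc, mass_voc, delta_o3_ethene, mass_ethene):
    """POCP: ozone increment per unit mass of VOC, relative to ethene (= 100)."""
    return 100.0 * (delta_o3_voc / mass_voc) / (delta_o3_ethene / mass_ethene)

# Hypothetical ozone increments (ppb) from equal 1 kg point-source additions
print(pocp(4.2, 1.0, 4.2, 1.0))   # ethene itself -> 100.0
print(pocp(1.5, 1.0, 4.2, 1.0))   # a less reactive, n-butane-like VOC
```

Because the increments depend on background NOx and VOC emissions, the POCP value itself inherits that dependence, which is why the record recommends reporting ranges rather than single site-specific values.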

  16. Effects of snow grain non-sphericity on climate simulations: Sensitivity tests with the NorESM model

    Science.gov (United States)

    Räisänen, Petri; Makkonen, Risto; Kirkevåg, Alf

    2017-04-01

    optically thick snowpack with a given snow grain effective size, the absorbing aerosol RE is smaller for non-spherical than for spherical snow grains. The reason for this is that due to the lower asymmetry parameter of the non-spherical snow grains, solar radiation does not penetrate as deep in snow as in the case of spherical snow grains. However, in a climate model simulation, the RE is sensitive to patterns of aerosol deposition and simulated snow cover. In fact, the global land-area mean absorbing aerosol RE is larger in the NONSPH than SPH experiment (0.193 vs. 0.168 W m-2), owing to later snowmelt in spring.

  17. Component resolved testing for allergic sensitization

    DEFF Research Database (Denmark)

    Skamstrup Hansen, Kirsten; Poulsen, Lars K

    2010-01-01

Component resolved diagnostics introduces new possibilities regarding diagnosis of allergic diseases and individualized, allergen-specific treatment. Furthermore, refinement of IgE-based testing may help elucidate the correlation or lack of correlation between allergenic sensitization and allergic...

  18. Parametric Sensitivity Tests - European PEM Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests...... performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack’s performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen; relative humidity, temperature...

  19. Use of simulation modeling to estimate herd-level sensitivity, specificity, and predictive values of diagnostic tests for detection of tuberculosis in cattle.

    Science.gov (United States)

    Norby, Bo; Bartlett, Paul C; Grooms, Daniel L; Kaneene, John B; Bruning-Fann, Colleen S

    2005-07-01

To estimate herd-level sensitivity (HSe), specificity (HSp), and predictive values for a positive (HPVP) and negative (HPVN) test result for several testing scenarios for detection of tuberculosis in cattle by use of simulation modeling. Simulations were based on empirical distributions of all herds (15,468) and of herds in a 10-county area (1,016) in Michigan. 5 test scenarios were simulated: scenario 1, serial interpretation of the caudal fold tuberculin (CFT) test and comparative cervical test (CCT); scenario 2, serial interpretation of the CFT test and CCT, microbial culture for mycobacteria, and polymerase chain reaction assay; scenario 3, same as scenario 2 but with specificity fixed at 1.0; and scenario 4, sensitivity of 0.9 (scenario 4a) or 0.95 (scenario 4b) with specificity fixed at 1.0. Estimates for HSe were reasonably high, ranging between 0.712 and 0.840. Estimates for HSp were low when specificity was not fixed at 1.0. Estimates of HPVP were low for scenarios 1 and 2 (0.042 and 0.143, respectively) but increased to 1.0 when specificity was fixed at 1.0. The HPVN remained high for all 5 scenarios, ranging between 0.995 and 0.997. As herd size increased, HSe increased and HSp and HPVP decreased. However, fixing specificity at 1.0 had only minor effects on HSp and HPVN, but HSe was low when the herd size was small. Tests used for detecting cattle herds infected with tuberculosis work well on a herd basis. Herds with fewer than approximately 100 cattle should be tested more frequently or for a longer duration than larger herds to ensure that these small herds are free of tuberculosis.
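The herd-level quantities simulated above follow from the animal-level test characteristics. Assuming independent animals and a cut-point of one positive result per herd (a deliberate simplification of the study's multi-test scenarios, with invented parameter values), closed-form sketches are:

```python
def herd_sensitivity(n, prevalence, se, sp):
    """P(>= 1 positive result in an infected herd of n animals), assuming
    within-herd prevalence `prevalence` and animal-level Se/Sp."""
    p_pos_animal = prevalence * se + (1 - prevalence) * (1 - sp)
    return 1 - (1 - p_pos_animal) ** n

def herd_specificity(n, sp):
    """P(no false-positive results in an uninfected herd of n animals)."""
    return sp ** n

# Illustrative values only: 5% within-herd prevalence, Se 0.80, Sp 0.999
print(herd_sensitivity(100, 0.05, 0.80, 0.999))
print(herd_specificity(100, 0.999))
```

These formulas reproduce the qualitative result in the record: as herd size n grows, HSe rises while HSp falls, which is why small herds need more frequent or longer testing.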

  20. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

Full Text Available et al., 1999; Rödenbeck et al., 2003; Chevallier et al., 2010). This method relies on precision measurements of atmospheric CO2 to refine the prior estimates of the fluxes. Using this theory, an optimal network of new measurement sites... of the South African network design, these variables are produced by the CSIRO Conformal-Cubic Atmospheric Model (CCAM), a global circulation model. CCAM is a two-time-level semi-implicit hydrostatic primitive equation model developed by McGregor (1987) and later...

  1. The role of anxiety sensitivity cognitive concerns in suicidal ideation: A test of the Depression-Distress Amplification Model in clinical outpatients.

    Science.gov (United States)

    Norr, Aaron M; Allan, Nicholas P; Macatee, Richard J; Capron, Daniel W; Schmidt, Norman B

    2016-04-30

Suicide constitutes a significant public health burden as global suicide rates continue to increase. Thus, it is crucial to identify malleable suicide risk factors to develop prevention protocols. Anxiety sensitivity, or a fear of anxiety-related sensations, is a potential malleable risk factor for the development of suicidal ideation. The Depression-Distress Amplification Model (DDAM) posits that the anxiety sensitivity cognitive concerns (ASCC) subfactor interacts with depressive symptoms to amplify the effects of depression and lead to suicidal ideation. The current study tested the DDAM across the two most widely replicated factors of depressive symptoms (cognitive and affective/somatic) in comparison to a risk-factor mediation model in which ASCC are related to suicidal ideation via depressive symptoms. Participants included 295 clinical outpatients from a community clinic. The interaction between ASCC and depressive symptoms in the prediction of suicidal ideation was not significant for either cognitive or affective/somatic symptoms of depression. However, results revealed a significant indirect effect of ASCC through cognitive symptoms of depression in the prediction of suicidal ideation. These cross-sectional findings are not consistent with the DDAM. Rather, the relationship may be better conceptualized with a model in which ASCC is related to suicidal ideation via cognitive symptoms of depression.

  2. Component resolved testing for allergic sensitization

    DEFF Research Database (Denmark)

    Skamstrup Hansen, Kirsten; Poulsen, Lars K

    2010-01-01

Component resolved diagnostics introduces new possibilities regarding diagnosis of allergic diseases and individualized, allergen-specific treatment. Furthermore, refinement of IgE-based testing may help elucidate the correlation or lack of correlation between allergenic sensitization and allergic disease. Novel tools to predict severe outcomes and to plan for allergen-specific treatment are necessary, and because only a small amount of blood is needed to test for a multitude of allergens and allergenic components, component resolved diagnostics is promising. A drawback is the risk of overdiagnosis and misinterpretation of the complex results of such tests. Also, the practical use and selection of allergenic components need to be evaluated in large studies including well-characterized patients and healthy, sensitized controls, and with representation of different geographical regions.

  3. Validation, replication, and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.

    Directory of Open Access Journals (Sweden)

    Samuel J Clark

Full Text Available A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate-change and age-structure-change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.

  4. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
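A first-order version of such a hierarchical correlated function expansion (often called HDMR) can be sketched on a toy stand-in for the original model: the zeroth-order term is the output mean, and each first-order component is a conditional mean estimated from runs of the model. The toy model, binning scheme and sample sizes are arbitrary choices, not from the report:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):                         # toy stand-in for the original model
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 1]

X = rng.uniform(-1, 1, size=(50_000, 2))   # runs of the original model
y = model(X)
f0 = y.mean()                              # zeroth-order expansion term

def component(i, grid, n_bins=50):
    """f_i(x_i) = E[f | x_i] - f0, estimated by binning the model runs."""
    bins = np.linspace(-1, 1, n_bins + 1)
    idx = np.clip(np.digitize(X[:, i], bins) - 1, 0, n_bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(n_bins)])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return np.interp(grid, centers, cond_mean) - f0

def feom(x):
    """Assembled first-order expansion: a cheap surrogate for the model."""
    return f0 + component(0, x[:, 0]) + component(1, x[:, 1])

x_test = rng.uniform(-1, 1, size=(1000, 2))
err = np.abs(feom(x_test) - model(x_test)).max()
print(f0, err)
```

The residual error here is dominated by the small interaction term, which a first-order expansion cannot capture; adding second-order components f_ij(x_i, x_j) would remove it.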

  5. Economic modeling and sensitivity analysis.

    Science.gov (United States)

    Hay, J W

    1998-09-01

    The field of pharmacoeconomics (PE) faces serious concerns of research credibility and bias. The failure of researchers to reproduce similar results in similar settings, the inappropriate use of clinical data in economic models, the lack of transparency, and the inability of readers to make meaningful comparisons across published studies have greatly contributed to skepticism about the validity, reliability, and relevance of these studies to healthcare decision-makers. Using a case study in the field of lipid PE, two suggestions are presented for generally applicable reporting standards that will improve the credibility of PE. Health economists and researchers should be expected to provide either the software used to create their PE model or a multivariate sensitivity analysis of their PE model. Software distribution would allow other users to validate the assumptions and calculations of a particular model and apply it to their own circumstances. Multivariate sensitivity analysis can also be used to present results in a consistent and meaningful way that will facilitate comparisons across the PE literature. Using these methods, broader acceptance and application of PE results by policy-makers would become possible. To reduce the uncertainty about what is being accomplished with PE studies, it is recommended that these guidelines become requirements of both scientific journals and healthcare plan decision-makers. The standardization of economic modeling in this manner will increase the acceptability of pharmacoeconomics as a practical, real-world science.
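A multivariate sensitivity analysis of the kind recommended here can be sketched as a Monte Carlo over jointly sampled parameters of a toy cost-effectiveness model. All distributions, costs and names below are invented for illustration, not taken from the lipid case study:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Jointly sample every uncertain parameter (invented distributions)
drug_cost      = rng.uniform(800, 1200, n)      # annual drug cost, $/patient
event_cost     = rng.uniform(5_000, 20_000, n)  # cost of one clinical event, $
risk_reduction = rng.beta(30, 70, n)            # relative risk reduction of the drug
baseline_risk  = rng.beta(10, 90, n)            # annual baseline event risk

events_avoided = baseline_risk * risk_reduction
net_cost = drug_cost - events_avoided * event_cost
icer = net_cost / events_avoided                # incremental cost per event avoided

lo, hi = np.percentile(icer, [2.5, 97.5])
print(f"95% interval for the ICER: {lo:,.0f} to {hi:,.0f} $/event avoided")
```

Reporting an interval such as this, rather than a single point estimate, is one concrete way to meet the article's call for consistent, comparable sensitivity reporting across PE studies.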

  6. MUTZ-3-derived dendritic cells as an in vitro alternative model to CD34+ progenitor-derived dendritic cells for testing of chemical sensitizers.

    Science.gov (United States)

    Nelissen, Inge; Selderslaghs, Ingrid; Heuvel, Rosette Van Den; Witters, Hilda; Verheyen, Geert R; Schoeters, Greet

    2009-12-01

    The cytokine-dependent CD34(+) human acute myeloid leukaemia cell line MUTZ-3 was used to generate immature dendritic-like cells (MUTZ-3 DC) and their validity as an alternative to primary CD34(+) progenitor-derived DC (CD34-DC) for testing chemical-induced sensitization was assessed. Expression levels of the DC maturation markers HLA-DR, CD86, CD83 and CD11c were studied using flow cytometry after 24 and 48 h exposure to the model compound nickel sulphate (100 and 300 microM). No maturation of MUTZ-3 DC was observed, whereas significantly upregulated expression levels of CD83 and CD86 were noticed in CD34-DC after 24h treatment with 300 microM nickel sulphate compared to control cells. Differential expression of the cytokine genes IL1beta, IL6, IL8, CCL2, CCL3, CCL3L1, CCL4 was analyzed using real-time RT-PCR after 6, 10 and 24h of nickel sulphate exposure. In response to 100 microM nickel sulphate MUTZ-3 DC revealed slightly upregulated mRNA levels after 24h, whereas 300 microM induced transcription of CCL3, CCL3L1 and IL8 significantly after 6 or 10h. These cytokine data correspond to the previously observed effects of 100 microM nickel sulphate in CD34-DC. Our findings underline the stimulatory capacity of nickel sulphate in MUTZ-3 DC with regard to cytokine mRNA induction, but not surface marker expression. Compared to CD34-DC, however, the studied endpoint markers seemed to be less inducible, making the MUTZ-3 DC model in its presented form less suitable for in vitro testing of sensitization. Further assessment of MUTZ-3 DC using other differentiation protocols and an extended set of chemicals will be required to reveal whether this cell line may be a valid alternative model system to primary CD34-DC.
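The fold-induction values reported above come from real-time RT-PCR. Assuming the standard Livak 2^-ΔΔCt normalization (the abstract does not state which scheme was used, and the Ct values below are invented), the calculation is:

```python
def fold_induction(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    """Livak 2^-ddCt: expression of a target gene, normalized to a reference
    (housekeeping) gene, in treated relative to control cells."""
    ddct = (ct_gene_treated - ct_ref_treated) - (ct_gene_control - ct_ref_control)
    return 2.0 ** (-ddct)

# e.g. a cytokine gene such as IL8 after nickel sulphate exposure,
# normalized to a hypothetical housekeeping gene
print(fold_induction(22.0, 18.0, 25.0, 18.5))   # -> 2^2.5 ≈ 5.66
```

Lower Ct means earlier amplification, i.e. more transcript, so a treated ΔCt that is 2.5 cycles smaller than the control ΔCt corresponds to roughly a 5.7-fold induction.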

  7. Echinodermata in ecotoxicological tests: maintenance and sensitivity

    Directory of Open Access Journals (Sweden)

    Jocássio Batista Soares

    2016-03-01

Full Text Available Abstract This work investigates the sensitivity of four species of Echinodermata (Lytechinus variegatus, Echinometra lucunter, Arbacia lixula and Encope emarginata), evaluating the effect of five reference toxicants (Cd, Pb, Cr, Cu and SDS) on embryo-larval development, following the official protocols. It also evaluates techniques for the maintenance of L. variegatus in the laboratory, changes in its sensitivity, and the effects of chemical agents that induce the release of gametes on the survival rates of the organisms. In terms of the maintenance of L. variegatus in the laboratory, the diet with vegetable content appears to be more favorable for maintenance and maturation in cultivation tanks. Chemical inducers such as KCl and the anesthetic (lidocaine and epinephrine) resulted in high adult mortality rates, discouraging their repeated use for induction. The tests performed with different species of sea urchin and sand dollar, using different reference toxicants, showed no variations in sensitivity to the more toxic chemicals, indicating that different species can be used for evaluation and environmental impact assessment.

  8. On the use of sensitivity tests in seismic tomography

    NARCIS (Netherlands)

    Rawlinson, N.; Spakman, W.|info:eu-repo/dai/nl/074103164

    2016-01-01

Sensitivity analysis with synthetic models is widely used in seismic tomography as a means for assessing the spatial resolution of solutions produced by, in most cases, linear or iterative nonlinear inversion schemes. The most common type of synthetic reconstruction test is the so-called checkerboard test.
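A checkerboard test starts from a synthetic alternating-anomaly model, which is forward-modelled into synthetic data and then re-inverted; where the pattern is recovered, the inversion has resolution. A minimal sketch of constructing such a pattern (the grid size, cell size and amplitude are arbitrary choices, not from the paper):

```python
import numpy as np

def checkerboard(nx, ny, cell, amplitude=0.05):
    """Alternating +/- fractional velocity anomalies on an nx-by-ny grid,
    with square cells `cell` nodes wide."""
    ix, iy = np.meshgrid(np.arange(nx) // cell, np.arange(ny) // cell,
                         indexing="ij")
    return amplitude * np.where((ix + iy) % 2 == 0, 1.0, -1.0)

pattern = checkerboard(60, 40, cell=10)
# In a full test: d_syn = G @ pattern.ravel() + noise, then invert d_syn and
# compare the recovered model with `pattern` region by region.
print(pattern.shape, pattern.max(), pattern.min())
```

As the record notes, such tests probe resolution at one spatial wavelength at a time, which is one of the caveats of relying on them exclusively.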


  11. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, was performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetyl nitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased, compared to the background methane chemistry, by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggests that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and points towards specific processes in need of focused future work.

  12. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part II: Sensitivity Tests and Results

    Science.gov (United States)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.

  13. Inverse modeling of cloud-aerosol interactions -- Part 2: Sensitivity tests on liquid phase clouds using a Markov Chain Monte Carlo based simulation approach

    NARCIS (Netherlands)

    Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Struthers, H.; Sorooshian, A.

    2012-01-01

    This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the global sensitivity of a cloud model to input aerosol physiochemical parameters.

  14. Inverse modelling of cloud-aerosol interactions – Part 2: Sensitivity tests on liquid phase clouds using a Markov chain Monte Carlo based simulation approach

    Directory of Open Access Journals (Sweden)

    D. G. Partridge

    2012-03-01

    Full Text Available This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the global sensitivity of a cloud model to input aerosol physiochemical parameters. Using numerically generated cloud droplet number concentration (CDNC) distributions (i.e. synthetic data) as cloud observations, this inverse modelling framework is shown to successfully estimate the correct calibration parameters and their underlying posterior probability distribution.

    The employed analysis method provides a new, integrative framework to evaluate the global sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation mode aerosol and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insights. There is a transition in relative sensitivity from very clean marine Arctic conditions, where the lognormal aerosol parameters representing the accumulation mode aerosol number concentration and mean radius are found to be most important for determining the CDNC distribution, to very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm−3), where particle chemistry is more important than both the number concentration and size of the accumulation mode.

    The competition and compensation between the cloud model input parameters illustrate that if the soluble mass fraction is reduced, the aerosol number concentration, geometric standard deviation and mean radius of the accumulation mode must increase in order to achieve the same CDNC distribution.

    This study demonstrates that inverse modelling provides a flexible, transparent and

  15. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2003-06-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30–60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  16. Inverse modeling of cloud-aerosol interactions – Part 2: Sensitivity tests on liquid phase clouds using a Markov Chain Monte Carlo based simulation approach

    Directory of Open Access Journals (Sweden)

    D. G. Partridge

    2011-07-01

    Full Text Available This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov Chain Monte Carlo (MCMC) algorithm to a pseudo-adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the sensitivity of a cloud model to input aerosol physiochemical parameters. Using synthetic data as observed values of the cloud droplet number concentration (CDNC) distribution, this inverse modelling framework is shown to successfully converge to the correct calibration parameters.

    The employed analysis method provides a new, integrative framework to evaluate the sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation mode and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insightful findings. There is a clear transition from very clean marine Arctic conditions, where the aerosol parameters representing the mean radius and geometric standard deviation of the accumulation mode are found to be most important for determining the CDNC distribution, to very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm−3), where particle chemistry is more important than both the number concentration and size of the accumulation mode.

    The competition and compensation between the cloud model input parameters illustrate that if the soluble mass fraction is reduced, the number of particles, the geometric standard deviation and the mean radius of the accumulation mode must all increase in order to achieve the same CDNC distribution.

    For more polluted aerosol conditions, with a reduction in soluble mass fraction the parameter correlation becomes weaker and more non-linear over the range of possible solutions

  17. Highly sensitive silicon microreactor for catalyst testing

    DEFF Research Database (Denmark)

    Henriksen, Toke Riishøj; Olsen, Jakob Lind; Vesborg, Peter Christian Kjærgaard;

    2009-01-01

    by directing the entire gas flow through the catalyst bed to a mass spectrometer, thus ensuring that nearly all reaction products are present in the analyzed gas flow. Although the device can be employed for testing a wide range of catalysts, the primary aim of the design is to allow characterization of model...... catalysts which can only be obtained in small quantities. Such measurements are of significant fundamental interest but are challenging because of the low surface areas involved. The relationship between the reaction zone gas flow and the pressure in the reaction zone is investigated experimentally......, it is found that platinum catalysts with areas as small as 15 μm² are conveniently characterized with the device. (C) 2009 American Institute of Physics. [doi:10.1063/1.3270191]...

  18. Lamb waves increase sensitivity in nondestructive testing

    Science.gov (United States)

    Di Novi, R.

    1967-01-01

    Lamb waves improve sensitivity and resolution in the detection of small defects in thin plates and small diameter, thin-walled tubing. This improvement over shear waves applies to both longitudinal and transverse flaws in the specimens.

  19. Retrospective evaluation of the consequence of alleged patch test sensitization

    DEFF Research Database (Denmark)

    Jensen, Charlotte D; Paulsen, Evy; Andersen, Klaus E

    2006-01-01

    consequences in cases of possible patch test sensitization. Among 7619 consecutively tested eczema patients in a 14-year period 26 (0.3%) were identified in the database as having had a late patch test reaction, which may be an indication of patch test sensitization. 9 of these cases were not suitable...... For the remaining 11 patients we could not rule out that they were patch test sensitized, and they were investigated further. 1 was deceased and 10 were interviewed regarding the possible consequences of the alleged patch test sensitization. 9 had not experienced any dermatitis problems, and 1 could not exclude...

  20. Structure-activity models for contact sensitization.

    Science.gov (United States)

    Fedorowicz, Adam; Singh, Harshinder; Soderholm, Sidney; Demchuk, Eugene

    2005-06-01

    Allergic contact dermatitis (ACD) is a widespread cause of workers' disabilities. Although some substances found in the workplace are rigorously tested, the potential of the vast majority of chemicals to cause skin sensitization remains unknown. At the same time, exhaustive testing of all chemicals in workplaces is costly and raises ethical concerns. New approaches to developing information for risk assessment based on computational (quantitative) structure-activity relationship [(Q)SAR] methods may be complementary to and reduce the need for animal testing. Virtually any number of existing, de novo, and even preconceived compounds can be screened in silico at a fraction of the cost of animal testing. This work investigates the utility of ACD (Q)SAR modeling from the occupational health perspective using two leading software products, DEREK for Windows and TOPKAT, and an original method based on logistic regression methodology. It is found that the correct classification of (Q)SAR predictions for guinea pig data achieves values of 73.3, 82.9, and 87.6% for TOPKAT, DEREK for Windows, and the logistic regression model, respectively. The correct classification using LLNA data equals 73.0 and 83.2% for DEREK for Windows and the logistic regression model, respectively.

  1. Experimental-based Modelling and Simulation of Water Hydraulic Mechatronics Test Facilities for Motion Control and Operation in Environmental Sensitive Applications' Areas

    DEFF Research Database (Denmark)

    Conrad, Finn; Pobedza, J.; Sobczyk, A.

    2003-01-01

    proportional valves and servo actuators for motion control and power transmission undertaken in co-operation by Technical University, DTU and Cracow University of Technology, CUT. The results of this research co-operation include engineering design and test of simulation models compared with two mechatronic......The paper presents experimental-based modelling, simulation, analysis and design of water hydraulic actuators for motion control of machines, lifts, cranes and robots. The contributions includes results from on-going research projects on fluid power and mechatronics based on tap water hydraulic...... test rig facilities powered by environmental friendly water hydraulic servo actuator system. Test rigs with measurement and data acquisition system were designed and build up with tap water hydraulic components of the Danfoss Nessie® product family. This paper presents selected experimental...

  2. Experimental-based Modelling and Simulation of Water Hydraulic Mechatronics Test Facilities for Motion Control and Operation in Environmental Sensitive Applications' Areas

    DEFF Research Database (Denmark)

    Conrad, Finn; Pobedza, J.; Sobczyk, A.

    2003-01-01

    The paper presents experimental-based modelling, simulation, analysis and design of water hydraulic actuators for motion control of machines, lifts, cranes and robots. The contributions includes results from on-going research projects on fluid power and mechatronics based on tap water hydraulic...... proportional valves and servo actuators for motion control and power transmission undertaken in co-operation by Technical University, DTU and Cracow University of Technology, CUT. The results of this research co-operation include engineering design and test of simulation models compared with two mechatronic...

  3. Laboratory Tests of Chameleon Models

    CERN Document Server

    Brax, Philippe; Davis, Anne-Christine; Shaw, Douglas

    2009-01-01

    We present a cursory overview of chameleon models of dark energy and their laboratory tests with an emphasis on optical and Casimir experiments. Optical experiments measuring the ellipticity of an initially polarised laser beam are sensitive to the coupling of chameleons to photons. The next generation of Casimir experiments may be able to unravel the nature of the scalar force mediated by the chameleon between parallel plates.

  4. Latent sensitization: a model for stress-sensitive chronic pain.

    Science.gov (United States)

    Marvizon, Juan Carlos; Walwyn, Wendy; Minasyan, Ani; Chen, Wenling; Taylor, Bradley K

    2015-04-01

    Latent sensitization is a rodent model of chronic pain that reproduces both its episodic nature and its sensitivity to stress. It is triggered by a wide variety of injuries ranging from injection of inflammatory agents to nerve damage. It follows a characteristic time course in which a hyperalgesic phase is followed by a phase of remission. The hyperalgesic phase lasts between a few days to several months, depending on the triggering injury. Injection of μ-opioid receptor inverse agonists (e.g., naloxone or naltrexone) during the remission phase induces reinstatement of hyperalgesia. This indicates that the remission phase does not represent a return to the normal state, but rather an altered state in which hyperalgesia is masked by constitutive activity of opioid receptors. Importantly, stress also triggers reinstatement. Here we describe in detail procedures for inducing and following latent sensitization in its different phases in rats and mice. Copyright © 2015 John Wiley & Sons, Inc.

  5. Sensitivity of solid rocket propellants for card gap test

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Eishu; Oyumi, Yoshio (Japan Defense Agency, Tokyo (Japan). Technical Research and Development Inst.)

    1999-05-01

    The card gap test, which is standardized by the Japan Explosives Society, was modified in order to apply it to solid rocket propellants and carried out to evaluate their sensitivity to shock stimuli. The solid propellants tested here were mainly azide polymer composite propellants containing ammonium nitrate (AN) as the main oxidizer, a double-base propellant composed of nitroglycerin and nitrocellulose (NC), and ammonium perchlorate (AP)-based composite propellants. It is found that the sensitivity was dominated by the oxidizer characteristics. AP- and AN-based propellants were less sensitive, HMX-based propellants showed higher sensitivity, and the addition of NC and TMETN made propellants more sensitive in the card gap test. A good relationship was obtained between the card gap sensitivity and the oxygen balance of the propellants tested here. (orig.)

  6. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validating software requirements in agile development processes by simulation and, in particular, automated testing. The use of models as the central development artifact needs to be added to the portfolio of software engineering techniques to further increase the efficiency and flexibility of development, beginning early in the requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore, testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  7. Precipitates/Salts Model Sensitivity Calculation

    Energy Technology Data Exchange (ETDEWEB)

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  8. 78 FR 68076 - Request for Information on Alternative Skin Sensitization Test Methods and Testing Strategies and...

    Science.gov (United States)

    2013-11-13

    ... workers and consumers exposed to skin-sensitizing chemicals and products. Pesticides and other marketed... relationship (SAR) models to predict skin sensitization. NICEATM collaboration with industry scientists to... sensitization. Participating in validation management groups sponsored by ICATM partner organizations to...

  9. On the use of sensitivity tests in seismic tomography

    Science.gov (United States)

    Rawlinson, N.; Spakman, W.

    2016-05-01

    Sensitivity analysis with synthetic models is widely used in seismic tomography as a means for assessing the spatial resolution of solutions produced by, in most cases, linear or iterative nonlinear inversion schemes. The most common type of synthetic reconstruction test is the so-called checkerboard resolution test in which the synthetic model comprises an alternating pattern of higher and lower wave speed (or some other seismic property such as attenuation) in 2-D or 3-D. Although originally introduced for application to large inverse problems for which formal resolution and covariance could not be computed, these tests have achieved popularity, even when resolution and covariance can be computed, by virtue of being simple to implement and providing rapid and intuitive insight into the reliability of the recovered model. However, checkerboard tests have a number of potential drawbacks, including (1) only providing indirect evidence of quantitative measures of reliability such as resolution and uncertainty, (2) giving a potentially misleading impression of the range of scale-lengths that can be resolved, and (3) not giving a true picture of the structural distortion or smearing that can be caused by the data coverage. The widespread use of synthetic reconstruction tests in seismic tomography is likely to continue for some time yet, so it is important to implement best practice where possible. The goal of this paper is to develop the underlying theory and carry out a series of numerical experiments in order to establish best practice and identify some common pitfalls. Based on our findings, we recommend (1) the use of a discrete spike test involving a sparse distribution of spikes, rather than the use of the conventional tightly spaced checkerboard; (2) using data coverage (e.g. ray-path geometry) inherited from the model constrained by the observations (i.e. the same forward operator or matrix), rather than the data coverage obtained by solving the forward problem

  10. Sensitivity Tests for Cumulative Damage Function (CDF) for the PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chiwoong; Ha, Kwiseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    A safety analysis including design basis and beyond-design-basis events has been conducted using MARS-LMR. Previous safety limits were based on temperature and duration time. However, the cumulative damage function (CDF) will be used as the safety limit to evaluate fuel cladding integrity. Recently, the 4S reactor developed by Toshiba used the same approach for a safety analysis. Therefore, the development of a CDF model is necessary to evaluate the safety limit for PGSFR safety analyses. The key inputs to the CDF model are the behavior of the fuel and cladding. It is not easy to obtain a metallic fuel database for a CDF model including the cladding materials. Argonne National Laboratory (ANL) in the United States is the major leading group for metallic fuel experiments. They conducted various experiments with various facilities and experimental reactors, for example, EBR-II, FFTF, and TREAT. In addition, they have recently been trying to extend their oxide-fuel-based severe accident code, SAS4A/SASSYS, to a metallic fuel version using their metallic fuel database. In this study, a preliminary CDF model was implemented in the MARS-LMR code. The major source was the SAS4A/SASSYS modules related to fuel and cladding transient behaviors. In addition, a sensitivity test for some parameters in the CDF model was conducted to evaluate the capability of these models and to find the major parameters of fuel failure. The cumulative damage function is a good indicator of fuel failure. The major parameters for the CDF model include cladding and fuel temperatures, initial pressure and volume in the gas plenum, cladding thickness, and fission power in the fuel pin. The most sensitive parameter is the cladding temperature. Cladding thickness and gas pressure in the fuel pin are also effective parameters on the CDF. During an actual transient, various parameters, including the sensitivity test parameters in this study, will change simultaneously. This study can

  11. A Modified Sensitive Driving Cellular Automaton Model

    Institute of Scientific and Technical Information of China (English)

    GE Hong-Xia; DAI Shi-Qiang; DONG Li-Yun; LEI Li

    2005-01-01

    A modified cellular automaton model for traffic flow on highways is proposed with a novel concept of a variable security gap. The concept is first introduced into the original Nagel-Schreckenberg model, which is called the non-sensitive driving cellular automaton model. It is then incorporated into a sensitive driving NaSch model, in which the randomization brake is arranged before the deterministic deceleration. A parameter related to the variable security gap is determined through simulation. Comparison of the simulation results indicates that the variable security gap has different influences on the two models. The fundamental diagram obtained by simulation with the modified sensitive driving NaSch model shows that the maximum flow is in good agreement with the observed data, indicating that the presented model is more reasonable and realistic.

  12. Finite element model of needle electrode sensitivity

    Science.gov (United States)

    Høyum, P.; Kalvøy, H.; Martinsen, Ø. G.; Grimnes, S.

    2010-04-01

    We used the Finite Element (FE) Method to estimate the sensitivity of a needle electrode for bioimpedance measurement. This current conducting needle with insulated shaft was inserted in a saline solution and current was measured at the neutral electrode. FE model resistance and reactance were calculated and successfully compared with measurements on a laboratory model. The sensitivity field was described graphically based on these FE simulations.

  13. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was first of all to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  14. Interpreting IgE sensitization tests in food allergy.

    Science.gov (United States)

    Chokshi, Niti Y; Sicherer, Scott H

    2016-01-01

    Food allergies are increasing in prevalence, and with this, IgE testing to foods is becoming more commonplace. Food-specific IgE tests, including serum assays and prick skin tests, are sensitive for detecting the presence of food-specific IgE (sensitization), but specificity for predicting clinical allergy is limited. Therefore, positive tests are generally not, in isolation, diagnostic of clinical disease. However, rational test selection and interpretation, based on clinical history and understanding of food allergy epidemiology and pathophysiology, makes these tests invaluable. Additionally, there exist highly predictive test cutoff values for common allergens in atopic children. Newer testing methodologies, such as component resolved diagnostics, are promising for increasing the utility of testing. This review highlights the use of IgE serum tests in the diagnosis of food allergy.

  15. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine

  16. The Development and Validation of the Vocalic Sensitivity Test.

    Science.gov (United States)

    Villaume, William A.; Brown, Mary Helen

    1999-01-01

    Notes that presbycusis, hearing loss associated with aging, may be marked by a second dimension of hearing loss, a loss in vocalic sensitivity. Reports on the development of the Vocalic Sensitivity Test, which controls for the verbal elements in speech while also allowing for the vocalics to exercise their normal metacommunicative function of…

  17. Stressful events and psychological difficulties : Testing alternative candidates for sensitivity

    NARCIS (Netherlands)

    Laceulle, Odilia M.|info:eu-repo/dai/nl/364227885; O'Donnell, Kieran; Glover, Vivette; O'Connor, Thomas G.; Ormel, Johan; Van Aken, Marcel A G|info:eu-repo/dai/nl/081831218; Nederhof, Esther

    2014-01-01

    The current study investigated the longitudinal, reciprocal associations between stressful events and psychological difficulties from early childhood to mid-adolescence. Child age, sex, prenatal maternal anxiety, and difficult temperament were tested as sources of sensitivity, that is, factors that

  18. Stressful events and psychological difficulties : testing alternative candidates for sensitivity

    NARCIS (Netherlands)

    Laceulle, Odilia M.; O'Donnell, Kieran; Glover, Vivette; O'Connor, Thomas G.; Ormel, Johan; van Aken, Marcel A. G.; Nederhof, Esther

    2014-01-01

    The current study investigated the longitudinal, reciprocal associations between stressful events and psychological difficulties from early childhood to mid-adolescence. Child age, sex, prenatal maternal anxiety, and difficult temperament were tested as sources of sensitivity, that is, factors that

  19. LLNL small-scale drop-hammer impact sensitivity test

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, L.R.; Foltz, M.F.

    1995-01-01

    Small-scale safety testing of explosives and other energetic materials is done to determine their sensitivity to various stimuli including friction, static spark, and impact. This testing is typically done to discover potential handling problems for either newly synthesized materials of unknown behavior or materials that have been stored for long periods of time. This report describes the existing "ERL Type 12 Drop Weight Impact Sensitivity Apparatus", or "Drop Hammer Machine", and the methods used to determine the impact sensitivity of energetic materials. Also discussed are changes made to both the machine and methods since the inception of impact sensitivity testing at LLNL in 1956. The accumulated data for the materials tested are not listed here, the exception being the discussion of those specific materials (primary calibrants: PETN, RDX, Comp-B3, and TNT; secondary calibrants: K-6, RX-26-AF, and TATB) used to calibrate the machine.

  20. Testing the Perturbation Sensitivity of Abortion-Crime Regressions

    Directory of Open Access Journals (Sweden)

    Michał Brzeziński

    2012-06-01

    Full Text Available The hypothesis that the legalisation of abortion contributed significantly to the reduction of crime in the United States in 1990s is one of the most prominent ideas from the recent “economics-made-fun” movement sparked by the book Freakonomics. This paper expands on the existing literature about the computational stability of abortion-crime regressions by testing the sensitivity of coefficients’ estimates to small amounts of data perturbation. In contrast to previous studies, we use a new data set on crime correlates for each of the US states, the original model specifica-tion and estimation methodology, and an improved data perturbation algorithm. We find that the coefficients’ estimates in abortion-crime regressions are not computationally stable and, therefore, are unreliable.

  1. Sensitivity Analysis for DHRS Heat Exchanger Performance Tests of PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jonggan; Eoh, Jaehyuk; Kim, Dehee; Lee, Taeho; Jeong, Jiyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The STELLA-1 facility has been constructed and separate effect tests of heat exchangers for the DHRS are going to be conducted. Two kinds of heat exchangers, the DHX (shell-and-tube sodium-to-sodium heat exchanger) and the AHX (helical-tube sodium-to-air heat exchanger), will be tested for verification and validation (V&V) of the design codes. The main test points are a design point and a plant normal operation point for each heat exchanger. Additionally, some plant transient conditions are taken into account in establishing the test condition set. To choose the plant transient test conditions, a sensitivity analysis has been conducted using the design codes for each heat exchanger. The sensitivity of the PGSFR DHRS heat exchanger tests (the DHX and AHX in the STELLA-1 facility) has been analyzed through a parametric study using the design codes SHXSA and AHXSA at the design point and the plant normal operation point. The DHX heat transfer performance was sensitive to changes in the inlet temperature of the shell side, and the AHX heat transfer performance was sensitive to changes in the inlet temperature of the tube side. The results of this work will contribute to an improvement of the test matrix for the separate effect test of each heat exchanger.

  2. Advances in techniques of testing mycobacterial drug sensitivity, and the use of sensitivity tests in tuberculosis control programmes

    Science.gov (United States)

    Canetti, G.; Fox, Wallace; Khomenko, A.; Mahler, H. T.; Menon, N. K.; Mitchison, D. A.; Rist, N.; Šmelev, N. A.

    1969-01-01

    In a paper arising out of an informal international consultation of specialists in the bacteriology of tuberculosis held in 1961, an attempt was made to formulate criteria, and specify technical procedures, for reliable tests of sensitivity (the absolute-concentration method, the resistance-ratio method and the proportion method) to the 3 main antituberculosis drugs (isoniazid, streptomycin and p-aminosalicylic acid). Seven years later, a further consultation was held to review the latest developments in the field and to suggest how sensitivity tests might be put to practical use in tuberculosis control programmes. The participants reached agreement on how to define drug sensitivity and resistance, and stressed the importance of using a discrimination approach to the calibration of sensitivity tests. Their views are contained in the present paper, which also includes descriptions of the sensitivity tests used by the Medical Research Council of Great Britain for first- and second-line drugs (minimal inhibitory concentration and resistance-ratio methods), the two main variants of the proportion method developed by the Institut Pasteur, Paris, and a method for calibrating sensitivity tests. PMID:5309084

  3. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    Model-driven development strives to use formal artifacts during the development process. Formal artifacts enable automatic analyses of some aspects of the system under development. This serves to increase the understanding of the (intended) behavior of the system as well as increasing error...... detection and pushing error detection to earlier stages of development. The complexity of modeling and the size of systems which can be analyzed are severely limited when introducing data variables. The state space grows exponentially in the number of variables and the domain size of the variables...... to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...

  4. Sensitivity study on hydraulic well testing inversion using simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a randomly selected cluster of fracture elements, assigning values chosen from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct a fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.
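
The cluster-based annealing loop described above can be sketched in miniature. Everything below is an illustrative assumption rather than the CVA implementation from the study: a one-dimensional lattice stands in for the 2-D fracture network, a series-resistance sum stands in for the transient-pressure forward model, and the cooling schedule is arbitrary.

```python
import math, random

random.seed(0)

APERTURES = [1.0, 10.0]          # discrete aperture (transmissivity) levels, contrast 1:10
N, CLUSTER = 30, 3               # lattice size and (fixed) cluster size

def forward(model):
    # toy forward model: the steady "head drop" across a series of fracture
    # elements is proportional to the sum of resistances (1 / transmissivity)
    return sum(1.0 / t for t in model)

# synthetic "observed" data generated from a hidden true model
true_model = [random.choice(APERTURES) for _ in range(N)]
observed = forward(true_model)

def misfit(model):
    return (forward(model) - observed) ** 2

# cluster-variable-aperture simulated annealing
model = [APERTURES[0]] * N
T = 1.0
best = misfit(model)
for step in range(5000):
    i = random.randrange(N - CLUSTER + 1)
    trial = model[:]
    value = random.choice(APERTURES)
    for j in range(i, i + CLUSTER):      # change a whole cluster at once
        trial[j] = value
    dm = misfit(trial) - misfit(model)
    if dm < 0 or random.random() < math.exp(-dm / T):
        model = trial                    # accept downhill, or uphill with Boltzmann probability
    T *= 0.999                           # geometric cooling schedule
    best = min(best, misfit(model))

print(best < misfit([APERTURES[0]] * N))  # annealed model fits better than the start
```

Because only an aggregate observable is matched, many aperture configurations fit equally well, which mirrors the non-uniqueness of hydraulic inversion noted in the abstract.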

  5. Method of Testing the Flyer Sensitivity of Explosives

    Institute of Scientific and Technical Information of China (English)

    王桂吉; 赵剑衡

    2004-01-01

    Using Mylar flyers driven by an electric gun to shock-initiate explosives, a method of testing the flyer initiation sensitivity of explosives is studied, and experiments are reported. The experimental results show that the established test method is correct and is important and instructive for studying and evaluating the safety and reliability of explosives. For the moment, the test requires further research and discussion.

  6. Recent tests of realistic models

    Energy Technology Data Exchange (ETDEWEB)

    Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco; Gramegna, Marco; Piacentini, Fabrizio; Schettini, Valentina; Traina, Paolo, E-mail: m.genovese@inrim.i [Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Torino (Italy)

    2009-06-01

    In this article we present recent activity of our laboratories on testing specific hidden-variable models; in particular, we discuss realizations of the Alicki-van Ryn test and tests of SED and of Santos' models.

  7. Pressure-Sensitive Paints Advance Rotorcraft Design Testing

    Science.gov (United States)

    2013-01-01

    The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center s Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. 
With PSP, each pixel recorded by the camera becomes an optical pressure

  8. Establishing relative sensitivities of various toxicity testing organisms to ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Karle, L.M.; Mayhew, H.L.; Barrows, M.E.; Karls, R.K. [Battelle/Marine Sciences Lab., Sequim, WA (United States)

    1994-12-31

    The toxicity of ammonia to various organisms was examined to develop a baseline for mortality in several commonly used testing species. This baseline data will assist in choosing the proper test species and in interpreting results as they pertain to ammonia. Responses for two juvenile fish species, three marine amphipods, and two species of mysid shrimp were compared for their sensitivity to levels of ammonia. All mortality caused by ammonia in the bottom-dwelling Citharichthys stigmaeus occurred within 24 h of exposure, whereas mortality in the silverside, Menidia beryllina, occurred over the entire 96-h test duration. Responses to ammonia varied among the amphipods Rhepoxynius abronius, Ampelisca abdita, and Eohaustorius estuarius. R. abronius and A. abdita showed similar sensitivity to ammonia at lower concentrations; A. abdita appeared more sensitive than R. abronius at levels above 40 mg/L. Concentrations of ammonia required to produce significant mortality in the amphipod E. estuarius were far higher than for the other species examined (> 100 mg/L NH3). A comparison of ammonia toxicity with two commonly used invertebrates, Holmesimysis sculpts and Mysidopsis bahia, suggests that these two species of mysid have similar sensitivities to ammonia. Further studies with ammonia that examine the sensitivity of different organisms should be conducted to assist regulatory and environmental agencies in determining appropriate test species and in interpreting toxicological results as they may be affected by levels of ammonia.

  9. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...

  10. Sensitivity analysis of a sound absorption model with correlated inputs

    Science.gov (United States)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the results of the tests show that correlation has a very important impact on the results of sensitivity analysis. The effect of correlation strength among input variables on the sensitivity analysis is also assessed.
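
The Correlation Ratio Method mentioned above as the reference can be illustrated on a toy problem. The linear two-input model, the Gaussian inputs with an imposed correlation of 0.8, and the quantile-binning estimator are all assumptions for illustration, not the JCA setup from the paper.

```python
import random, statistics

random.seed(1)

# toy model: the output depends on two correlated inputs x1, x2
def model(x1, x2):
    return x1 + 0.5 * x2

# draw correlated Gaussian inputs (x2 = rho*x1 + independent noise), an assumed structure
rho, n = 0.8, 20000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [rho * a + (1 - rho**2) ** 0.5 * random.gauss(0, 1) for a in x1]
y = [model(a, b) for a, b in zip(x1, x2)]

def correlation_ratio(x, y, bins=20):
    # eta^2 = Var(E[y|x]) / Var(y), estimated by slicing x into quantile bins
    pairs = sorted(zip(x, y))
    size = len(pairs) // bins
    means, weights = [], []
    for k in range(bins):
        chunk = [b for _, b in pairs[k * size:(k + 1) * size]]
        means.append(statistics.fmean(chunk))
        weights.append(len(chunk))
    grand = statistics.fmean(y)
    var_between = sum(w * (m - grand) ** 2 for m, w in zip(means, weights)) / sum(weights)
    return var_between / statistics.pvariance(y)

eta1 = correlation_ratio(x1, y)   # first-order measure for x1
eta2 = correlation_ratio(x2, y)   # first-order measure for x2
print(round(eta1, 2), round(eta2, 2))
```

With correlated inputs, the correlation ratio of x1 absorbs part of x2's contribution, which is exactly why correlation matters for the ranking of sensitivities.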

  11. Sensitivity Test of Parameters Influencing Flood Hydrograph Routing with a Diffusion-Wave Distributed using Distributed Hydrological Model, Wet Spa, in Ziarat Watershed

    Directory of Open Access Journals (Sweden)

    narges javidan

    2017-02-01

    Full Text Available Introduction: Flood routing is a procedure to calculate flood stage and water depth along a river, or to estimate the flood hydrograph at river downstream or at reservoir outlets, using the upstream hydrograph. In river basins, excess rainfall is routed to the basin outlet using flow routing techniques to generate the flow hydrograph. A GIS-based distributed hydrological model, Wet Spa, has been under development suitable for flood prediction and watershed management on a catchment scale. The model predicts outflow hydrographs at the basin outlet or at any converging point in the watershed, and it does so in a user-specified time step. The model is physically based, spatially distributed and time-continuous, and simulates hydrological processes of precipitation, snowmelt, interception, depression, surface runoff, infiltration, evapotranspiration, percolation, interflow, groundwater flow, etc. continuously both in time and space, for which the water and energy balance are maintained on each raster cell. Surface runoff is produced using a modified coefficient method based on the cellular characteristics of slope, land use, and soil type, and allowed to vary with soil moisture, rainfall intensity and storm duration. Interflow is computed based on Darcy's law and the kinematic approximation as a function of the effective hydraulic conductivity and the hydraulic gradient, while groundwater flow is estimated with a linear reservoir method on a small subcatchment scale as a function of groundwater storage and a recession coefficient. Special emphasis is given to the overland flow and channel flow routing using the method of linear diffusive wave approximation, which is capable of predicting flow discharge at any converging point downstream by a unit response function. The model accounts for spatially distributed hydrological and geophysical characteristics of the catchment. Determination of the river flow hydrograph is a main target in hydrology

  12. [Detection of cancer, sensitivity of the test and sensitivity of the screening program].

    Science.gov (United States)

    Launoy, G; Duffy, S W; Prevost, T C; Bouvier, V

    1998-11-01

    In assessment of screening for cancer, no distinction is usually made between the sensitivity of the screening test (St) and the sensitivity of the screening program (Sp). This paper aimed to distinguish the meaning, methods of assessment and value of each, and to determine their relationship. The sensitivity of the screening program can be directly assessed with data from ongoing trials, whilst assessment of the sensitivity of the screening test requires modeling techniques, especially for assessing the mean duration of the preclinical phase of cancer. Assuming an exponential distribution of this duration, with lambda as the time parameter, a mathematical relation between St and Sp is suggested as follows: [formula: see text] with r being the interval between two screening tests. The implementation of this equation with data from a mass-screening program for colorectal cancer in the department of Calvados allowed us to investigate the influence of the mean preclinical phase duration and the interval between two screening tests on the value of the sensitivity of the screening procedure. Such modeling could be useful in the development of a rational screening policy.

  13. Sensitivity analysis of periodic matrix population models.

    Science.gov (United States)

    Caswell, Hal; Shyu, Esther

    2012-12-01

    Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments.
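
As a minimal numerical companion, the sensitivity of the annual growth rate of a periodic matrix product can be approximated by finite differences. This is a crude stand-in for the exact matrix-calculus derivatives the paper derives, and the two-season matrices below are invented for illustration.

```python
# finite-difference sensitivity of the annual growth rate of a periodic
# matrix product A = B2 @ B1 (hypothetical two-season demography)
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def growth_rate(A, iters=200):
    # power iteration for the dominant (Perron) eigenvalue of a 2x2 matrix
    v = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        w = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

B1 = [[0.1, 2.0], [0.5, 0.6]]        # breeding-season matrix (survival, fecundity, maturation)
B2 = [[0.9, 0.0], [0.0, 0.8]]        # winter survival matrix
lam0 = growth_rate(matmul(B2, B1))   # growth rate over one full cycle

h = 1e-6                             # perturb winter juvenile survival B2[0][0]
B2p = [[0.9 + h, 0.0], [0.0, 0.8]]
sens = (growth_rate(matmul(B2p, B1)) - lam0) / h
print(round(lam0, 3), round(sens, 3))
```

The key point the abstract makes carries over: the perturbation is applied at one phase of the cycle (here, winter), while the output is computed over the entire periodic product.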

  14. Beta blockers and the sensitivity of the thallium treadmill test

    Energy Technology Data Exchange (ETDEWEB)

    Martin, G.J.; Henkin, R.E.; Scanlon, P.J.

    1987-09-01

    The effect beta blockers (BB) may have on the sensitivity of the thallium treadmill test (Th-TMT) is controversial. The purpose of this study was to test the hypothesis that BB decrease the sensitivity of the Th-TMT. Two hundred three patients over a two-year period were identified who satisfied the following criteria. All had symptom-limited upright treadmill exercise tests with stress and redistribution thallium imaging, as well as coronary angiography within two months of the Th-TMT. Of 58 patients with CAD not on BB, 52 had an abnormal Th-TMT scan (sensitivity 90 percent). In comparison, the sensitivity of the Th-TMT scan in the 88 patients with CAD receiving BB was 76 percent (p < 0.05). We conclude that BB may significantly decrease the sensitivity of the Th-TMT. Physicians should fully appreciate the higher false negative rate (24 vs 10 percent) for patients on BB and consider cautious withdrawal prior to diagnostic studies.

  15. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    Full Text Available Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently of ground truth measurements, these analyses are suitable for testing the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on input factors such as topography or soil type. The analysis shows that model evaluation performed at single locations may not be representative of the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity to the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time has an impact on modeled mean annual ground temperature (MAGT that cannot be neglected (more than 1 °C for several

  16. Uncertainty and Sensitivity in Surface Dynamics Modeling

    Science.gov (United States)

    Kettner, Albert J.; Syvitski, James P. M.

    2016-05-01

    Papers for this special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' stem from papers submitted after the 2014 annual meeting of the Community Surface Dynamics Modeling System, or CSDMS. CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigates the Earth's surface, the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere, by promoting, developing, supporting and disseminating integrated open source software modules. By organizing more than 1500 researchers, CSDMS has the privilege of identifying community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of the Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.

  17. LLNL Small-Scale Friction sensitivity (BAM) Test

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, L.R.; Foltz, M.F.

    1996-06-01

    Small-scale safety testing of explosives, propellants and other energetic materials is done to determine their sensitivity to various stimuli including friction, static spark, and impact. Testing is done to discover potential handling problems for either newly synthesized materials of unknown behavior, or materials that have been stored for long periods of time. This report describes the existing "BAM" Small-Scale Friction Test, and the methods used to determine the friction sensitivity pertinent to handling energetic materials. The accumulated data for the materials tested are not listed here; that information is in a database. Included, however, is a short list of (1) materials that had an unusual response, and (2) a few "standard" materials representing the range of typical responses usually seen.

  18. Sensitivity Test of Parameters Influencing Flood Hydrograph Routing with a Diffusion-Wave Distributed using Distributed Hydrological Model, Wet Spa, in Ziarat Watershed

    OpenAIRE

    narges javidan; Abdolreza Bahremand

    2017-01-01

    Introduction: Flood routing is a procedure to calculate flood stage and water depth along a river, or to estimate the flood hydrograph at river downstream or at reservoir outlets, using the upstream hydrograph. In river basins, excess rainfall is routed to the basin outlet using flow routing techniques to generate the flow hydrograph. A GIS-based distributed hydrological model, Wet Spa, has been under development suitable for flood prediction and watershed management on a catchment scale. The mo...

  19. New test and analysis of position-sensitive-silicon-detector

    Institute of Scientific and Technical Information of China (English)

    FENG Lang; GE Vu-Cheng; WANG He; FAN Feng-Ying; QIAO Rui; LU Fei; SONG Yu-Shou; ZHENG Tao; YE Yan-Lin

    2009-01-01

    We have tested and analyzed the properties of a two-dimensional Position-Sensitive Silicon Detector (PSD) with new integrated preamplifiers. The test demonstrates that the best position resolution for 5.5 MeV α particles is 1.7 mm (FWHM), and the best energy resolution is 2.1%, which are notably better than previously reported results. A scaling formula is introduced to make the absolute position calibration.

  20. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  1. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...... symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment....

  2. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
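
The "sloppy spectrum" finding can be reproduced on the textbook toy case of fitting a sum of two exponentials with nearby rates. The rates, the time grid, and the use of J^T J as the sensitivity matrix are illustrative assumptions, not the growth-factor-signaling model from the paper.

```python
import math

# toy two-parameter model y(t) = exp(-k1 t) + exp(-k2 t): sums of exponentials
# with nearby rates are a classic example of "sloppy" parameter sensitivities
k1, k2 = 1.0, 1.2
ts = [0.1 * i for i in range(1, 51)]

# Jacobian of the model output with respect to (k1, k2) at each time point
J = [(-t * math.exp(-k1 * t), -t * math.exp(-k2 * t)) for t in ts]

# 2x2 Fisher-style sensitivity matrix H = J^T J
a = sum(r[0] * r[0] for r in J)
b = sum(r[0] * r[1] for r in J)
c = sum(r[1] * r[1] for r in J)

# eigenvalues of the symmetric matrix [[a, b], [b, c]]
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
lam_big = (a + c + disc) / 2
lam_small = (a + c - disc) / 2
print(lam_big / lam_small > 100)   # eigenvalues separated by orders of magnitude
```

Even with only two parameters the eigenvalues of J^T J are separated by more than two orders of magnitude; in larger models the spread covers many decades, which is the signature of sloppiness the abstract describes.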

  3. Sensitivity of Footbridge Response to Load Modeling

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    The paper considers a stochastic approach to modeling the actions of walking and has focus on the vibration serviceability limit state of footbridges. The use of a stochastic approach is novel but useful as it is more advanced than the quite simplistic deterministic load models seen in many design...... matter to foresee their impact. The paper contributes by examining how some of these decisions influence the outcome of serviceability evaluations. The sensitivity study is made focusing on vertical footbridge response to single person loading....

  4. Sensitivity of Footbridge Response to Load Modeling

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2012-01-01

    The paper considers a stochastic approach to modeling the actions of walking and has focus on the vibration serviceability limit state of footbridges. The use of a stochastic approach is novel but useful as it is more advanced than the quite simplistic deterministic load models seen in many design...... matter to foresee their impact. The paper contributes by examining how some of these decisions influence the outcome of serviceability evaluations. The sensitivity study is made focusing on vertical footbridge response to single person loading....

  6. Can Artemia Hatching Assay Be a (Sensitive) Alternative Tool to Acute Toxicity Test?

    Science.gov (United States)

    Rotini, A; Manfra, L; Canepa, S; Tornambè, A; Migliore, L

    2015-12-01

    Artemia sp. is extensively used in ecotoxicity testing, despite criticisms of both acute and long-term tests. Alternative endpoints and procedures should be considered to support the use of this biological model. The hatching process comprises several developmental steps, and cyst hatchability appears acceptable as an endpoint criterion. In this study, we assessed the reliability of the hatching assay on A. franciscana by comparison with acute and long-term mortality tests, using two chemicals: Diethylene Glycol (DEG) and Sodium Dodecyl Sulphate (SDS). Both the DEG and SDS tests demonstrated dose-dependent hatching inhibition. The hatching test proved more sensitive than the acute mortality test and less sensitive than the long-term one. The results demonstrate the reliability and high sensitivity of this hatching assay over a short time lag and support its application in first-tier risk assessment procedures.

  7. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-daily forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions and try to understand the important features that may have contributed to the success of the forecast.

  8. Model-Based Security Testing

    CERN Document Server

    Schieferdecker, Ina; Schneider, Martin; 10.4204/EPTCS.80.1

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, and for automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing,...

  9. Stressful events and psychological difficulties: testing alternative candidates for sensitivity.

    Science.gov (United States)

    Laceulle, Odilia M; O'Donnell, Kieran; Glover, Vivette; O'Connor, Thomas G; Ormel, Johan; van Aken, Marcel A G; Nederhof, Esther

    2014-02-01

    The current study investigated the longitudinal, reciprocal associations between stressful events and psychological difficulties from early childhood to mid-adolescence. Child age, sex, prenatal maternal anxiety, and difficult temperament were tested as sources of sensitivity, that is, factors that may make children more sensitive to stressful life events. Analyses were based on data from 10,417 children from a prospective, longitudinal study of child development. At ages 4, 7, 9, 11, and 16 years, stressful events and psychological difficulties were measured. Prenatal anxiety was measured at 32 weeks of gestation and difficult temperament was measured at 6 months. Children exposed to stressful events showed significantly increased psychological difficulties at ages 7 and 11 years; there was consistent evidence of a reciprocal pattern: psychological difficulties predicted stressful events at each stage. Analyses also indicated that the associations between stressful events and psychological difficulties were stronger in girls than in boys. We found no evidence for the hypothesis that prenatal anxiety or difficult temperament increased stress sensitivity, that is, moderated the link between life events and psychological difficulties. The findings extend prior work on stress exposure and psychological difficulties and highlight the need for additional research to investigate sources of sensitivity and the mechanisms that might underlie differences in sensitivity to stressful events.

  10. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance were the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
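The propagation scheme described above (sample each model's empirical residual distribution and push the sample through the model chain) can be sketched as follows; the residual values, the linear transposition and power models, and the 800 W/m² input are illustrative stand-ins, not the report's actual models or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical residual samples for two stages of a PV model chain; in the
# report these come from validation data, here they are stand-ins.
poa_residuals = np.array([-20.0, -8.0, 0.0, 6.0, 15.0])   # W/m^2
ac_residuals = np.array([-1.2, -0.4, 0.0, 0.5, 1.5])      # kW

def poa_model(ghi):
    """Stand-in transposition model: plane-of-array irradiance from GHI."""
    return 1.1 * ghi

def ac_model(poa):
    """Stand-in power model: AC power (kW) from POA irradiance (W/m^2)."""
    return 0.25 * poa

n = 100_000
ghi = 800.0  # a single measured global horizontal irradiance value, W/m^2

# Propagate uncertainty: add a sampled residual after each model stage
poa = poa_model(ghi) + rng.choice(poa_residuals, n)
ac = ac_model(poa) + rng.choice(ac_residuals, n)

# Relative uncertainty of the output, analogous to the ~1% figure reported
rel_spread = float(ac.std() / ac.mean())
```

Sampling the residuals stage by stage (rather than perturbing only the final output) lets the spread of each intermediate model accumulate through the chain, which is what makes the dominant stage identifiable.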

  11. Testing the Perturbation Sensitivity of Abortion-Crime Regressions

    OpenAIRE

    2012-01-01

    The hypothesis that the legalisation of abortion contributed significantly to the reduction of crime in the United States in the 1990s is one of the most prominent ideas from the recent 'economics-made-fun' movement sparked by the book Freakonomics. This paper expands on the existing literature about the computational stability of abortion-crime regressions by testing the sensitivity of coefficient estimates to small amounts of data perturbation. In contrast to previous studies, we use a new da...
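The perturbation test the abstract describes can be illustrated on synthetic data: re-estimate a regression after adding tiny random perturbations and inspect how far the coefficient moves. The variables, true slope, and perturbation scale below are hypothetical, not the abortion-crime dataset:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic one-regressor dataset standing in for the real data; true slope 2.0
n = 200
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)

def ols_slope(x, y):
    # OLS slope for a single-regressor model: cov(x, y) / var(x)
    return float(np.cov(x, y, bias=True)[0, 1] / x.var())

base = ols_slope(x, y)

# Re-estimate after small random perturbations of the dependent variable;
# a computationally stable estimate should barely move
slopes = [ols_slope(x, y + rng.normal(0.0, 0.01, n)) for _ in range(200)]
spread = max(slopes) - min(slopes)
```

A large `spread` relative to the coefficient's standard error would flag the kind of computational instability the paper investigates.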

  12. Ship Model Testing

    Science.gov (United States)

    2016-01-15

    analyzer, dual fuel, material tester, universal tester, laser scanner and 3D printer. New additions: new material testing machine with environmental chamber; new dual-fuel test bed for Haeberle Laboratory; upgrade existing...; purchase of more data acquisition equipment (i.e. FARO laser scanner, data telemetry, and velocity profiler). Table 1: Spending vs. budget.

  13. Testing and validating environmental models

    Science.gov (United States)

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series
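The abstract's argument that validity is most meaningfully judged by explicit comparison against available alternatives can be encoded as a benchmark-relative skill score. The observations, predictions, and the choice of the observed mean as the naive alternative below are illustrative assumptions:

```python
import numpy as np

# Toy observed series and model predictions (illustrative values only)
obs = np.array([2.0, 3.5, 4.0, 3.0, 5.5, 4.5])
pred = np.array([2.2, 3.3, 4.4, 2.8, 5.0, 4.6])

def mse(a, b):
    return float(((a - b) ** 2).mean())

# Explicit benchmark: the naive "always predict the observed mean" model
benchmark = np.full_like(obs, obs.mean())

# Skill relative to the benchmark (this is the Nash-Sutcliffe efficiency
# when the benchmark is the observed mean): 1 is perfect, <= 0 means the
# model is no better than the naive alternative
skill = 1.0 - mse(pred, obs) / mse(benchmark, obs)
```

Reporting `skill` against a stated benchmark replaces the vague claim of "acceptable agreement" with a number a skeptic can check.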

  14. Establishment of a sensitized canine model for kidney transplantation

    Institute of Scientific and Technical Information of China (English)

    XIE Sen; XIA Sui-sheng; TANG Li-gong; CHENG Jun; CHEN Zhi-shui; ZHENG Shan-gen

    2005-01-01

    Objective: To establish a sensitized canine model for kidney transplantation. Methods: Twelve male dogs were divided evenly into donor and recipient groups. A small number of donor canine lymphocytes was infused into a different anatomic location of the paired canine recipient each time, repeated weekly. Specific immune sensitization was monitored by means of complement-dependent cytotoxicity (CDC) and mixed lymphocyte culture (MLC) tests. When the CDC test converted to positive and the MLC test showed significant proliferation of the recipients' reactive lymphocytes, the right kidneys of the paired dogs were excised and transplanted to each other concurrently. Injury of renal allograft function was determined on schedule by ECT dynamic kidney photography and pathologic investigation. Results: The CDC test usually converted to positive, and significant proliferation of the recipients' reactive lymphocytes was also observed in the MLC test, after 3 to 4 donor lymphocyte infusions. Renal allograft function deteriorated 4 d post-operatively in 4 of 6 canine recipients, in contrast to none of the control dogs. Pathologic changes suggested delayed antibody-mediated rejection or acute rejection in 3 excised renal allografts of sensitized dogs. Seven days after the operation, all sensitized dogs had lost graft function, and pathologic examination showed that the renal allografts were severely rejected; allografts in 2 of 3 control dogs were also acutely rejected. Conclusion: Repeated stimulation with donor lymphocytes is a convenient method of inducing specific immune sensitization in canine recipients. Renal allografts in sensitized dogs are rejected earlier, with more severely deteriorated graft function.

  15. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
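A minimal illustration of why global methods that consider interactions between inputs can reveal what local, one-at-a-time methods miss (the toy model below is an assumption, not one of the Sandia models):

```python
import numpy as np

def model(x1, x2):
    # Pure interaction term: invisible to one-at-a-time local analysis
    return x1 * x2

# Local (one-at-a-time) sensitivities at the nominal point (0, 0):
# each finite-difference derivative is exactly zero
eps = 1e-6
d1 = (model(eps, 0.0) - model(0.0, 0.0)) / eps
d2 = (model(0.0, eps) - model(0.0, 0.0)) / eps

# Global view: sample both inputs jointly over U(-1, 1); the output
# variance (~1/9) is clearly nonzero, driven entirely by the interaction
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(100_000, 2))
var_total = float(model(x[:, 0], x[:, 1]).var())
```

For the nonlinear, highly interactive human-behavior models the abstract describes, this is exactly the failure mode that makes global methods worth their extra cost.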

  16. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine whether the areas of secondary hyperalgesia were sufficiently robust to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject variability was low compared to between-subject variability, and there was substantial strength of agreement between repeated induction sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small-area' (1st quartile) or 'large-area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.

  17. Analysis of sensitivity and errors in Maglev vibration test system

    Institute of Scientific and Technical Information of China (English)

    JIANG; Dong; LIU; Xukun; WANG; Deyu; YANG; Jiaxiang

    2016-01-01

    In order to improve the working performance of Maglev vibration test systems, the relationships of operating parameters between different components and the system were studied. The working principle of the photoelectric displacement sensor was analyzed, and the relationship between the displacement of the transducer and the infrared light area received by the sensor was given. A method of expanding the dynamic range of the vibrator was proposed, which doubles the dynamic range of the Maglev vibrator; by increasing the amplification of the amplifier, the sensitivity of the photoelectric displacement sensor can be maintained. Two modes of operation of the controller were analyzed. Bilateral operation of the designed vibration test system can further improve the stability of the system. An object's vibration was measured by the designed Maglev vibration test system under different vibration exciter frequencies. Experiments show that the output frequency measured by the Maglev vibration test system and the loaded frequency are the same. Finally, the errors of the test system were analyzed; they meet the requirements of the application. The results lay the foundation for the practical application of the magnetic levitation vibration test system.

  18. Oblique impact sensitivity of explosives: The skid test the snatch friction sensitivity test. Quarterly report, April--June 1964

    Energy Technology Data Exchange (ETDEWEB)

    Akst, I.B.; Washburn, B.M.; Rigdon, J.K.

    1997-09-01

    The oblique impact sensitivity of UK-simulated HMX in 85 to 90% formulation with Viton is not enough lower, if at all, to encourage richer formulations or a change to Bridgewater processes for this reason alone. Fifty-pound cyclotol 75/25 hemispheres gave moderate reactions (No. 4) as low as 3.5 feet (14°); lower tests have not been performed yet. "Reduced-H.E." pieces of PBX 9404, 2, 3, 4, and 5 inches thick, respectively, were tested at 1.75 feet (14°), resulting in a No. 6 reaction for the 5-inch-thick piece, while the remaining three pieces gave 0 reactions.

  19. Direct Sensitivity Test of the MB/BacT System

    Directory of Open Access Journals (Sweden)

    Barreto Angela Maria Werneck

    2002-01-01

    Full Text Available In order to evaluate the direct-method test of sensitivity to the drugs used in the principal tuberculosis treatment regimens, in the Organon Teknika MB/BacT system, we tested 50 sputum samples positive by microscopy taken from patients with pulmonary tuberculosis and with clinical indications for an antibiogram, admitted sequentially for examination during the routine of the reference laboratory. The material was treated v/v with 23% trisodium phosphate solution, incubated for 24 h at 35°C, and neutralized v/v with 20% monosodium phosphate solution. The material was then centrifuged and the sediment inoculated into flasks containing rifampin (2 µg/ml), isoniazid (0.2 µg/ml), pyrazinamide (100 µg/ml), ethambutol (2.5 µg/ml), ethionamide (1.25 µg/ml), and streptomycin (2 µg/ml). The tests were evaluated using the indirect method in the BACTEC 460 TB (Becton Dickinson) system as the gold standard. The results showed that the rifampin test performed best, i.e., 100% sensitivity (95% confidence interval: 82.2-100) and 100% specificity (95% confidence interval: 84.5-100), followed by isoniazid and pyrazinamide. In this experiment, 92% of the materials showed a final reading in 30 days; this period covers the time for primary isolation as well as the results of the sensitivity profile, and is within Centers for Disease Control and Prevention recommendations regarding time for performance of the antibiogram. The inoculated flasks showed no contamination during the experiment. The MB/BacT is shown to be a reliable, rapid, fully automated nonradiometric system for the tuberculosis antibiogram.
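The reported pattern of 100% sensitivity with a lower 95% confidence limit near 82 arises naturally from an exact binomial interval when all results agree: the Clopper-Pearson lower limit for x = n successes is (α/2)^(1/n). A sketch with hypothetical counts (not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: true positives among diseased samples;
    # specificity: true negatives among non-diseased samples
    return tp / (tp + fn), tn / (tn + fp)

def clopper_pearson_lower_at_full_agreement(n, alpha=0.05):
    # Exact (Clopper-Pearson) lower 95% limit when all n results agree:
    # solve p**n = alpha/2 for p
    return (alpha / 2.0) ** (1.0 / n)

# Illustrative counts: 19 resistant and 20 susceptible isolates,
# all classified correctly against the gold standard
sens, spec = sensitivity_specificity(tp=19, fn=0, tn=20, fp=0)
lower = clopper_pearson_lower_at_full_agreement(19)   # ~0.82 for n = 19
```

The width of such an interval is driven entirely by n, which is why small validation panels report perfect point estimates with wide confidence bounds.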

  20. Sensitivity Study of Stochastic Walking Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2010-01-01

    On flexible structures such as footbridges and long-span floors, walking loads may generate excessive structural vibrations and serviceability problems. The problem is increasing because of the growing tendency to employ long spans in structural design. In many design codes, the vibration...... serviceability limit state is assessed using a walking load model in which the walking parameters are modelled deterministically. However, the walking parameters are stochastic (for instance the weight of the pedestrian is not likely to be the same for every footbridge crossing), and a natural way forward...... investigates whether statistical distributions of bridge response are sensitive to some of the decisions made by the engineer doing the analyses. For the paper a selected part of potential influences are examined and footbridge responses are extracted using Monte-Carlo simulations and focus is on estimating...
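A Monte-Carlo treatment of stochastic walking parameters of the kind described can be sketched for a single resonant mode. The modal mass, damping ratio, and parameter distributions below are assumed for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed modal properties of a hypothetical footbridge
modal_mass = 40_000.0   # kg
zeta = 0.005            # damping ratio

# Stochastic walking parameters, modelled as random variables rather than
# the deterministic values used in many design codes
n = 50_000
weight = rng.normal(750.0, 125.0, n)   # pedestrian weight, N
dlf = rng.normal(0.4, 0.1, n)          # first-harmonic dynamic load factor

# Steady-state acceleration amplitude of a resonantly excited SDOF mode
accel = dlf * weight / (2.0 * zeta * modal_mass)   # m/s^2

# Design-relevant statistic from the response distribution
a95 = float(np.percentile(accel, 95))
```

The output is a distribution of bridge response rather than a single number, so serviceability can be assessed at a chosen percentile, which is the essence of the stochastic approach.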

  1. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    Science.gov (United States)

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
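The screening step can be sketched by computing mean absolute elementary effects; this is a simplified one-at-a-time variant rather than the full Morris trajectory design, and the toy model below is an assumption, not the cocoa case study:

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    # Toy model: x[0] and x[1] matter (with an interaction), x[2] barely does
    return x[0] ** 2 + 2.0 * x[0] * x[1] + 0.01 * x[2]

k, r, delta = 3, 20, 0.25   # parameters, repetitions, step size

# For each repetition: random base point, perturb each parameter once
effects = np.zeros((r, k))
for i in range(r):
    x = rng.uniform(0.0, 1.0 - delta, k)
    base = model(x)
    for j in range(k):
        xp = x.copy()
        xp[j] += delta
        effects[i, j] = (model(xp) - base) / delta

mu_star = np.abs(effects).mean(axis=0)   # screening statistic mu*
```

Parameters with small `mu_star` are screened out before the more expensive contribution-to-variance step, which is the point of the two-step approach.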

  2. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation in which calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may itself become computationally expensive for large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different
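The bootstrapping baseline the abstract mentions (resampling sensitivity results to judge the reliability of the estimated indexes) can be sketched as follows; the elementary-effect values and the mu* index are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend these are r elementary effects of one parameter from a finished
# sensitivity-analysis run (synthetic values)
effects = rng.normal(1.75, 0.5, size=40)

def mu_star(samples):
    # Mean absolute elementary effect, the usual Morris screening index
    return float(np.abs(samples).mean())

point_estimate = mu_star(effects)

# Bootstrap: resample the effects with replacement and recompute the index;
# a narrow percentile interval suggests the index has converged
boot = np.array([
    mu_star(rng.choice(effects, size=effects.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Note that the 2000 recomputations above are cheap only because the index is a simple mean; for large model outputs this is exactly the cost the MVA approach avoids.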

  3. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    Science.gov (United States)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis research is not accurate enough and has limited reference value, because the mathematical models used are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification was conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and the measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm, respectively. The simulation results indicate that the developed nonlinear mathematical model is sufficient, by comparison of the characteristic curves of the experimental and simulated step responses under different constant loads. Then, the sensitivity-function time-history curves of seventeen parameters are obtained from each state-vector time-history curve of the step-response characteristic. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and the change rules are analyzed. Then the sensitivity
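Sensitivity functions of the kind derived in the paper can be approximated by finite differences along a response trajectory, then collapsed into a scalar index such as the maximum absolute displacement variation. The first-order lag below is a stand-in for the actual nonlinear hydraulic model:

```python
import numpy as np

def step_response(tau, t):
    # First-order lag stand-in for the hydraulic drive unit's step response
    return 1.0 - np.exp(-t / tau)

t = np.linspace(0.0, 2.0, 201)
tau, dtau = 0.5, 1e-6   # nominal time constant and small perturbation

# Finite-difference approximation of the sensitivity function dx/dtau,
# evaluated along the whole response trajectory
sens = (step_response(tau + dtau, t) - step_response(tau, t)) / dtau

# Scalar sensitivity index: maximum absolute displacement variation
index = float(np.abs(sens).max())   # analytically e**-1 / tau ~ 0.736
```

In the paper the same idea is applied parameter by parameter to the full state-vector time histories, with the two scalar indexes compared across working conditions.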

  4. Animal models to study gluten sensitivity.

    Science.gov (United States)

    Marietta, Eric V; Murray, Joseph A

    2012-07-01

    The initial development and maintenance of tolerance to dietary antigens is a complex process that, when prevented or interrupted, can lead to human disease. Understanding the mechanisms by which tolerance to specific dietary antigens is attained and maintained is crucial to our understanding of the pathogenesis of diseases related to intolerance of specific dietary antigens. Two diseases that are the result of intolerance to a dietary antigen are celiac disease (CD) and dermatitis herpetiformis (DH). Both of these diseases are dependent upon the ingestion of gluten (the protein fraction of wheat, rye, and barley) and manifest in the gastrointestinal tract and skin, respectively. These gluten-sensitive diseases are two examples of how devastating abnormal immune responses to a ubiquitous food can be. The well-recognized risk genotype for both is conferred by either of the HLA class II molecules DQ2 or DQ8. However, only a minority of individuals who carry these molecules will develop either disease. Also of interest is that the age at diagnosis can range from infancy to 70-80 years of age. This would indicate that intolerance to gluten may potentially be the result of two different phenomena. The first would be that, for various reasons, tolerance to gluten never developed in certain individuals, but that for other individuals, prior tolerance to gluten was lost at some point after childhood. Of recent interest is the concept of non-celiac gluten sensitivity, which manifests as chronic digestive or neurologic symptoms due to gluten, but through mechanisms that remain to be elucidated. This review will address how animal models of gluten-sensitive disorders have substantially contributed to a better understanding of how gluten intolerance can arise and cause disease.

  5. Animal Models to Study Gluten Sensitivity

    Science.gov (United States)

    Marietta, Eric V.; Murray, Joseph A.

    2012-01-01

    The initial development and maintenance of tolerance to dietary antigens is a complex process that, when prevented or interrupted, can lead to human disease. Understanding the mechanisms by which tolerance to specific dietary antigens is attained and maintained is crucial to our understanding of the pathogenesis of diseases related to intolerance of specific dietary antigens. Two diseases that are the result of intolerance to a dietary antigen are celiac disease (CD) and dermatitis herpetiformis (DH). Both of these diseases are dependent upon the ingestion of gluten (the protein fraction of wheat, rye, and barley) and manifest in the gastrointestinal tract and skin, respectively. These gluten-sensitive diseases are two examples of how devastating abnormal immune responses to a ubiquitous food can be. The well-recognized risk genotype for both is conferred by either of the HLA class II molecules DQ2 or DQ8. However, only a minority of individuals who carry these molecules will develop either disease. Also of interest is that the age at diagnosis can range from infancy to 70–80 years of age. This would indicate that intolerance to gluten may potentially be the result of two different phenomena. The first would be that, for various reasons, tolerance to gluten never developed in certain individuals, but that for other individuals, prior tolerance to gluten was lost at some point after childhood. Of recent interest is the concept of non-celiac gluten sensitivity, which manifests as chronic digestive or neurologic symptoms due to gluten, but through mechanisms that remain to be elucidated. This review will address how animal models of gluten-sensitive disorders have substantially contributed to a better understanding of how gluten intolerance can arise and cause disease. PMID:22572887

  6. Model testing of Wave Dragon

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    Previous to this project a scale model 1:50 of the wave energy converter (WEC) Wave Dragon was built by the Danish Maritime Institute and tested in a wave tank at Aalborg University (AAU). The test programs investigated the movements of the floating structure, mooring forces and forces in the reflectors. The first test was followed by test establishing the efficiency in different sea states. The scale model has also been extensively tested in the EU Joule Craft project JOR-CT98-7027 (Low-Pressure Turbine and Control Equipment for Wave Energy Converters /Wave Dragon) at University College Cork, Hydraulics and Maritime Research Centre, Ireland. The results of the previous model tests have formed the basis for a redesign of the WEC. In this project a reconstruction of the scale 1:50 model and sequential tests of changes to the model geometry and mass distribution parameters will be performed. AAU will make the modifications to the model based on the revised Loewenmark design and perform the tests in their wave tank. Grid connection requirements have been established. A hydro turbine with no movable parts besides the rotor has been developed and a scale model 1:3.5 tested, with a high efficiency over the whole head range. The turbine itself has possibilities for being used in river systems with low head and variable flow, an area of interest for many countries around the world. Finally, a regulation strategy for the turbines has been developed, which is essential for the future deployment of Wave Dragon.The video includes the following: 1. Title, 2. Introduction of the Wave Dragon, 3. Model test series H, Hs = 3 m, Rc = 3 m, 4. Model test series H, Hs = 5 m, Rc = 4 m, 5. Model test series I, Hs = 7 m, Rc = 1.25 m, 6. Model test series I, Hs = 7 m, Rc = 4 m, 7. Rolling title. On this VCD additional versions of the video can be found in the directory 'addvideo' for playing the video on PC's. These versions are: Model testing of Wave Dragon, DVD version

  7. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, and for automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  8. Sensitivity of The Dynamic Visual Acuity Test To Sensorimotor Change

    Science.gov (United States)

    Cohen, Helen; Bloomberg, Jacob; Elizalde, Elizabeth; Fregia, Melody

    1999-01-01

    Post-flight astronauts, patients acutely after vestibular nerve section, and patients with severe chronic bilateral vestibular deficits have oscillopsia caused by reduced vestibulo-ocular reflex gains and decreased postural stability. Therefore, as previous work has shown, a test of dynamic visual acuity (DVA), in which the subject must read numbers from a computer screen while standing still or walking in place, provides a composite measure of sensorimotor integration. This measure may be useful for determining the level of recovery post-flight, post-operatively, or after vestibular rehabilitation. To determine the sensitivity of DVA to change in impaired populations, we tested patients with acoustic neuromas before and during the first post-operative week after resection of the tumors, and bilaterally labyrinthine-deficient subjects before and after six weeks of balance rehabilitation therapy.

  9. State of the art in non-animal approaches for skin sensitization testing: from individual test methods towards testing strategies.

    Science.gov (United States)

    Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J

    2016-12-01

    The hazard assessment of skin sensitizers still relies mainly on animal testing, but much progress has been made in the development, validation, regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods mostly address a single mechanistic step of the induction of skin sensitization. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1), protein binding; KE2, keratinocyte activation; KE3, dendritic cell activation; and KE4, T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurately than the currently used animal test. A few defined approaches are developed to provide an

  10. In vitro skin irritation testing: Improving the sensitivity of the EpiDerm skin irritation test protocol.

    Science.gov (United States)

    Kandárová, Helena; Hayden, Patrick; Klausner, Mitch; Kubilus, Joseph; Kearney, Paul; Sheasgreen, John

    2009-12-01

    A skin irritation test (SIT) utilising a common protocol for two in vitro reconstructed human epidermal (RhE) models, EPISKIN and EpiDerm, was developed, optimised and evaluated as a replacement for the in vivo rabbit skin irritation test in an ECVAM-sponsored validation study. In 2007, both RhE models were recognised by an independent peer-review panel and the ECVAM Scientific Advisory Committee (ESAC) as validated for use with the common SIT protocol. The EPISKIN SIT was endorsed as a full replacement of the in vivo rabbit test. Since the EpiDerm SIT proved to be less sensitive than the in vivo test and the EPISKIN SIT, it was recognised as a validated component of a tiered testing strategy, in which positive results are accepted and negative results require further confirmation. The ESAC, in its April 2007 statement, also recommended increasing the sensitivity of the EpiDerm SIT in order to gain full acceptance. Analysis of the EpiDerm and EPISKIN data from the ECVAM validation study indicated that the lower sensitivity of the EpiDerm SIT might be linked to the more robust barrier properties of the EpiDerm model. This hypothesis was also in line with previously published results. To overcome the relatively low sensitivity of the EpiDerm protocol, a hindrance to full regulatory acceptance, the exposure conditions in the protocol were modified to achieve better agreement with the in vivo rabbit data. In the Modified EpiDerm SIT protocol, the test chemical exposure time was increased from 15 minutes to 60 minutes, and part of the exposure was performed at 37 degrees C. When the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) viability assay endpoint was used for classification, a significant increase in sensitivity was obtained (86.1%), whilst maintaining the high specificity of the method (76.3%). With the change to the EU classification system, which now uses a higher cut-off for the classification of

  11. Odor-Specific Loss of Smell Sensitivity with Age as Revealed by the Specific Sensitivity Test.

    Science.gov (United States)

    Seow, Yi-Xin; Ong, Peter K C; Huang, Dejian

    2016-07-01

    The perception of odor mixtures plays an important role in human food intake, behavior, and emotions. The decline of smell acuity with normal aging could impact food perception and preferences at various ages. However, since the landmark Smell Survey by National Geographic, little has been elucidated about differences in the onset and extent of loss in olfactory sensitivity toward single odorants. Here, using the Specific Sensitivity test, we show that the onset and extent of age-related loss in both identification rates and detection thresholds are odorant-specific. Subjects of Chinese descent in Singapore (186 women, 95 men), aged 21-80 years, were assessed for olfactory sensitivity to 10 odorants from various odor groups. Notably, subjects in their 70s required 179 times the concentration of a rose-like odorant (2-phenylethanol) needed by subjects in their 20s, while thresholds for the onion-like 2-methyloxolane-3-thiol differed only by a factor of 3 between these age groups. In addition, the identification rate for 2-phenylethanol was negatively correlated with age throughout adult life, whereas the mushroom-like oct-1-en-3-ol was identified equally well by subjects of all ages. Our results demonstrate the extent of differentiated olfactory loss due to normal ageing, which potentially affects the overall perception and preference of odor mixtures with age. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Modelling survival: exposure pattern, species sensitivity and uncertainty.

    Science.gov (United States)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G

    2016-07-06

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
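
    A minimal sketch of the class of model GUTS unifies (the stochastic-death variant, heavily simplified, with no background mortality; all parameter values below are invented for illustration): scaled damage follows dD/dt = k_d(C(t) - D), the hazard rate is h(t) = k_k * max(D - z, 0), and survival is S(t) = exp(-cumulative hazard). Time-variable (pulsed) exposure enters simply as the concentration series C(t):

```python
import math

def guts_sd_survival(conc, dt, kd, kk, z):
    """Survival probability over time under a simplified GUTS-SD model.

    conc : exposure concentration per time step (time-variable exposure)
    kd   : dominant rate constant (1/time); kk : killing rate; z : threshold
    """
    damage, cumulative_hazard = 0.0, 0.0
    for c in conc:
        damage += dt * kd * (c - damage)                 # scaled damage kinetics
        cumulative_hazard += dt * kk * max(damage - z, 0.0)
        yield math.exp(-cumulative_hazard)               # survival probability

# Two exposure pulses separated by a recovery interval (invented values):
exposure = [1.0] * 50 + [0.0] * 100 + [1.0] * 50
survival = list(guts_sd_survival(exposure, dt=0.1, kd=0.5, kk=0.3, z=0.2))
print(f"survival after pulse 1: {survival[49]:.3f}, at end: {survival[-1]:.3f}")
```

    Because the hazard only accrues while damage exceeds the threshold z, the same total dose delivered in different pulse patterns yields different survival, which is the point the abstract makes about exposure profiles.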

  14. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
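
    The correlation-based part of such a study can be sketched in a few lines. The surrogate response below is invented (it is NOT the BISON model): a stand-in "centerline temperature" that rises with linear power and falls with gap conductance, sampled 300 times as in the study:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def rank(values):
    """Ranks of the values (ties ignored); Spearman = Pearson on ranks."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    for r, idx in enumerate(order):
        ranks[idx] = float(r)
    return ranks

# Hypothetical surrogate for a fuel-performance response (not BISON):
rng = random.Random(1)
power = [rng.uniform(15, 25) for _ in range(300)]     # linear power, kW/m
gap_k = [rng.uniform(0.5, 1.5) for _ in range(300)]   # scaled gap conductance
temp = [900 + 20 * p - 150 * g + rng.gauss(0, 5) for p, g in zip(power, gap_k)]

for name, x in (("power", power), ("gap conductance", gap_k)):
    print(f"{name}: Pearson={pearson(x, temp):+.2f}, "
          f"Spearman={pearson(rank(x), rank(temp)):+.2f}")
```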

  15. Testing the model for testing competency.

    Science.gov (United States)

    Keating, Sarah B; Rutledge, Dana N; Sargent, Arlene; Walker, Polly

    2003-05-01

    The pilot study to demonstrate the utility of the CBRDM in the practice setting was successful. Using a matrix evaluation tool based on the model's competencies, evaluators were able to observe specific performance behaviors of senior nursing students and new graduates at either the novice or competent levels. The study faced the usual perils of pilot studies, including small sample size, a limited number of items from the total CBRDM, restricted financial resources, inexperienced researchers, unexpected barriers, and untested evaluation tools. It was understood from the beginning of the study that the research would be based on a program evaluation model, analyzing both processes and outcomes. However, the meager data findings led to the desire to continue to study use of the model for practice setting job expectations, career planning for nurses, and curriculum development for educators. Although the California Strategic Planning Committee for Nursing no longer has funding, we hope that others interested in role differentiation issues will take the results of this study and test the model in other practice settings. Its ability to measure higher levels of competency as well as novice and competent should be studied, i.e., proficient, expert, and advanced practice. The CBRDM may be useful in evaluating student and nurse performance, defining role expectations, and identifying the preparation necessary for the roles. The initial findings related to the two functions as leader and teacher in the care provider and care coordinator roles led to much discussion about helping students and nurses develop competence. Additional discussion focused on the roles as they apply to settings such as critical care or primary health care. The model is useful for all of nursing as it continues to define its levels of practice and their relationship to on-the-job performance, curriculum development, and career planning.

  16. Sensitivity analysis of fine sediment models using heterogeneous data

    Science.gov (United States)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    model output and SPM values estimated from remote sensing was carried out. The analysis helped in identifying the optimal parameter values and the spatial and seasonal dimensions of the model error. The third phase focused on investigating the uncertainties of the numerical model's predictions. This research allowed for testing an approach with a coordinated use of available data from various sources, together with the corresponding sensitivity analysis exercise. Keywords: sediment, sensitivity analysis, Dutch coast, MODIS, remote sensing, data-model integration, uncertainty analysis.

  17. Improved environmental multimedia modeling and its sensitivity analysis.

    Science.gov (United States)

    Yuan, Jing; Elektorowicz, Maria; Chen, Zhi

    2011-01-01

    Modeling of multimedia environmental issues is extremely complex due to the intricacy of the systems and the many factors to be considered. In this study, an improved environmental multimedia model is developed, and a number of test problems related to it are examined and compared with standard numerical and analytical methodologies. The results indicate that the flux output of the new model is lower in the unsaturated and groundwater zones than that of the traditional environmental multimedia model. Furthermore, about 90% of the total benzene flux was distributed to the air zone from the landfill sources, and only 10% of the total flux was emitted into the unsaturated and groundwater zones under non-uniform conditions. This paper also includes a model sensitivity analysis to optimize model parameters such as the Peclet number (Pe). The analysis results show that Pe can be considered a deterministic input variable for transport output. The oscillatory behavior is eliminated as Pe decreases. In addition, the numerical methods are more accurate than the analytical methods as Pe increases. In conclusion, the improved environmental multimedia model system and its sensitivity analysis can be used to address the complex fate and transport of pollutants in multimedia environments and thereby help to manage environmental impacts.
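
    The Peclet-number behaviour described above can be illustrated with the standard cell-Peclet criterion for 1-D advection-diffusion (a textbook result, not taken from the paper): central differencing produces non-physical oscillations when the cell Peclet number Pe_cell = u*dx/D exceeds 2, so refining the grid removes the oscillation:

```python
def cell_peclet(velocity, dx, diffusivity):
    """Cell (grid) Peclet number for 1-D advection-diffusion."""
    return velocity * dx / diffusivity

def central_scheme_oscillates(velocity, dx, diffusivity):
    """Central differencing yields oscillatory solutions when Pe_cell > 2."""
    return cell_peclet(velocity, dx, diffusivity) > 2.0

# Refining the grid lowers Pe_cell below 2 and removes the oscillation
# (velocity and diffusivity values are invented):
for dx in (1.0, 0.1):
    pe = cell_peclet(velocity=1.0, dx=dx, diffusivity=0.1)
    print(f"dx={dx}: Pe_cell={pe:.1f}, "
          f"oscillatory={central_scheme_oscillates(1.0, dx, 0.1)}")
```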

  18. Global sensitivity analysis on vertical model of railway vehicle based on extended Fourier amplitude sensitivity test

    Institute of Scientific and Technical Information of China (English)

    余衍然; 李成; 姚林泉; 朱忠奎

    2014-01-01

    To screen out the design parameters that significantly affect the vertical vibration of a railway vehicle, and thereby simplify the parameter optimization of the vehicle model, a global sensitivity analysis method, the extended Fourier amplitude sensitivity test (EFAST), was introduced and applied to the parameter sensitivity analysis of a typical railway vehicle vertical model. The results show that variations in the secondary suspension parameters and the primary suspension damping have a greater influence on the car body's bounce motion than the primary suspension stiffness; the car body's pitch motion can be improved considerably by adjusting the distance between bogie pivot centers; and the interactions between design parameters in the model are weak. The method and conclusions presented here have reference value and engineering significance for studying the vibration characteristics and optimizing the parameters of railway vehicle models.
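
    EFAST estimates variance-based first-order sensitivity indices efficiently via a frequency-domain search curve. As a simpler, didactic stand-in for the quantity it estimates, the brute-force definition S_i = Var(E[Y|X_i]) / Var(Y) can be computed directly for a toy model (the model and all values below are invented, not the paper's vehicle model):

```python
import random
from statistics import mean, pvariance

def first_order_index(model, i, n_outer=200, n_inner=200, dims=3, seed=0):
    """Brute-force S_i = Var_{x_i}(E[Y | x_i]) / Var(Y), inputs ~ U(0, 1).

    EFAST recovers the same indices far more cheaply from a single
    frequency-tagged sample set; this double loop is purely illustrative.
    """
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # fix the i-th input
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(dims)]
            x[i] = xi                          # vary all the other inputs
            ys.append(model(x))
        cond_means.append(mean(ys))
        all_y.extend(ys)
    return pvariance(cond_means) / pvariance(all_y)

# Toy response heavily dominated by x0 (invented coefficients):
model = lambda x: 10 * x[0] + 1 * x[1] + 0.1 * x[2]
for i in range(3):
    print(f"S_{i} ~= {first_order_index(model, i):.2f}")
```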

  19. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-02-01

    Full Text Available Before operational use or for decision making, models must be validated, and the degree of trust in model outputs should be quantified. Model validation is often performed at single locations due to the lack of spatially distributed data. Since the analysis of parametric model uncertainties can be performed independently of observations, it is a suitable method to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainty of a physically based mountain permafrost model are quantified within an artificial topography consisting of different elevations and exposures combined with six ground types characterized by their hydraulic properties. The analyses, performed for all combinations of topographic factors and ground types, allowed us to quantify the variability of model sensitivity and uncertainty within mountain regions. We found that modeled snow duration considerably influences the mean annual ground temperature (MAGT). The melt-out day of snow (MD) is determined by the processes governing snow accumulation and melting. Parameters such as the temperature and precipitation lapse rates and the snow correction factor therefore have a great impact on modeled MAGT. Ground albedo changes MAGT by 0.5 to 4°C depending on the elevation, the aspect and the ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes, since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter snow cover. Snow albedo and other parameters determining the amount of reflected solar radiation are important, changing MAGT at different depths by more than 1°C. Parameters influencing the turbulent fluxes, such as the roughness length or the dew temperature, are more sensitive at low-elevation sites due to higher air temperatures and decreased solar radiation. Modeling the individual terms of the energy

  20. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with different inlet and outlet geometries. The filling method is a very important parameter for the strength of the mass and thereby the pressures
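
    The classical reference model for pressures in a silo fill is Janssen's equation (standard bulk-solids theory, not taken from the record): the vertical stress saturates with depth as sigma_v(z) = (rho*g*R / (mu*K)) * (1 - exp(-mu*K*z / R)), with hydraulic radius R, wall friction coefficient mu and lateral pressure ratio K, unlike the linear hydrostatic profile rho*g*z:

```python
import math

def janssen_vertical_stress(z, rho, radius_hydraulic, mu_wall, k_lateral, g=9.81):
    """Janssen's equation: vertical stress (Pa) at depth z in a silo fill."""
    sigma_inf = rho * g * radius_hydraulic / (mu_wall * k_lateral)  # asymptote
    return sigma_inf * (1.0 - math.exp(-mu_wall * k_lateral * z / radius_hydraulic))

# Illustrative sand-like parameters (invented, not the test values):
params = dict(rho=1600.0, radius_hydraulic=0.5, mu_wall=0.4, k_lateral=0.45)
for depth in (1.0, 5.0, 20.0):
    sigma = janssen_vertical_stress(depth, **params)
    print(f"z={depth:5.1f} m: sigma_v={sigma / 1000:.1f} kPa")
```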

  1. The Hug-up Test: A New, Sensitive Diagnostic Test for Supraspinatus Tears

    Institute of Scientific and Technical Information of China (English)

    Yu-Lei Liu; Ying-Fang Ao; Hui Yan; Guo-Qing Cui

    2016-01-01

    Background: The supraspinatus tendon is the most commonly affected tendon in rotator cuff tears. Early detection of a supraspinatus tear using an accurate physical examination is, therefore, important. However, the currently used physical tests for detecting supraspinatus tears are poor diagnostic indicators and involve a wide range of sensitivity and specificity values. Therefore, the aim of this study was to establish a new physical test for the diagnosis of supraspinatus tears and evaluate its accuracy in comparison with conventional tests. Methods: Between November 2012 and January 2014, 200 consecutive patients undergoing shoulder arthroscopy were prospectively evaluated preoperatively. The hug-up test, empty can (EC) test, full can (FC) test, Neer impingement sign, and Hawkins-Kennedy impingement sign were used and compared statistically for their accuracy in terms of supraspinatus tears, with arthroscopic findings as the gold standard. Muscle strength was precisely quantified using an electronic digital tensiometer. Results: The prevalence of supraspinatus tears was 76.5%. The hug-up test demonstrated the highest sensitivity (94.1%), with a low negative likelihood ratio (NLR, 0.08) and comparable specificity (76.6%) compared with the other four tests. The area under the receiver operating characteristic curve for the hug-up test was 0.854, with no statistical difference compared with the EC test (z = 1.438, P = 0.075) or the FC test (z = 1.498, P = 0.067). The hug-up test showed no statistical difference in terms of detecting different tear patterns according to the position (χ2 = 0.578, P = 0.898) and size (Fisher's exact test, P > 0.999) compared with the arthroscopic examination. The interobserver reproducibility of the hug-up test was high, with a kappa coefficient of 0.823. Conclusions: The hug-up test can accurately detect supraspinatus tears with a high sensitivity, comparable specificity, and low NLR compared with the conventional clinical tests and could, therefore, improve the
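
    The reported accuracy figures are linked by the standard diagnostic-test definitions (textbook relations, not from the paper): sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), and the negative likelihood ratio NLR = (1 - sensitivity)/specificity. A quick consistency check of the reported NLR:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and negative likelihood ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    nlr = (1.0 - sensitivity) / specificity
    return sensitivity, specificity, nlr

# Plugging in the reported rates (94.1% sensitivity, 76.6% specificity):
nlr = (1.0 - 0.941) / 0.766
print(f"NLR = {nlr:.2f}")  # consistent with the reported 0.08
```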

  3. High Speed Pressure Sensitive Paint for Dynamic Testing

    Science.gov (United States)

    Pena, Carolina; Chism, Kyle; Hubner, Paul

    2016-11-01

    Pressure sensitive paint (PSP) allows engineers to obtain accurate, high-spatial-resolution measurements of pressure fields over a structure. The pressure is directly related to the luminescence emitted by the paint through oxygen quenching. Fast PSP is porous and therefore has a higher surface area than conventional PSP, which enables faster oxygen diffusion and allows measurements to be acquired three orders of magnitude faster. A fast time response is needed when testing vibrating structures due to fluid-structure interaction. The goal of this summer project was to set up, test and analyze the pressure field of an impinging air jet on a vibrating cantilever beam using Fast PSP. Software routines were developed for processing the emission images; videos of a static beam coated with Fast PSP were acquired with the air jet on and off, and the intensities of the two cases were ratioed and calibrated to pressure. Going forward, unsteady pressures on a vibrating beam will be measured and presented. Eventually, the long-term goal is to integrate luminescent pressure and strain measurement techniques, simultaneously using Fast PSP and a luminescent photoelastic coating on vibrating structures. Funding from NSF REU site Grant EEC 1358991 is greatly appreciated.
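
    PSP intensity ratios are conventionally converted to pressure via the Stern-Volmer relation I_ref/I = A + B*(P/P_ref) (standard PSP practice, not stated in the abstract; the calibration coefficients below are invented):

```python
def pressure_from_intensity(i_ref, i, p_ref, a, b):
    """Invert the Stern-Volmer relation I_ref/I = A + B*(P/P_ref) for pressure."""
    return p_ref * ((i_ref / i) - a) / b

# Hypothetical calibration coefficients (from a wind-off/wind-on calibration);
# they satisfy A + B = 1, so the wind-off image recovers the reference pressure.
A, B = 0.18, 0.82
P_REF = 101325.0  # reference (wind-off) pressure, Pa
print(pressure_from_intensity(i_ref=1.0, i=1.0, p_ref=P_REF, a=A, b=B))
```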

  4. Local defect resonance for sensitive non-destructive testing

    Science.gov (United States)

    Adebahr, W.; Solodov, I.; Rahammer, M.; Gulnizkij, N.; Kreutzbruck, M.

    2016-02-01

    Ultrasonic wave-defect interaction is the basis of ultrasound-activated techniques for imaging and non-destructive testing (NDT) of materials and industrial components. The interaction primarily results in an acoustic response of a defect, which provides the attenuation and scattering of ultrasound used as defect indicators in conventional ultrasonic NDT. Derived ultrasound-induced effects include, e.g., nonlinear, thermal and acousto-optic responses, which are also applied for NDT and defect imaging. These secondary effects are normally relatively inefficient, so the corresponding NDT techniques require elevated acoustic power and differ from their conventional ultrasonic NDT counterparts in needing instrumentation particularly adapted to high-power ultrasonics. In this paper, a consistent way to enhance ultrasonic, optical and thermal defect responses, and thus to reduce the required ultrasonic power, is suggested by using selective ultrasonic activation of defects based on the concept of local defect resonance (LDR). A strong increase in vibration amplitude at LDR makes it possible to reliably detect and visualize a defect as soon as the driving ultrasonic frequency is matched to the LDR frequency. This also provides a high frequency selectivity of LDR-based imaging, i.e. the ability to detect a particular defect among a multitude of other defects in the material. Some examples are shown of how LDR can be used in non-destructive testing techniques, such as vibrometry, ultrasonic thermography and shearography, in order to enhance the sensitivity of defect visualization.
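
    The amplification exploited by LDR can be illustrated with the frequency response of a damped harmonic oscillator (a generic textbook model, not the paper's analysis): the displacement gain is |H(r)| = 1/sqrt((1 - r^2)^2 + (2*zeta*r)^2) with frequency ratio r = f/f_LDR and damping ratio zeta, so a lightly damped defect driven at its resonance vibrates far more strongly than off resonance:

```python
import math

def amplitude_gain(freq_ratio, damping_ratio):
    """Steady-state displacement gain of a damped harmonic oscillator."""
    r, z = freq_ratio, damping_ratio
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * z * r) ** 2)

# A lightly damped defect (zeta = 0.01, invented) driven at vs. off resonance;
# at r = 1 the gain is 1/(2*zeta) = 50.
on_resonance = amplitude_gain(1.0, 0.01)
off_resonance = amplitude_gain(0.5, 0.01)
print(f"gain at LDR: {on_resonance:.0f}x, off resonance: {off_resonance:.2f}x")
```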

  5. Development of an in vitro dendritic cell-based test for skin sensitizer identification.

    Science.gov (United States)

    Neves, Bruno Miguel; Rosa, Susana Carvalho; Martins, João Demétrio; Silva, Ana; Gonçalo, Margarida; Lopes, Maria Celeste; Cruz, Maria Teresa

    2013-03-18

    The sensitizing potential of chemicals is currently assessed using animal models. However, ethical and economic concerns and the recent European legislative framework triggered intensive research efforts in the development and validation of alternative methods. Therefore, the aim of this study was to develop an in vitro predictive test based on the analysis and integration of gene expression and intracellular signaling profiles of chemical-exposed skin-derived dendritic cells. Cells were treated with four known sensitizers and two nonsensitizers, and the effects on the expression of 20 candidate genes and the activation of MAPK, PI3K/Akt, and NF-κB signaling pathways were analyzed by real-time reverse transcription polymerase chain reaction and Western blotting, respectively. Genes Trxr1, Hmox1, Nqo1, and Cxcl10 and the p38 MAPK and JNK signaling pathways were identified as good predictor variables and used to construct a dichotomous classifier. For validation of the model, 12 new chemicals were then analyzed in a blind assay, and from these, 11 were correctly classified. Considering the total of 18 compounds tested here, 17 were correctly classified, representing a concordance of 94%, with a sensitivity of 92% (12 of 13 sensitizers identified) and a specificity of 100% (5 of 5 nonsensitizers identified). Additionally, we tested the ability of our model to discriminate sensitizers from nonallergenic but immunogenic compounds such as lipopolysaccharide (LPS). LPS was correctly classified as a nonsensitizer. Overall, our results indicate that the analysis of proposed gene and signaling pathway signatures in a mouse fetal skin-derived dendritic cell line represents a valuable model to be integrated in a future in vitro test platform.

  6. A discourse on sensitivity analysis for discretely-modeled structures

    Science.gov (United States)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely modeled systems. The methods are generally, but not exclusively, aimed at finite-element-modeled structures. Topics included are: selection of finite difference step sizes; special considerations for finite-difference sensitivity of iteratively solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
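
    The step-size selection issue listed first is the classic trade-off between truncation error (which shrinks with h) and round-off error (which grows as h shrinks). A quick demonstration on an arbitrary test function (standard numerical analysis, not from the review):

```python
import math

def central_difference(f, x, h):
    """Central finite-difference estimate of f'(x); truncation error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
errs = {}
for h in (1e-1, 1e-5, 1e-13):  # too large, near-optimal, round-off dominated
    errs[h] = abs(central_difference(math.sin, 1.0, h) - exact)
    print(f"h={h:.0e}: error={errs[h]:.1e}")
```

    The intermediate step size wins: the large step suffers from truncation error, while the tiny step suffers from cancellation in the subtraction.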

  7. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    Science.gov (United States)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
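
    To make the contrast between approaches concrete, here is a minimal sketch (toy additive model, invented for illustration) of the quantity that variance-based (Sobol-type) approaches estimate, the first-order index S_i = Var(E[Y|X_i]) / Var(Y), computed by conditioning on binned factor values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = x1 + 2.0 * x2                     # toy model: Var(Y) = 1/12 + 4/12

def first_order_index(x, y, bins=50):
    """Estimate Var(E[Y|X]) / Var(Y) by averaging Y within bins of X."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

print(first_order_index(x1, y))   # analytic value: 1/5
print(first_order_index(x2, y))   # analytic value: 4/5
```

    A one-factor-at-a-time scan around a base point would rank the two factors similarly here, but unlike the variance-based index it cannot detect interaction effects in non-additive models, which is one source of the conflicting conclusions the authors describe.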

  8. Testing the recent snow drought as an analog for climate warming sensitivity of Cascades snowpacks

    Science.gov (United States)

    Cooper, Matthew G.; Nolin, Anne W.; Safeeq, Mohammad

    2016-08-01

    Record low snowpack conditions were observed at Snow Telemetry stations in the Cascades Mountains, USA during the winters of 2014 and 2015. We tested the hypothesis that these winters are analogs for the temperature sensitivity of Cascades snowpacks. In the Oregon Cascades, the 2014 and 2015 winter air temperature anomalies were approximately +2 °C and +4 °C above the climatological mean. We used a spatially distributed snowpack energy balance model to simulate the sensitivity of multiple snowpack metrics to a +2 °C and +4 °C warming and compared our modeled sensitivities to observed values during 2014 and 2015. We found that for each +1 °C warming, modeled basin-mean peak snow water equivalent (SWE) declined by 22%-30%, the date of peak SWE (DPS) advanced by 13 days, the duration of snow cover (DSC) shortened by 31-34 days, and the snow disappearance date (SDD) advanced by 22-25 days. Our hypothesis was not borne out by the observations except in the case of peak SWE; other snow metrics did not resemble predicted values based on modeled sensitivities and thus are not effective analogs of future temperature sensitivities. Rather than just temperature, it appears that the magnitude and phasing of winter precipitation events, such as large, late spring snowfall, controlled the DPS, SDD, and DSC.

  9. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

    Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian updating scheme adjusts an ensemble of sensitivity measures. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of the LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and their updating, by a Bayesian scheme based on the Bradley–Terry model, when a new piece of evidence arrives, for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).
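
    Two of the named ensemble members, the Hellinger distance and the Kullback–Leibler divergence, compare the output pdfs built from bootstrap replicates. A grid-based sketch on two hypothetical Gaussian densities (illustrative stand-ins, not MELCOR output):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p = gauss(x, 0.0, 1.0)    # output pdf under one input configuration (made up)
q = gauss(x, 1.0, 1.0)    # output pdf after perturbing an input (made up)

# Hellinger distance and KL divergence by numerical integration on the grid
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)
kl = np.sum(p * np.log(p / q)) * dx

# Closed forms for equal-variance Gaussians: H = sqrt(1 - exp(-1/8)), KL = 1/2
print(hellinger, kl)
```

    The larger these distances for a given perturbed input, the more relevant that input is scored in the ensemble.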

  10. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and vertical distribution are crucial to the simulated concentration and deposition fields, as is also the choice of Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and the Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence was also evident over the Mediterranean, the North Sea and Baltic Sea, and some influence is seen over continental Europe, while this difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3 / OH oxidation mechanism. 
The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  11. Sensitivity of resource selection and connectivity models to landscape definition

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2017-01-01

    Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...

  12. Assessment of wavefront aberration and contrast sensitivity test as evaluation of postoperative visual quality

    OpenAIRE

    Min Gong; Yi Liu; Bi Yang

    2013-01-01

    Effective methods of evaluating postoperative visual quality include wavefront aberration and contrast sensitivity testing. This article provides a review of the concepts, clinical applications, and interactions of wavefront aberration and contrast sensitivity tests. It also provides a comprehensive assessment of the effectiveness of wavefront aberration and contrast sensitivity tests as evaluation tools for postoperative visual quality.

  13. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with inlet and outlet geometry. The filling method is a very important parameter for the strength of the mass and thereby the pressures...... as well as the flow pattern during discharge of the silo. During discharge a mixed flow pattern has been identified...

  14. A Workflow for Global Sensitivity Analysis of PBPK Models

    Directory of Open Access Journals (Sweden)

    Kevin eMcNally

    2011-06-01

    Full Text Available Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of PBPK models provides an ideal framework into which disparate in vitro and in vivo data can be integrated and utilised to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters, and any non-linear processes. In this report we define a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators.

  15. Computational Modeling of Simulation Tests.

    Science.gov (United States)

    1980-06-01

    Computational Modeling of Simulation Tests. G. Leigh, W. Chown, B. Harrison. Eric H. Wang Civil Engineering Research Facility, University of New Mexico, Albuquerque, June 1980.

  16. Combined calibration and sensitivity analysis for a water quality model of the Biebrza River, Poland

    NARCIS (Netherlands)

    Perk, van der M.; Bierkens, M.F.P.

    1995-01-01

    A study was performed to quantify the error in results of a water quality model of the Biebrza River, Poland, due to uncertainties in calibrated model parameters. The procedure used in this study combines calibration and sensitivity analysis. Finally, the model was validated to test the model capability

  17. The Sensitivity of State Differential Game Vessel Traffic Model

    Directory of Open Access Journals (Sweden)

    Lisowski Józef

    2016-04-01

    Full Text Available The paper presents the application of the theory of deterministic sensitivity of control systems to the sensitivity analysis of game control systems for moving objects, such as ships, airplanes and cars. The sensitivity of a parametric model of the game ship control process in collision situations is presented. First-order and k-th order sensitivity functions of the parametric model of the process control are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained by computer simulation in Matlab/Simulink, are presented. Finally, proposals are given regarding the use of sensitivity analysis in the practical synthesis of a computer-aided navigator support system in potential collision situations.

  18. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients, using sparse matrix technology for chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
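
    On the simplest possible kinetic scheme, the direct method reduces to integrating one auxiliary equation alongside the state equation. A forward-Euler sketch (hypothetical rate constant, standing in for the package's Gear integrator):

```python
import math

# State equation dc/dt = -k c; the sensitivity s = dc/dk obeys the coupled
# auxiliary equation ds/dt = (df/dc) s + df/dk = -k s - c.
k = 0.5
c, s = 1.0, 0.0
dt, steps = 1.0e-4, 20_000            # integrate to t = 2
for _ in range(steps):
    dc = -k * c
    ds = -k * s - c
    c += dt * dc
    s += dt * ds

print(c, math.exp(-1.0))              # analytic: c(t) = exp(-k t)
print(s, -2.0 * math.exp(-1.0))       # analytic: s(t) = -t exp(-k t)
```

    For a real mechanism the Jacobian df/dc becomes a sparse matrix, which is where the sparse-matrix triangularization described above pays off.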

  19. Contact sensitivity in mice evaluated by means of ear swelling and a radiometric test

    Energy Technology Data Exchange (ETDEWEB)

    Baeck, O.; Larsen, A.

    1982-04-01

    Contact sensitivity to picryl chloride was investigated by means of the ear swelling test and a radiometric test in order to establish optimal experimental conditions for these assays. Contact sensitivity was demonstrated as soon as 2 days after sensitization, with a maximum reaction 3-4 days after sensitization, when a 48 hr test reaction was registered. The test reaction was followed for 72 hr, and the maximum was reached after 24 hr and 48 hr for the ear swelling test and the radiometric test, respectively. Optimal sensitization was reached with a 7% solution of picryl chloride, and a maximum test reaction was found with 0.75-1.0% picryl chloride. It is concluded that both assays measure contact sensitivity in quantitative terms, and a future replacement of the guinea pig maximization test is discussed.

  20. Design, validation, and absolute sensitivity of a novel test for the molecular detection of avian pneumovirus.

    Science.gov (United States)

    Cecchinato, Mattia; Catelli, Elena; Savage, Carol E; Jones, Richard C; Naylor, Clive J

    2004-11-01

    This study describes attempts to increase and measure sensitivity of molecular tests to detect avian pneumovirus (APV). Polymerase chain reaction (PCR) diagnostic tests were designed for the detection of nucleic acid from an A-type APV genome. The objective was selection of PCR oligonucleotide combinations, which would provide the greatest test sensitivity and thereby enable optimal detection when used for later testing of field materials. Relative and absolute test sensitivities could be determined because of laboratory access to known quantities of purified full-length DNA copies of APV genome derived from the same A-type virus. Four new nested PCR tests were designed in the fusion (F) protein (2 tests), small hydrophobic (SH) protein (1 test), and nucleocapsid (N) protein (1 test) genes and compared with an established test in the attachment (G) protein gene. Known amounts of full-length APV genome were serially diluted 10-fold, and these dilutions were used as templates for the different tests. Sensitivities were found to differ between the tests, the most sensitive being the established G test, which proved able to detect 6,000 copies of the G gene. The G test contained predominantly pyrimidine residues at its 3' termini, and because of this, oligonucleotides for the most sensitive F test were modified to incorporate the same residue types at their 3' termini. This was found to increase sensitivity, so that after full 3' pyrimidine substitutions, the F test became able to detect 600 copies of the F gene.

  1. Sensitivity and specificity of neuropsychological tests for dementia ...

    African Journals Online (AJOL)

    specificity of a battery of neuropsychological tests in a sample of .... psychological scores and 95% confidence intervals .... psychological tests in elderly participants ..... verbal fluency tasks in the detection of dementia of the Alzheimer type.

  2. Total Sensitivity Index Calculation of Tool Requirement Model via Error Propagation Equation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new and convenient method is presented to calculate the total sensitivity indices defined by variance-based sensitivity analysis. By decomposing the output variance using error propagation equations, this method transforms the "double-loop" sampling procedure into a "single-loop" one and markedly reduces the computational cost of the analysis. In contrast to Sobol's method and the Fourier amplitude sensitivity test (FAST), which are limited to non-correlated variables, the new approach is suitable for correlated input variables. An application in a semiconductor assembling and test manufacturing (ATM) factory indicates that this approach performs well for additive models and simple non-additive models.
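
    The error-propagation shortcut is this paper's own contribution, but its target quantity, the total sensitivity index, can be illustrated with the standard two-matrix (Jansen) sampling estimator on a toy additive model (all values invented), which is the "double-loop"-cost baseline the method aims to avoid:

```python
import numpy as np

def model(x):
    return x[:, 0] + 2.0 * x[:, 1]    # toy model; total indices are 0.2 and 0.8

rng = np.random.default_rng(1)
n, d = 100_000, 2
A = rng.uniform(size=(n, d))          # two independent sample matrices
B = rng.uniform(size=(n, d))
fA = model(A)
var_y = np.concatenate([fA, model(B)]).var()

# Jansen estimator: S_Ti = E[(f(A) - f(A with column i from B))^2] / (2 Var(Y))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # resample only factor i
    s_tot = np.mean((fA - model(ABi)) ** 2) / (2.0 * var_y)
    print(f"S_T{i + 1} = {s_tot:.3f}")
```

    Each factor requires its own resampled matrix here, which is the per-factor sampling cost that a single-loop decomposition removes.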

  3. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  4. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
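
    The Latin hypercube sampling step used for the Monte Carlo uncertainty runs can be sketched generically (not the study's implementation): each factor's range is split into n equal strata, one draw is taken per stratum, and the columns are permuted independently.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0,1)^d with exactly one sample per stratum per dimension."""
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)                 # shuffle stratum order
        samples[:, j] = (perm + rng.uniform(size=n)) / n
    return samples

rng = np.random.default_rng(42)
x = latin_hypercube(8, 2, rng)
# Each column places exactly one sample in each of the 8 equal bins:
print(np.sort((x * 8).astype(int), axis=0).T)
```

    Compared with plain random sampling, the stratification guarantees coverage of each marginal distribution, which is why far fewer model runs are needed for a stable uncertainty estimate.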

  5. Determination of the International Sensitivity Index of a new near-patient testing device to monitor oral anticoagulant therapy--overview of the assessment of conformity to the calibration model.

    Science.gov (United States)

    Tripodi, A; Chantarangkul, V; Clerici, M; Negri, B; Mannucci, P M

    1997-08-01

    A key issue for the reliable use of new devices for the laboratory control of oral anticoagulant therapy with the INR is their conformity to the calibration model. In the past, their adequacy has mostly been assessed empirically without reference to the calibration model and the use of International Reference Preparations (IRP) for thromboplastin. In this study we reviewed the requirements to be fulfilled and applied them to the calibration of a new near-patient testing device (TAS, Cardiovascular Diagnostics) which uses thromboplastin-containing test cards for determination of the INR. On each of 10 working days citrated whole blood and plasma samples were obtained from 2 healthy subjects and 6 patients on oral anticoagulants. PT testing on whole blood and plasma was done with the TAS and parallel testing for plasma by the manual technique with the IRP CRM 149S. Conformity to the calibration model was judged satisfactory if the following requirements were met: (i) there was a linear relationship between paired log-PTs (TAS vs CRM 149S); (ii) the regression line drawn through patients data points, passed through those of normals; (iii) the precision of the calibration expressed as the CV of the slope was <3%. A good linear relationship was observed for calibration plots for plasma and whole blood (r = 0.98). Regression lines drawn through patients data points, passed through those of normals. The CVs of the slope were in both cases 2.2% and the ISIs were 0.965 and 1.000 for whole blood and plasma. In conclusion, our study shows that near-patient testing devices can be considered reliable tools to measure INR in patients on oral anticoagulants and provides guidelines for their evaluation.
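
    The calibration check can be sketched numerically with made-up PT pairs. Note the hedges: the WHO calibration model prescribes orthogonal regression of log-PTs, for which ordinary least squares stands in here, and the reference ISI, mean normal PT, and all measurements are invented for illustration.

```python
import numpy as np

# Paired prothrombin times (seconds): reference IRP by the manual technique
# vs. the new device. All numbers are hypothetical.
ref_pt = np.array([12.0, 13.0, 24.0, 30.0, 36.0, 45.0, 52.0, 60.0])
new_pt = np.array([11.5, 12.6, 22.1, 27.4, 32.8, 40.2, 46.0, 52.5])

# Calibration model: linear relationship between paired log-PTs.
slope, intercept = np.polyfit(np.log(new_pt), np.log(ref_pt), 1)

isi_ref = 1.0                  # assumed ISI of the reference thromboplastin
isi_new = isi_ref * slope      # ISI of the device under calibration

mnpt = 12.0                    # assumed mean normal PT for the device
inr = (new_pt / mnpt) ** isi_new
print(f"ISI = {isi_new:.3f}")
print(f"INR range: {inr.min():.2f} to {inr.max():.2f}")
```

    Conformity would additionally require verifying linearity, that the regression line through the patients' points passes through the normal subjects' points, and that the CV of the slope stays below 3%.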

  6. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    Science.gov (United States)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  7. Size-specific sensitivity: Applying a new structured population model

    Energy Technology Data Exchange (ETDEWEB)

    Easterling, M.R.; Ellner, S.P.; Dixon, P.M.

    2000-03-01

    Matrix population models require the population to be divided into discrete stage classes. In many cases, especially when classes are defined by a continuous variable, such as length or mass, there are no natural breakpoints, and the division is artificial. The authors introduce the integral projection model, which eliminates the need for division into discrete classes, without requiring any additional biological assumptions. Like a traditional matrix model, the integral projection model provides estimates of the asymptotic growth rate, stable size distribution, reproductive values, and sensitivities of the growth rate to changes in vital rates. However, where the matrix model represents the size distributions, reproductive value, and sensitivities as step functions (constant within a stage class), the integral projection model yields smooth curves for each of these as a function of individual size. The authors describe a method for fitting the model to data, and they apply this method to data on an endangered plant species, northern monkshood (Aconitum noveboracense), with individuals classified by stem diameter. The matrix and integral models yield similar estimates of the asymptotic growth rate, but the reproductive values and sensitivities in the matrix model are sensitive to the choice of stage classes. The integral projection model avoids this problem and yields size-specific sensitivities that are not affected by stage duration. These general properties of the integral projection model will make it advantageous for other populations where there is no natural division of individuals into stage classes.
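
    Numerically, the integral projection model n(y, t+1) = ∫ K(y, x) n(x, t) dx is implemented by discretizing the kernel on a fine mesh (midpoint rule), after which the asymptotic growth rate is the dominant eigenvalue, just as for a matrix model. The kernel below (survival/growth plus fecundity) is invented purely for illustration, not fitted to the monkshood data.

```python
import numpy as np

m, lo, hi = 100, 0.0, 10.0
h = (hi - lo) / m
x = lo + (np.arange(m) + 0.5) * h            # size-class midpoints

def kernel(y, x):
    surv = 0.8 / (1.0 + np.exp(-(x - 3.0)))                  # survival s(x)
    growth = np.exp(-0.5 * (y - (x + 0.5)) ** 2) / np.sqrt(2.0 * np.pi)
    fec = 0.05 * x * np.exp(-((y - 1.0) ** 2) / 0.5)         # offspring sizes
    return surv * growth + fec

K = h * kernel(x[:, None], x[None, :])       # midpoint-rule projection matrix
lam = np.abs(np.linalg.eigvals(K)).max()     # asymptotic growth rate
print(f"lambda = {lam:.3f}")
```

    Because the mesh is a numerical device rather than a biological stage division, refining it changes only the quadrature accuracy, not the model, which is the property that makes the resulting sensitivities independent of stage duration.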

  8. Evaluating sub-national building-energy efficiency policy options under uncertainty: Efficient sensitivity testing of alternative climate, technological, and socioeconomic futures in a regional integrated-assessment model.

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Daly, Don S.; Zhou, Yuyu; Rice, Jennie S.; Patel, Pralit L.; McJeon, Haewon C.; Kyle, G. Page; Kim, Son H.; Eom, Jiyong; Clarke, Leon E.

    2014-05-01

    Improving the energy efficiency of the building stock, commercial equipment and household appliances can have a major impact on energy use, carbon emissions, and building services. Subnational regions such as U.S. states wish to increase their energy efficiency, reduce carbon emissions or adapt to climate change. Evaluating subnational policies to reduce energy use and emissions is difficult because of the uncertainties in socioeconomic factors, technology performance and cost, and energy and climate policies. Climate change may undercut such policies. Assessing these uncertainties can be a significant modeling and computation burden. As part of this uncertainty assessment, this paper demonstrates how a decision-focused sensitivity analysis strategy using fractional factorial methods can be applied to reveal the important drivers for detailed uncertainty analysis.
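
    The fractional-factorial screening idea can be shown in miniature: a hypothetical two-level 2^(4-1) design with generator D = ABC estimates four main effects from 8 runs instead of 16. The response function is made up for illustration.

```python
import numpy as np
from itertools import product

full = np.array(list(product([-1, 1], repeat=3)))      # full design in A, B, C
design = np.column_stack([full, full.prod(axis=1)])    # alias column D = ABC

def response(row):
    a, b, c, d = row                                   # made-up linear response
    return 10.0 + 3.0 * a - 2.0 * b + 0.5 * c + 0.1 * d

y = np.array([response(row) for row in design])
effects = design.T @ y / (len(y) / 2)                  # main-effect estimates
print(dict(zip("ABCD", np.round(effects, 2).tolist())))
```

    The price of fractionation is aliasing: with D = ABC, the estimated D effect is confounded with the ABC interaction, a trade screening designs accept in exchange for far fewer model runs, which is what makes them attractive for expensive integrated-assessment simulations.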

  9. Proceedings Tenth Workshop on Model Based Testing

    OpenAIRE

    Pakulin, Nikolay; Petrenko, Alexander K.; Schlingloff, Bernd-Holger

    2015-01-01

    The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but is also capable of finding errors in the models themselves. The intent of this workshop is to bring together researchers and users of model...

  10. Remote control missile model test

    Science.gov (United States)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  11. Sensitivity and specificity of the nickel spot (dimethylglyoxime) test

    DEFF Research Database (Denmark)

    Thyssen, Jacob P; Skare, Lizbet; Lundgren, Lennart;

    2010-01-01

    The accuracy of the dimethylglyoxime (DMG) nickel spot test has been questioned because of false negative and positive test reactions. The EN 1811, a European standard reference method developed by the European Committee for Standardization (CEN), is fine-tuned to estimate nickel release around the limit value of the EU Nickel Directive from products intended to come into direct and prolonged skin contact. Because assessments according to EN 1811 are expensive to perform, time consuming, and may destroy the test item, it should be of great value to know the accuracy of the DMG screening test....

  12. Sensitivity test of uplifted height of flue gas from cooling tower based on S/P model

    Institute of Scientific and Technical Information of China (English)

    杨洪斌; 刘玉彻; 汪宏宇; 邹旭东; 张云海

    2011-01-01

    The S/P model of the German guideline VDI 3784 is a three-dimensional fluid-dynamic integral model whose equations describe the conservation of mass, momentum, static pollutant mass concentration, and energy for an infinitesimal volume element. Using this model, the sensitivity of the plume rise height of cooling tower flue gas was tested under different emission parameters and atmospheric conditions. The results indicate that, of the three meteorological factors affecting plume rise (wind speed, air temperature, and humidity), changes in wind speed and air temperature have a large influence on the results, whereas humidity has a small one. Under stability class D, when the ambient wind speed increases from 0.1 m/s to 15.0 m/s, the rise height changes from 711.7 m to 38.5 m. With increasing ambient temperature, the rise height decreases markedly and monotonically; under stability class A, when the ambient temperature rises from 10 °C to 40 °C, the maximum plume rise height drops from 688.9 m to 45.1 m, a decrease of more than 14-fold. Changes in ambient humidity have no pronounced effect on the rise height: for stability classes E and F, when the ambient humidity increases from 20% to 70%, the maximum rise heights decrease from 115.3 m and 84.6 m to 112.9 m and 81.7 m, reductions of 3.43% and 2.08%, respectively. Of the other three factors affecting plume rise (cooling tower diameter, flue gas exit velocity, and mixed-gas temperature), changes in the mixed-gas temperature have a large influence, whereas the cooling tower diameter and exit velocity have smaller effects. Across all stability classes, when the exit temperature changes from 20 °C to 90 °C, the plume rise height increases 1.2-13.3-fold; when the cooling tower diameter changes from 30 m to 90 m, it increases only 0.63-1.40-fold; and when the exit velocity changes from 2.5 m/s to 8.0 m/s, it increases 0.24-0.74-fold.

  13. Sensitivity of Cirrus Simulations in Idealized Situations: The WG2 Test Cases

    Science.gov (United States)

    Starr, David OC.

    1998-01-01

    GCSS Cirrus Cloud Systems Working Group (WG2) is presently conducting a comparison of cirrus cloud models for idealized initial conditions. The experiments involve binary (off/on) tests of model sensitivity to infrared radiative processes, thermal stratification, and vertical wind shear for situations of weakly forced (3 cm/s uplift) cold (-60 to -70 C) and warm (-35 to -50 C) cirrus clouds. A range of model types is involved, including parcel, SCM, 2-D CRM, 3-D CRM, and LES models. The test cases will be described, and results from 2-dimensional cirrus cloud models with bulk microphysics (implicit second moment scheme) and explicit bin microphysics will be compared. Vertical ice mass flux (particle fall speed) is a critical model component leading to significant intermodel differences. Efforts are ongoing to better quantify this aspect. Future plans of WG2 will also be briefly described; they include model comparisons for a well-observed case of cold (ARM IOP) cirrus and of warm (EUCREX) cirrus, as well as a joint activity with WG4 to consider the treatment of anvil cirrus in a variety of models.

  14. Sensitivity Tests for the Unprotected Events of the Prototype Gen-IV SFR

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chiwoong; Lee, Kwilim; Jeong, Jaeho; Yu, Jin; An, Sangjun; Lee, Seung Won; Chang, Wonpyo; Ha, Kwiseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Unprotected Transient Over Power (UTOP), Unprotected Loss Of Flow (ULOF), and Unprotected Loss Of Heat Sink (ULOHS) are selected as ATWS events. Among these accidents, the ULOF event shows the lowest clad temperature, whereas the ULOHS event shows the highest peak clad temperature, due to the positive CRDL/RV expansion reactivity feedback and insufficient DHRS capacity. In this study, sensitivity tests are conducted. In the case of the UTOP event, a sensitivity test for the reactivity insertion amount and rate was conducted. This analysis can provide a requirement for the margin of the control rod stop system (CRSS). Currently, the reactivity feedback model for the PGSFR has not yet been validated; however, the reactivity feedback models in MARS-LMR are being validated against various plant-based data, including EBR-II SHRT. The ATWS events for the PGSFR classified in the design extended conditions, including UTOP, ULOF, and ULOHS, are analyzed with MARS-LMR. In this study, sensitivity tests for the reactivity insertion amount and rate in the UTOP event are conducted. The reactivity insertion amount is obviously an influential parameter; since it can set a design requirement for the CRSS, this sensitivity result is very important to the CRSS. In addition, sensitivity tests for the weighting factor in the radial expansion reactivity model are carried out. The weighting factor for the grid plate, W{sub GP}, which represents the contribution of the grid plate to the feedback, is varied for all unprotected events. The grid plate expansion is governed by the core inlet temperature. As W{sub GP} is increased, the power in the UTOP and ULOF events increases, whereas the power in the ULOHS event decreases. Higher power during the transient means lower reactivity feedback and smaller expansion. Thus, the core outlet temperature rise is dominant in the UTOP and ULOF events, whereas the core inlet temperature rise is dominant in the ULOHS event. Therefore, the grid plate

  15. Sensitivity and specificity of the nickel spot (dimethylglyoxime) test

    DEFF Research Database (Denmark)

    Thyssen, Jacob P; Skare, Lizbet; Lundgren, Lennart

    2010-01-01

    the limit value of the EU Nickel Directive from products intended to come into direct and prolonged skin contact. Because assessments according to EN 1811 are expensive to perform, time consuming, and may destroy the test item, it should be of great value to know the accuracy of the DMG screening test....

  16. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 x CO{sub 2} sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)

  17. Sensitivity Test for Benchmark Analysis of EBR-II SHRT-17 using MARS-LMR

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chiwoong; Ha, Kwiseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    This study was conducted as a part of the IAEA Coordinated Research Project (CRP), 'Benchmark Analyses of an EBR-II Shutdown Heat Removal Test (SHRT)'. EBR-II SHRT-17 (loss of flow) was analyzed with MARS-LMR, the safety analysis code for the Prototype Gen-IV Sodium-cooled Fast Reactor (PGSFR) being developed at KAERI. In the current stage of the CRP, blind test results are compared with the released experimental data. Some influential parameters are selected for the sensitivity test of the EBR-II SHRT-17. The major goal of this study is to understand the behaviors of the physical parameters and to establish a modeling strategy for better estimation.

  18. Defining the true sensitivity of culture for the diagnosis of melioidosis using Bayesian latent class models.

    Directory of Open Access Journals (Sweden)

    Direk Limmathurotsakul

    Full Text Available BACKGROUND: Culture remains the diagnostic gold standard for many bacterial infections, and the method against which other tests are often evaluated. Specificity of culture is 100% if the pathogenic organism is not found in healthy subjects, but the sensitivity of culture is more difficult to determine and may be low. Here, we apply Bayesian latent class models (LCMs) to data from patients with a single Gram-negative bacterial infection and define the true sensitivity of culture together with the impact of misclassification by culture on the reported accuracy of alternative diagnostic tests. METHODS/PRINCIPAL FINDINGS: Data from published studies describing the application of five diagnostic tests (culture and four serological tests) to a patient cohort with suspected melioidosis were re-analysed using several Bayesian LCMs. Sensitivities, specificities, and positive and negative predictive values (PPVs and NPVs) were calculated. Of 320 patients with suspected melioidosis, 119 (37%) had culture-confirmed melioidosis. Using the final model (Bayesian LCM with conditional dependence between serological tests), the sensitivity of culture was estimated to be 60.2%. Prediction accuracy of the final model was assessed using a classification tool to grade patients according to the likelihood of melioidosis, which indicated that an estimated disease prevalence of 61.6% was credible. Estimates of the sensitivities, specificities, PPVs and NPVs of the four serological tests were significantly different from previously published values in which culture was used as the gold standard. CONCLUSIONS/SIGNIFICANCE: Culture has low sensitivity and low NPV for the diagnosis of melioidosis and is an imperfect gold standard against which to evaluate alternative tests. Models should be used to support the evaluation of diagnostic tests with an imperfect gold standard.
It is likely that the poor sensitivity/specificity of culture is not specific for melioidosis, but rather a generic
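The headline quantities in this record (sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion table. A minimal sketch with hypothetical counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all diseased
        "specificity": tn / (tn + fp),   # true negatives / all healthy
        "ppv": tp / (tp + fp),           # P(disease | test positive)
        "npv": tn / (tn + fn),           # P(no disease | test negative)
    }

# Hypothetical counts: when the reference standard (e.g. culture) itself
# misses cases, apparent "false positives" of an index test inflate.
m = diagnostic_metrics(tp=90, fp=40, fn=10, tn=180)
print(m)
```

Note that all four values shift when the reference test is imperfect, which is exactly why the abstract argues for latent class models instead of treating culture as ground truth.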

  19. Application of simplified model to sensitivity analysis of solidification process

    Directory of Open Access Journals (Sweden)

    R. Szopa

    2007-12-01

    Full Text Available The sensitivity models of the thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the numerical solution of inverse problems. Sensitivity models can be constructed using the direct approach, that is, by differentiating the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the equations and conditions obtained can be very complex from both the mathematical and numerical points of view. In that case the alternative approach, based on the differential quotient, can be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are shown.
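The two approaches contrasted in the abstract, direct differentiation of the governing equations versus a difference quotient, can be compared on a toy cooling curve standing in for the casting-mould system. The exponential model and its parameter values below are illustrative only:

```python
import math

def temperature(t, k, T0=1500.0, T_env=20.0):
    """Toy casting-cooling model: exponential decay toward mould temperature."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

def sensitivity_fd(t, k, dk=1e-6):
    """Approximate sensitivity dT/dk via a central difference quotient."""
    return (temperature(t, k + dk) - temperature(t, k - dk)) / (2 * dk)

def sensitivity_exact(t, k, T0=1500.0, T_env=20.0):
    """Direct (exact) sensitivity obtained by differentiating the model."""
    return -t * (T0 - T_env) * math.exp(-k * t)

t, k = 30.0, 0.05
print(sensitivity_fd(t, k), sensitivity_exact(t, k))
```

For a smooth model the two agree closely; the appeal of the difference quotient is that it never requires the (possibly very complex) differentiated equations.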

  20. Global in Time Analysis and Sensitivity Analysis for the Reduced NS- α Model of Incompressible Flow

    Science.gov (United States)

    Rebholz, Leo; Zerfas, Camille; Zhao, Kun

    2017-09-01

    We provide a detailed global-in-time analysis, together with sensitivity analysis and testing, for the recently proposed (by the authors) reduced NS-α model. We extend the known analysis of the model to the global-in-time case by proving it is globally well-posed, and also prove some new results for its long-time treatment of energy. We also derive the PDE system that describes the sensitivity of the model with respect to the filtering radius parameter, and prove it is well-posed. An efficient numerical scheme for the sensitivity system is then proposed and analyzed, and proven to be stable and optimally accurate. Finally, two physically meaningful test problems are simulated: channel flow past a cylinder (including lift and drag calculations) and turbulent channel flow with Re_τ = 590. The numerical results reveal that sensitivity is created near boundaries, and thus this is where the choice of the filtering radius is most critical.

  1. Highly sensitive multianalyte immunochromatographic test strip for rapid chemiluminescent detection of ractopamine and salbutamol

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Hongfei; Han, Jing; Yang, Shijia; Wang, Zhenxing; Wang, Lin; Fu, Zhifeng, E-mail: fuzf@swu.edu.cn

    2014-08-11

    Graphical abstract: A multianalyte immunochromatographic test strip was developed for the rapid detection of two β{sub 2}-agonists. Due to the application of chemiluminescent detection, this quantitative method shows much higher sensitivity. - Highlights: • An immunochromatographic test strip was developed for detection of multiple β{sub 2}-agonists. • The whole assay process can be completed within 20 min. • The proposed method shows much higher sensitivity due to the application of CL detection. • It is a portable analytical tool suitable for field analysis and rapid screening. - Abstract: A novel immunochromatographic assay (ICA) was proposed for the rapid multianalyte assay of β{sub 2}-agonists, utilizing ractopamine (RAC) and salbutamol (SAL) as the models. Owing to the introduction of the chemiluminescent (CL) approach, the proposed protocol shows much higher sensitivity. In this work, the described ICA was based on a competitive format, and horseradish peroxidase-tagged antibodies were used as highly sensitive CL probes. Quantitative analysis of β{sub 2}-agonists was achieved by recording the CL signals of the probes captured on the two test zones of the nitrocellulose membrane. Under the optimum conditions, RAC and SAL could be detected within the linear ranges of 0.50–40 and 0.10–50 ng mL{sup −1}, with detection limits of 0.20 and 0.040 ng mL{sup −1} (S/N = 3), respectively. The whole process for multianalyte immunoassay of RAC and SAL can be completed within 20 min. Furthermore, the test strip was validated with spiked swine urine samples and the results showed that this method was reliable in measuring β{sub 2}-agonists in swine urine. This CL-based multianalyte test strip shows a series of advantages such as high sensitivity, ideal selectivity, simple manipulation, high assay efficiency and low cost. Thus, it opens up a new pathway for rapid screening and field analysis, and shows a promising prospect in food safety.
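The quoted detection limits follow the usual S/N = 3 convention: the analyte level whose signal exceeds the blank by three blank standard deviations, divided by the calibration slope. A sketch with hypothetical blank readings and slope (not the paper's calibration data):

```python
import statistics

def detection_limit(blank_signals, slope):
    """LOD under the S/N = 3 convention: 3 * SD(blank) / calibration slope."""
    return 3 * statistics.stdev(blank_signals) / slope

blanks = [102.0, 98.5, 101.2, 99.8, 100.5, 97.9]  # hypothetical blank CL readings
slope = 25.0  # hypothetical calibration slope, CL signal per (ng/mL)
lod = detection_limit(blanks, slope)
print(f"LOD ~ {lod:.2f} ng/mL")
```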

  2. Sensitivity of a Shallow-Water Model to Parameters

    CERN Document Server

    Kazantsev, Eugene

    2011-01-01

    An adjoint-based technique is applied to a shallow-water model in order to estimate the influence of the model's parameters on the solution. Among the parameters considered are the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter and the amplitude of the wind stress. Their influence is analyzed from three points of view: 1. the flexibility of the model with respect to a parameter, i.e. the lowest value of the cost function that can be obtained in a data assimilation experiment that controls this parameter; 2. the possibility of improving the model by controlling the parameter, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. the sensitivity of the model solution to the parameter in the classical sense. The latter implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the model...
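The classical sensitivity in point 3 is just a derivative of a model diagnostic with respect to the parameter. One robust way to evaluate such derivatives without writing adjoint code is the complex-step method, sketched here on a toy damped-wave diagnostic standing in for the shallow-water solution (the model, parameter and window are all illustrative):

```python
import cmath

def diagnostic(p, n=200, dt=0.05):
    """Sum-of-squares diagnostic of a toy damped wave u(t) = exp(-p t) cos(t),
    with p playing the role of a friction/viscosity coefficient."""
    return sum((cmath.exp(-p * k * dt) * cmath.cos(k * dt)) ** 2 for k in range(n))

def sensitivity(p, h=1e-20):
    """dJ/dp via the complex-step method: Im J(p + ih) / h.
    Unlike a difference quotient, it suffers no subtractive cancellation."""
    return diagnostic(complex(p, h)).imag / h

print(sensitivity(0.3))
```

The negative sign of the result reflects that stronger damping lowers the energy-like diagnostic, the same qualitative statement an adjoint sensitivity would deliver.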

  3. Test Prioritization based on Change Sensitivity: an Industrial Case Study

    NARCIS (Netherlands)

    Nguyen, Cu; Tonella, Paolo; Vos, Tanja; Condori, Nelly; Mendelson, Bilha; Citron, Daniel; Shehory, Onn

    2014-01-01

    In the context of service-based systems, applications access software services, either home-built or third-party, to orchestrate their functionality. Since such services evolve independently from the applications, the latter need to be tested to make sure that they work properly with the updated or

  4. Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis

    Science.gov (United States)

    2006-01-01

    Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis, by Mostafiz R. Chowdhury and Ala Tabiei. ARL-TR-3703, Adelphi, MD 20783-1145, January 2006.

  5. On the reliability of sensitivity test methods for submicrometer-sized RDX and HMX particles

    NARCIS (Netherlands)

    Radacsi, N.; Bouma, R.H.B.; Krabbendam-La Haye, E.L.M.; Horst, J.H. ter; Stankiewicz, A.I.; Heijden, A.E.D.M. van der

    2013-01-01

    Submicrometer-sized RDX and HMX crystals were produced by electrospray crystallization and submicrometer-sized RDX crystals were produced by plasma-assisted crystallization. Impact and friction sensitivity tests and ballistic impact chamber tests were performed to determine the product sensitivity.

  7. Sensitivity of some asphalts to the wheel tracking test

    OpenAIRE

    Dubois, Vincent; DE LA ROCHE, Chantal; Buisson, Sébastien

    2008-01-01

    In the framework of LCPC fatigue carrousel studies [Corté and others, 1994; Gramsammer and others, 1994], rutting measurements on several mixes have been carried out. For each mix, the granular distribution was identical, but different types of bitumen were chosen: classical 50/70 bitumen, EVA-modified bitumen and SBS-modified bitumen. These mixes had been designed in a previous laboratory test campaign. Comparing field and laboratory results, a different behaviour is observed...

  8. Improving model fidelity and sensitivity for complex systems through empirical information theory

    Science.gov (United States)

    Majda, Andrew J.; Gershgorin, Boris

    2011-01-01

    In many situations in contemporary science and engineering, the analysis and prediction of crucial phenomena occur often through complex dynamical equations that have significant model errors compared with the true signal in nature. Here, a systematic information theoretic framework is developed to improve model fidelity and sensitivity for complex systems including perturbation formulas and multimodel ensembles that can be utilized to improve both aspects of model error simultaneously. A suite of unambiguous test models is utilized to demonstrate facets of the proposed framework. These results include simple examples of imperfect models with perfect equilibrium statistical fidelity where there are intrinsic natural barriers to improving imperfect model sensitivity. Linear stochastic models with multiple spatiotemporal scales are utilized to demonstrate this information theoretic approach to equilibrium sensitivity, the role of increasing spatial resolution in the information metric for model error, and the ability of imperfect models to capture the true sensitivity. Finally, an instructive statistically nonlinear model with many degrees of freedom, mimicking the observed non-Gaussian statistical behavior of tracers in the atmosphere, with corresponding imperfect eddy-diffusivity parameterization models are utilized here. They demonstrate the important role of additional stochastic forcing of imperfect models in order to systematically improve the information theoretic measures of fidelity and sensitivity developed here. PMID:21646534
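For Gaussian statistics, the information-theoretic model-error metric used in this line of work reduces to a closed-form relative entropy between the true and model distributions. A minimal sketch; the "truth" and model moments below are hypothetical, chosen only to show how the metric ranks two imperfect models:

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy KL( N(mu_p, var_p) || N(mu_q, var_q) ) in nats."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# "Truth" vs. two imperfect models: model B (small mean bias, correct
# variance) carries less information loss than model A (variance off by 4x).
truth = (0.0, 1.0)
model_a = (0.0, 4.0)
model_b = (0.2, 1.0)
print(kl_gaussian(*truth, *model_a), kl_gaussian(*truth, *model_b))
```

A perfect model gives zero relative entropy, which is the "equilibrium statistical fidelity" baseline the abstract refers to.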

  9. Testing Strategies for Model-Based Development

    Science.gov (United States)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  10. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
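A variance-based sensitivity analysis like the one described attributes shares of the output variance to each input, including non-linear and non-additive effects. A minimal pick-freeze Monte Carlo estimator of first-order Sobol' indices, demonstrated on a toy linear stand-in rather than the sea ice model itself:

```python
import random

def model(x1, x2, x3):
    """Toy stand-in for a sea-ice diagnostic: x1 dominates, x3 is inert."""
    return 4.0 * x1 + x2 + 0.0 * x3

def first_order_indices(n=50_000, seed=7):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(3)] for _ in range(n)]
    B = [[rng.random() for _ in range(3)] for _ in range(n)]
    yA = [model(*row) for row in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(3):
        # Use sample B everywhere except column i, which is "frozen" from A:
        # Cov(yA, yAB) / Var(yA) estimates the first-order index S_i.
        yAB = [model(*[a[j] if j == i else b[j] for j in range(3)])
               for a, b in zip(A, B)]
        cov = sum(p * q for p, q in zip(yA, yAB)) / n - mean * (sum(yAB) / n)
        indices.append(cov / var)
    return indices

S = first_order_indices()
print([round(s, 3) for s in S])
```

For this linear toy the analytic indices are 16/17, 1/17 and 0; production studies (e.g. of the 39 CICE parameters) use the same construction with far more samples and a real model in the loop.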

  11. Detecting tipping points in ecological models with sensitivity analysis

    NARCIS (Netherlands)

    Broeke, G.A. ten; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, J.

    2016-01-01

    Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The applicabi

  13. ABSTRACT: CONTAMINANT TRAVEL TIMES FROM THE NEVADA TEST SITE TO YUCCA MOUNTAIN: SENSITIVITY TO POROSITY

    Energy Technology Data Exchange (ETDEWEB)

    Karl F. Pohlmann; Jianting Zhu; Jenny B. Chapman; Charles E. Russell; Rosemary W. H. Carroll; David S. Shafer

    2008-09-05

    Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as a geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, we investigate the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to the YM area by estimating the timeframe for advective travel and its uncertainty resulting from porosity value uncertainty for hydrogeologic units (HGUs) in the region. We perform sensitivity analysis to determine the most influential HGUs on advective radionuclide travel times from the NTS to the YM area. Groundwater pathways and advective travel times are obtained using the particle tracking package MODPATH and flow results from the Death Valley Regional Flow System (DVRFS) model by the U.S. Geological Survey. Values and uncertainties of HGU porosities are quantified through evaluation of existing site porosity data and expert professional judgment and are incorporated through Monte Carlo simulations to estimate mean travel times and uncertainties. We base our simulations on two steady state flow scenarios for the purpose of long term prediction and monitoring. The first represents pre-pumping conditions prior to groundwater development in the area in 1912 (the initial stress period of the DVRFS model). The second simulates 1998 pumping (assuming steady state conditions resulting from pumping in the last stress period of the DVRFS model). Considering underground tests in a clustered region around Pahute Mesa on the NTS as initial particle positions, we track these particles forward using MODPATH to identify hydraulically downgradient groundwater discharge zones and to determine which flowpaths will intercept the YM area. Out of the 71 tests in the saturated zone, flowpaths of 23 intercept the YM area under the pre-pumping scenario. For the 1998 pumping scenario, flowpaths from 55 of the 71 tests intercept the YM area. The results illustrate that mean

  14. Contaminant Travel Times From the Nevada Test Site to Yucca Mountain: Sensitivity to Porosity

    Science.gov (United States)

    Pohlmann, K. F.; Zhu, J.; Chapman, J. B.; Russell, C. E.; Carroll, R. W.; Shafer, D. S.

    2008-12-01

    Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as a geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, we investigate the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to the YM area by estimating the time frame for advective travel and its uncertainty resulting from porosity value uncertainty for hydrogeologic units (HGUs) in the region. We perform sensitivity analysis to determine the most influential HGUs on advective radionuclide travel times from the NTS to the YM area. Groundwater pathways and advective travel times are obtained using the particle tracking package MODPATH and flow results from the Death Valley Regional Flow System (DVRFS) model by the U.S. Geological Survey. Values and uncertainties of HGU porosities are quantified through evaluation of existing site porosity data and expert professional judgment and are incorporated through Monte Carlo simulations to estimate mean travel times and uncertainties. We base our simulations on two steady state flow scenarios for the purpose of long term prediction and monitoring. The first represents pre-pumping conditions prior to groundwater development in the area in 1912 (the initial stress period of the DVRFS model). The second simulates 1998 pumping (assuming steady state conditions resulting from pumping in the last stress period of the DVRFS model). Considering underground tests in a clustered region around Pahute Mesa on the NTS as initial particle positions, we track these particles forward using MODPATH to identify hydraulically downgradient groundwater discharge zones and to determine which flowpaths will intercept the YM area. Out of the 71 tests in the saturated zone, flowpaths of 23 intercept the YM area under the pre-pumping scenario. For the 1998 pumping scenario, flowpaths from 55 of the 71 tests intercept the YM area. The results illustrate that mean

  15. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    Science.gov (United States)

    Grouillet, Benjamin; Ruelland, Denis; Vaittinada Ayar, Pradeebane; Vrac, Mathieu

    2016-03-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20-year period (1986-2005) to capture different climatic conditions in the basins. The daily GR4j conceptual model was used to simulate streamflow that was eventually evaluated at a 10-day time step. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled data sets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the cumulative distribution function-transform approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two general circulation models (GCMs) (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement in runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. 
More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range
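CDF-transform-style downscaling rests on quantile mapping: pass a model value through its own empirical CDF, then through the inverse empirical CDF of the observations. A dependency-free sketch with hypothetical precipitation samples (the full CDFt method also models how the CDF shifts under climate change, which is omitted here):

```python
import bisect

def quantile_map(value, model_sorted, obs_sorted):
    """x_corrected = F_obs^{-1}( F_model(x) ) with empirical, sorted samples."""
    rank = bisect.bisect_left(model_sorted, value) / len(model_sorted)
    idx = min(int(rank * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]

model_precip = sorted([0.0, 0.1, 0.5, 1.2, 2.0, 3.5, 5.0, 8.0])   # hypothetical
obs_precip   = sorted([0.0, 0.0, 0.8, 2.0, 3.0, 6.0, 9.0, 15.0])  # hypothetical
corrected = [quantile_map(v, model_precip, obs_precip) for v in (0.5, 5.0)]
print(corrected)
```

A model value at, say, the 25th percentile of the model distribution is replaced by the 25th percentile of the observed distribution, which corrects systematic biases before the data are fed to a hydrological model like GR4j.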

  16. Age Sensitivity of Behavioral Tests and Brain Substrates of Normal Aging in Mice

    OpenAIRE

    Kennard, John A.; Woodruff-Pak, Diana S.

    2011-01-01

    Knowledge of age sensitivity, the capacity of a behavioral test to reliably detect age-related changes, has utility in the design of experiments to elucidate processes of normal aging. We review the application of these tests in studies of normal aging and compare and contrast the age sensitivity of the Barnes maze, eyeblink classical conditioning, fear conditioning, Morris water maze, and rotorod. These tests have all been implemented to assess normal age-related changes in learning and memo...

  17. Sensitization capacity of acrylated prepolymers in ultraviolet curing inks tested in the guinea pig.

    Science.gov (United States)

    Björkner, B

    1981-01-01

    One commonly used prepolymer in ultraviolet (UV) curing inks is epoxy acrylate. Of 6 men with dermatitis contracted from UV-curing inks, 2 had positive patch test reaction to epoxy acrylate. None reacted to the chemically related bisphenol A dimethacrylate. The sensitization capacity of epoxy acrylate and bisphenol A dimethacrylate performed with the "Guinea pig maximization test" (GPM) shows epoxy acrylate to be an extreme sensitizer and bisphenol A dimethacrylate a moderate sensitizer. Cross-reaction between the two substances occurs. The epoxy resin oligomer MW 340 present in the epoxy acrylate also sensitized some animals.

  18. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The results show that the most sensitive parameter is human immigration, followed by the human recovery rate, and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
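Sensitivity analyses of compartmental models typically rank parameters by the normalized forward sensitivity index S_p = (∂R0/∂p)(p/R0). The sketch below applies it to a generic SIR-style R0 with hypothetical parameter values, not the Lassa model's actual five-compartment expression:

```python
def r0(beta, gamma, mu):
    """Generic SIR-style basic reproduction number (illustrative only)."""
    return beta / (gamma + mu)

def sensitivity_index(param, base, h=1e-7):
    """Normalized forward sensitivity index S_p = (dR0/dp) * p / R0,
    with the derivative taken by a central difference."""
    lo, hi = dict(base), dict(base)
    lo[param] -= h
    hi[param] += h
    dr0 = (r0(**hi) - r0(**lo)) / (2 * h)
    return dr0 * base[param] / r0(**base)

base = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}  # hypothetical parameter values
for p in base:
    print(p, round(sensitivity_index(p, base), 4))
```

An index of +1 for beta means a 1% rise in the contact rate raises R0 by 1%; the negative indices for gamma and mu identify parameters whose increase suppresses transmission.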

  19. Linear Logistic Test Modeling with R

    Directory of Open Access Journals (Sweden)

    Purya Baghaei

    2014-01-01

    Full Text Available The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The applications of the model in test validation, hypothesis testing, cross-cultural studies of test bias, rule-based item generation, and investigating construct-irrelevant factors which contribute to item difficulty are explained. The model is applied to an English as a foreign language reading comprehension test and the results are discussed.

  20. Testing Linear Models for Ability Parameters in Item Response Models

    NARCIS (Netherlands)

    Glas, Cees A.W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: a likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood framework.

  1. Identifying Spatially Variable Sensitivity of Model Predictions and Calibrations

    Science.gov (United States)

    McKenna, S. A.; Hart, D. B.

    2005-12-01

    Stochastic inverse modeling provides an ensemble of stochastic property fields, each calibrated to measured steady-state and transient head data. These calibrated fields are used as input for predictions of other processes (e.g., contaminant transport, advective travel time). Use of the entire ensemble of fields transfers spatial uncertainty in hydraulic properties to uncertainty in the predicted performance measures. A sampling-based sensitivity coefficient is proposed to determine the sensitivity of the performance measures to the uncertain values of hydraulic properties at every cell in the model domain. The basis of this sensitivity coefficient is the Spearman rank correlation coefficient. Sampling-based sensitivity coefficients are demonstrated using a recent set of transmissivity (T) fields created through a stochastic inverse calibration process for the Culebra dolomite in the vicinity of the WIPP site in southeastern New Mexico. The stochastic inverse models were created using a unique approach to condition a geologically-based conceptual model of T to measured T values via a multiGaussian residual field. This field is calibrated to both steady-state and transient head data collected over an 11 year period. Maps of these sensitivity coefficients provide a means of identifying the locations in the study area to which both the value of the model calibration objective function and the predicted travel times to a regulatory boundary are most sensitive to the T and head values. These locations can be targeted for deployment of additional long-term monitoring resources. Comparison of areas where the calibration objective function and the travel time have high sensitivity shows that these are not necessarily coincident with regions of high uncertainty. The sampling-based sensitivity coefficients are compared to analytically derived sensitivity coefficients at the 99 pilot point locations. Results of the sensitivity mapping exercise are being used in combination
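The sampling-based sensitivity coefficient described here is a Spearman rank correlation computed across the calibrated ensemble, cell by cell. A dependency-free sketch for a single cell (assuming no tied values; the ensemble numbers are hypothetical):

```python
def rank(xs):
    """0-based ranks of the values in xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ensemble: transmissivity at one model cell vs. the predicted
# travel time. A strong negative rank correlation flags a sensitive location.
T = [2.0, 5.0, 1.0, 8.0, 3.0, 9.0, 4.0]
travel_time = [40.0, 22.0, 55.0, 12.0, 35.0, 9.0, 30.0]
print(round(spearman(T, travel_time), 3))
```

Mapping this coefficient over every cell of the domain yields the sensitivity maps the abstract describes; being rank-based, it tolerates the non-linear, monotone relationships typical of flow-and-transport outputs.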

  2. Testing linearity against nonlinear moving average models

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.

    1998-01-01

    Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests is evaluated in a Monte Carlo study and compared

  3. Sensitivity and specificity of point-of-care rapid combination syphilis-HIV-HCV tests.

    Directory of Open Access Journals (Sweden)

    Kristen L Hess

    Full Text Available New rapid point-of-care (POC) tests are being developed that would offer the opportunity to increase screening and treatment of several infections, including syphilis. This study evaluated three of these new rapid POC tests at a site in Southern California. Participants were recruited from a testing center in Long Beach, California. A whole blood specimen was used to evaluate the performance of the Dual Path Platform (DPP) Syphilis Screen & Confirm, DPP HIV-Syphilis, and DPP HIV-HCV-Syphilis rapid tests. The gold-standard comparisons were Treponema pallidum passive particle agglutination (TPPA), rapid plasma reagin (RPR), HCV enzyme immunoassay (EIA), and HIV-1/2 EIA. A total of 948 whole blood specimens were analyzed in this study. The sensitivity of the HIV tests ranged from 95.7-100% and the specificity was 99.7-100%. The sensitivity and specificity of the HCV test were 91.8% and 99.3%, respectively. The treponemal-test sensitivity when compared to TPPA ranged from 44.0-52.7%, and specificity was 98.7-99.6%. The non-treponemal-test sensitivity and specificity when compared to RPR were 47.8% and 98.9%, respectively. The sensitivity of the Screen & Confirm test improved to 90.0% when cases that were both treponemal and non-treponemal positive were compared to TPPA+/RPR ≥1:8. The HIV and HCV components of the multi-infection tests showed good performance, but the treponemal and non-treponemal tests had low sensitivity. These results could be due to a low prevalence of active syphilis in the sample population, because the sensitivity improved when the gold standard was limited to those more likely to be active cases. Further evaluation of the new syphilis POC tests is required before implementation into testing programs.
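
    The sensitivities and specificities reported above follow the standard 2×2 confusion-table definitions against a gold standard. A minimal sketch with illustrative counts (not the study's raw data):

```python
# Standard diagnostic-test metrics from a 2x2 table; counts are invented
# for illustration, not taken from the study.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. a hypothetical rapid treponemal test scored against TPPA
sens, spec = sensitivity_specificity(tp=44, fn=40, tn=850, fp=4)
print(round(sens * 100, 1), round(spec * 100, 1))  # 52.4 99.5
```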

  4. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to identify the operative mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  5. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2017-01-01

    Product developers using life cycle toxicity characterization models to understand the potential impacts of chemical emissions face serious challenges related to large data demands and high input data uncertainty. This motivates greater focus on model sensitivity toward input parameter variability...... to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...

  6. Sensitivity analysis of the fission gas behavior model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  7. Sensitivity analysis of the age-structured malaria transmission model

    Science.gov (United States)

    Addawe, Joel M.; Lope, Jose Ernie C.

    2012-09-01

    We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two: preschool humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission. We compute the sensitivity indices of the reproductive number and the endemic equilibrium point with respect to the two sets of baseline parameters. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito. For the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportion of infectious pre-school humans and the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shorten the time to acquire immunity can be successful in preventing the spread of malaria.
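
    Sensitivity indices of a reproduction number are conventionally computed as normalized forward sensitivity indices, Υ_p = (∂R0/∂p)(p/R0). A sketch using central finite differences; the R0 formula and parameter values below are an illustrative Ross-Macdonald form, not the paper's age-structured model:

```python
# Hedged sketch: normalized forward sensitivity index of a reproduction
# number. R0 ~ b^2 * beta / (mu * gamma) is an illustrative stand-in.

def r0(params):
    b, beta, mu, gamma = params["b"], params["beta"], params["mu"], params["gamma"]
    return b ** 2 * beta / (mu * gamma)  # quadratic in the biting rate b

def sensitivity_index(params, name, h=1e-6):
    """Upsilon_p = (dR0/dp) * (p / R0), by central finite differences."""
    p = params[name]
    up = dict(params); up[name] = p * (1 + h)
    dn = dict(params); dn[name] = p * (1 - h)
    dR0_dp = (r0(up) - r0(dn)) / (2 * p * h)
    return dR0_dp * p / r0(params)

base = {"b": 0.3, "beta": 0.5, "mu": 0.1, "gamma": 0.05}
# The biting rate enters squared, so its index is ~2: the most influential
# parameter, in line with the abstract's conclusion about mosquito bites.
print(round(sensitivity_index(base, "b"), 3))      # 2.0
print(round(sensitivity_index(base, "gamma"), 3))  # -1.0
```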

  8. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model-comparison-based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described by the same model description language (MDL). Then, the requirements are transformed into a specification model and the programs into an implementation model. The elements and structures of the two models are compared, and the differences between them are obtained. Based on these differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usages of two classical MDLs in MCST, the equivalence class model and the extended finite state machine (EFSM) model, are described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.

  9. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    Science.gov (United States)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2015-04-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly increasing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, the sensitivity of model results to spatial input data is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction - SCF) for a distributed rainfall-runoff model to investigate how the model responds to SCF uncertainty in different zones of the model domain. The analysis was focussed on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.
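
    The LH-OAT idea combines Latin-Hypercube sampling of base points with One-factor-At-a-Time perturbations around each point. A simplified sketch of that combination; the toy model, ranges, and the relative-effect measure are illustrative assumptions, not the WetSpa setup:

```python
# Illustrative LH-OAT sketch: Latin-Hypercube base points in [0,1]^n,
# then a relative one-at-a-time perturbation of each parameter.
import random

def lh_oat(model, n_params, n_points, frac=0.05, seed=1):
    rng = random.Random(seed)
    # one stratum permutation per parameter -> Latin-Hypercube base points
    strata = [rng.sample(range(n_points), n_points) for _ in range(n_params)]
    effects = [0.0] * n_params
    for j in range(n_points):
        x = [(strata[i][j] + rng.random()) / n_points for i in range(n_params)]
        y0 = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] *= 1.0 + frac  # perturb one factor at a time
            effects[i] += abs((model(xp) - y0) / (y0 * frac)) / n_points
    return effects

# Toy response in which parameter 0 dominates
model = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.1 * x[2] + 1.0
S = lh_oat(model, n_params=3, n_points=20)
print(S.index(max(S)))  # 0
```

In the paper each "parameter" is the SCF value of one snow zone, so the resulting effects can be mapped back onto the catchment.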

  10. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  11. Physiologically based pharmacokinetic modeling of a homologous series of barbiturates in the rat: a sensitivity analysis.

    Science.gov (United States)

    Nestorov, I A; Aarons, L J; Rowland, M

    1997-08-01

    Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. 
The following general conclusions were drawn: (i) the overall
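
    The second (traditional) approach above defines the normalized sensitivity function S(t) = (∂C/∂p)(p/C). A sketch using finite differences, with a one-compartment i.v. bolus model standing in for the full PBPK model (an assumption for illustration):

```python
# Hedged sketch: normalized sensitivity of a tissue concentration to a
# parameter, estimated by central finite differences. One-compartment
# model C(t) = (D/V) * exp(-k*t) is an illustrative stand-in.
import math

def conc(t, dose, V, k):
    return dose / V * math.exp(-k * t)

def normalized_sensitivity(t, dose, V, k, h=1e-6):
    """S(t) = (dC/dk) * (k / C), the relative change in a state per
    relative change in a parameter."""
    dC_dk = (conc(t, dose, V, k * (1 + h)) - conc(t, dose, V, k * (1 - h))) / (2 * k * h)
    return dC_dk * k / conc(t, dose, V, k)

# For C = (D/V) e^{-kt}, the analytic index is -k*t
print(round(normalized_sensitivity(t=2.0, dose=100.0, V=10.0, k=0.5), 3))  # -1.0
```

The same finite-difference recipe applies state by state and parameter by parameter to rank tissues (auto-sensitivity) and cross-tissue effects (cross-sensitivity).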

  12. Effects of disclosing hypothetical genetic test results for salt sensitivity on salt restriction behavior

    Directory of Open Access Journals (Sweden)

    Takeshima T

    2013-05-01

    Full Text Available Taro Takeshima,1,2 Masanobu Okayama,1 Masanori Harada,3 Ryusuke Ae,4 Eiji Kajii1 1Division of Community and Family Medicine, Center for Community Medicine, Jichi Medical University, Tochigi, Japan; 2Department of Healthcare Epidemiology, Kyoto University Graduate School of Medicine and Public Health, Kyoto, Japan; 3Department for Support of Rural Medicine, Yamaguchi Grand Medical Center, Yamaguchi, Japan; 4Department of General Internal Medicine, Hamasaka Public Hospital, Mikata, Japan Background: A few studies have explored the effects of disclosure of genetic testing results on chronic disease predisposition. However, these effects remain unclear in cases of hypertension. Reducing salt intake is an important nonpharmacological intervention for hypertension. We investigated the effects of genetic testing for salt sensitivity on salt restriction behavior using hypothetical genetic testing results. Methods: We conducted a cross-sectional study using a self-completed questionnaire. We enrolled consecutive outpatients who visited primary care clinics and small hospitals between September and December 2009 in Japan. We recorded the patients’ baseline characteristics and data regarding their salt restriction behavior, defined as reducing salt intake before and after disclosure of hypothetical salt sensitivity genetic test results. Behavioral stage was assessed according to the five-stage transtheoretical model. After dividing subjects into salt restriction and no salt restriction groups, we compared their behavioral changes following positive and negative test results and analyzed the association between the respondents’ characteristics and their behavioral changes. Results: We analyzed 1562 participants with a mean age of 58 years. In the no salt restriction group, which included patients at the precontemplation, contemplation, and preparation stages, 58.7% stated that their behavioral stage progressed after a positive test result, although 29

  13. Improved sensitivity testing of explosives using transformed Up-Down methods

    Science.gov (United States)

    Brown, Geoffrey W.

    2014-05-01

    Sensitivity tests provide data that help establish guidelines for the safe handling of explosives. Any sensitivity test is based on assumptions to simplify the method or reduce the number of individual sample evaluations. Two common assumptions that are not typically checked after testing are 1) explosive response follows a normal distribution as a function of the applied stimulus levels and 2) the chosen test level spacing is close to the standard deviation of the explosive response function (for Bruceton Up-Down testing for example). These assumptions and other limitations of traditional explosive sensitivity testing can be addressed using Transformed Up-Down (TUD) test methods. TUD methods have been developed extensively for psychometric testing over the past 50 years and generally use multiple tests at a given level to determine how to adjust the applied stimulus. In the context of explosive sensitivity we can use TUD methods that concentrate testing around useful probability levels. Here, these methods are explained and compared to Bruceton Up-Down testing using computer simulation. The results show that the TUD methods are more useful for many cases but that they do require more tests as a consequence. For non-normal distributions, however, the TUD methods may be the only accurate assessment method.
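
    The Bruceton and Transformed Up-Down procedures compared above can be contrasted in a small simulation. A minimal sketch assuming a logistic go/no-go response; the response curve, step size, and seed are illustrative choices, not the paper's:

```python
# Simulation sketch: classic Bruceton Up-Down vs a 2-down/1-up Transformed
# Up-Down (TUD) staircase. The logistic response function is an assumption.
import math
import random

def respond(level, rng, x50=10.0, scale=1.0):
    """Go/no-go outcome drawn from an assumed logistic response curve."""
    p = 1.0 / (1.0 + math.exp(-(level - x50) / scale))
    return rng.random() < p

def bruceton(n_trials, start=5.0, step=1.0, seed=7):
    """Bruceton Up-Down: step down after a go, up after a no-go (~50% point)."""
    rng = random.Random(seed)
    level, levels = start, []
    for _ in range(n_trials):
        levels.append(level)
        level += -step if respond(level, rng) else step
    return levels

def tud_2down_1up(n_trials, start=5.0, step=1.0, seed=7):
    """TUD: step down only after two consecutive gos, concentrating
    testing near the ~70.7% response level."""
    rng = random.Random(seed)
    level, levels, gos = start, [], 0
    for _ in range(n_trials):
        levels.append(level)
        if respond(level, rng):
            gos += 1
            if gos == 2:
                level -= step
                gos = 0
        else:
            level += step
            gos = 0
    return levels

runs = bruceton(200)
est_50 = sum(runs[50:]) / len(runs[50:])  # drop burn-in, average levels
print(round(est_50, 2))  # should land near the 50% point at 10.0
```

Running both staircases shows the trade-off the abstract describes: the TUD rule concentrates trials around a chosen probability level but needs more tests to do so.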

  14. Parametric Sensitivity Tests—European Polymer Electrolyte Membrane Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests...

  15. Shape sensitivity analysis in numerical modelling of solidification

    Directory of Open Access Journals (Sweden)

    E. Majchrzak

    2007-12-01

    Full Text Available The methods of sensitivity analysis constitute a very effective tool at the stage of numerical modelling of casting solidification. Among other things, they make it possible to rebuild the basic numerical solution into solutions corresponding to disturbed values of the physical and geometrical parameters of the process. In this paper the problem of shape sensitivity analysis is discussed. The non-homogeneous casting-mould domain is considered and the perturbation of the solidification process due to changes of geometrical dimensions is analyzed. From the mathematical point of view the sensitivity model is rather complex, but its solution gives interesting information concerning the mutual connections between the kinetics of casting solidification and its basic dimensions. In the final part of the paper an example of computations is shown. At the stage of numerical realization the finite difference method has been applied.

  16. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension ki

  17. Propfan test assessment testbed aircraft flutter model test report

    Science.gov (United States)

    Jenness, C. M. J.

    1987-01-01

    The PropFan Test Assessment (PTA) program includes flight tests of a propfan power plant mounted on the left wing of a modified Gulfstream II testbed aircraft. A static balance boom is mounted on the right wing tip for lateral balance. Flutter analyses indicate that these installations reduce the wing flutter stabilizing speed and that torsional stiffening and the installation of a flutter stabilizing tip boom are required on the left wing for adequate flutter safety margins. Wind tunnel tests of a 1/9th scale high speed flutter model of the testbed aircraft were conducted. The test program included the design, fabrication, and testing of the flutter model and the correlation of the flutter test data with analysis results. Excellent correlations with the test data were achieved in posttest flutter analysis using actual model properties. It was concluded that the flutter analysis method used was capable of accurate flutter predictions for both the (symmetric) twin propfan configuration and the (unsymmetric) single propfan configuration. The flutter analysis also revealed that the differences between the tested model configurations and the current aircraft design caused the (scaled) model flutter speed to be significantly higher than that of the aircraft, at least for the single propfan configuration without a flutter boom. Verification of the aircraft final design should, therefore, be based on flutter predictions made with the test validated analysis methods.

  18. Parameter identification and global sensitivity analysis of Xinanjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng SONG

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters’ sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
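
    The first (screening) step ranks parameters by Morris elementary effects. A simplified one-step-per-parameter sketch, not the full trajectory design, on a toy model standing in for the Xinanjiang model:

```python
# Hedged sketch of Morris-style screening: mean absolute elementary
# effects (mu*) on an illustrative 3-parameter model.
import random

def morris_mu_star(model, n_params, n_traj=30, delta=0.2, seed=3):
    rng = random.Random(seed)
    mu_star = [0.0] * n_params
    for _ in range(n_traj):
        x = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        y0 = model(x)
        for i in range(n_params):  # one elementary effect per parameter
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(model(xp) - y0) / delta / n_traj
    return mu_star

# Toy model: parameter 0 dominates, with a weak 0-2 interaction
model = lambda x: 5.0 * x[0] + 0.5 * x[1] + x[0] * x[2]
mu = morris_mu_star(model, 3)
print(mu.index(max(mu)))  # 0
```

In the two-step framework, the parameters with the largest mu* would then be passed to the variance-based RSMSobol stage.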

  19. 75 FR 35712 - National Pollutant Discharge Elimination System (NPDES): Use of Sufficiently Sensitive Test...

    Science.gov (United States)

    2010-06-23

    ... sensitive'' analytical methods with respect to measurement of mercury and extend the approach outlined in...): Use of Sufficiently Sensitive Test Methods for Permit Applications and Reporting AGENCY: Environmental... methods can be used when completing an NPDES permit application and when performing sampling and...

  20. Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1

    Science.gov (United States)

    Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong

    2016-08-01

    Turbulent drag caused by subgrid orographic form drag has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 2004. doi: 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography compared to the simulation without subgrid orographic effect. It is shown that the TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed by the BBW04 scheme agrees better with the ERA-Interim reanalysis and is more sensitive to complex orography as a direct method. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.

  1. Integrating non-animal test information into an adaptive testing strategy - skin sensitization proof of concept case.

    Science.gov (United States)

    Jaworska, Joanna; Harol, Artsiom; Kern, Petra S; Gerberick, G Frank

    2011-01-01

    There is an urgent need to develop data integration and testing strategy frameworks allowing interpretation of results from animal alternative test batteries. To this end, we developed a Bayesian Network Integrated Testing Strategy (BN ITS) with the goal to estimate skin sensitization hazard as a test case of previously developed concepts (Jaworska et al., 2010). The BN ITS combines in silico, in chemico, and in vitro data related to skin penetration, peptide reactivity, and dendritic cell activation, and guides testing strategy by Value of Information (VoI). The approach offers novel insights into testing strategies: there is no one best testing strategy, but the optimal sequence of tests depends on information at hand, and is chemical-specific. Thus, a single generic set of tests as a replacement strategy is unlikely to be most effective. BN ITS offers the possibility of evaluating the impact of generating additional data on the target information uncertainty reduction before testing is commenced.
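
    At the core of a Bayesian Network ITS is Bayes' rule applied to assay results. A single-node sketch of updating the probability that a chemical is a sensitizer; the prior and the assay's sensitivity/specificity are hypothetical numbers, not the BN ITS network:

```python
# Hedged single-node sketch of the Bayesian update inside an ITS.
# prior, sens, and spec below are illustrative assumptions.

def posterior(prior, sens, spec, positive):
    """P(sensitizer | one assay result), by Bayes' rule."""
    if positive:
        num = sens * prior
        den = num + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = num + spec * (1 - prior)
    return num / den

p = posterior(prior=0.3, sens=0.8, spec=0.9, positive=True)
print(round(p, 3))  # 0.774
```

Value of Information then asks which next assay most reduces the remaining uncertainty in this posterior, which is why the optimal test sequence is chemical-specific.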

  2. An approach to measure parameter sensitivity in watershed hydrological modelling

    Science.gov (United States)

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the...

  3. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

    The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people. The global economy has become more complex than ever before. Mark Buchanan [12] indicated that agent-based computer models could help prevent another financial crisis, and his work has been particularly influential in contributing insights. This is why culture-sensitive agents in the financial market have become so important. The aim of this article is therefore to establish a culture-sensitive agent and forecast the process of change regarding herding behavior in the financial market. We based our study on Kirman's Ant Model [4,5] and Hofstede's National Culture [11] to establish our culture-sensitive agent-based model. Kirman's Ant Model is well known and describes herding behavior in financial markets arising from investors' expectations about the future. Hofstede's Culture's Consequences study surveyed IBM staff in 72 different countries to understand cultural differences. As a result, this paper focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, to create a culture-sensitive agent and predict the process of change regarding herding behavior in the financial market. To conclude, this study is of importance in explaining herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how the herding beliefs of people from different cultures relate to their financial market strategies.
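
    Kirman's recruitment dynamics can be sketched in a few lines: k of N agents hold opinion A, and switching combines spontaneous conversion (epsilon) with pairwise recruitment (delta). The transition probabilities and parameter values below are a standard textbook form with illustrative numbers; the paper's culture-sensitive extension would modulate them per agent:

```python
# Minimal sketch of Kirman's ant/herding model. Parameter values are
# illustrative; eps*(N-1)/delta < 1 puts the chain in the herding regime,
# with long spells near k = 0 or k = N.
import random

def kirman_step(k, N, eps, delta, rng):
    """One birth-death transition of the number k of A-adopters."""
    p_up = (1 - k / N) * (eps + delta * k / (N - 1))
    p_down = (k / N) * (eps + delta * (N - k) / (N - 1))
    u = rng.random()
    if u < p_up:
        return k + 1
    if u < p_up + p_down:
        return k - 1
    return k

rng = random.Random(0)
N, k = 100, 50
path = []
for _ in range(5000):
    k = kirman_step(k, N, eps=0.00005, delta=0.01, rng=rng)
    path.append(k)
print(0 <= min(path) and max(path) <= N)  # True
```

A culture-sensitive variant could, for instance, scale delta by an individualism-collectivism score so that collectivist agents recruit (herd) more strongly, though that mapping is the paper's contribution, not shown here.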

  4. A model for perception-based identification of sensitive skin

    NARCIS (Netherlands)

    Richters, R.J.H.; Uzunbajakava, N.E.; Hendriks, J.C.; Bikker, J.W.; Erp, P.E.J. van; Kerkhof, P.C.M. van de

    2017-01-01

    BACKGROUND: Given the high prevalence of sensitive skin (SS) and the lack of strong evidence on pathomechanisms, of consensus on associated symptoms, of proof of the existence of 'general' SS, and of tools to recruit subjects, this topic attracts increasing research attention. OBJECTIVE: To create a model for selecting

  5. Culturally Sensitive Dementia Caregiving Models and Clinical Practice

    Science.gov (United States)

    Daire, Andrew P.; Mitcham-Smith, Michelle

    2006-01-01

    Family caregiving for individuals with dementia is an increasingly complex issue that affects the caregivers' and care recipients' physical, mental, and emotional health. This article presents 3 key culturally sensitive caregiver models along with clinical interventions relevant for mental health counseling professionals.

  6. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus......, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging....

  7. Repeated patch testing to nickel during childhood does not induce nickel sensitization

    DEFF Research Database (Denmark)

    Søgaard Christiansen, Elisabeth

    2014-01-01

    Background: Previously, patch test reactivity to nickel sulphate in a cohort of unselected infants tested repeatedly at 3-72 months of age has been reported. A reproducible positive reaction at 12 and 18 months was selected as a sign of nickel sensitivity, provided a patch test with an empty Finn...... chamber was negative. The objective of this study is to follow up on infants with suspected nickel sensitivity. Methods: A total of 562 infants were included in the cohort and patch tested with nickel sulphate. The 26 children with a positive patch test to nickel sulphate at 12 and 18 months were offered...... repeated patch test to nickel sulphate at 3 (36 months), 6 (72 months) and 14 years of age. Results: At 3 years, 24 of 26 nickel sensitive children were retested and a positive reaction was seen in 7 children, a negative reaction in 16 and 1 child was excluded due to reaction to both nickel and the empty...

  8. A Conceptual Model for Water Sensitive City in Surabaya

    Science.gov (United States)

    Pamungkas, A.; Tucunan, K. P.; Navastara, A.; Idajati, H.; Pratomoatmojo, N. A.

    2017-08-01

    Frequently inundated areas, a low-quality water supply, and heavy dependence on external water sources are some of the key problems in Surabaya's water balance. Many aspects of urban development have contributed to these problems. To uncover the complexity of the water balance in Surabaya, a conceptual model for a water sensitive city is constructed to find the optimum solution. System dynamics modeling is used to assist and enrich the conceptual model. A secondary analysis of a wide range of data guides the construction of the conceptual model, and focus group discussions (FGDs) involving experts from multiple disciplines were used to finalize it. Based on these methods, the model has four main sub-models: flooding, land use change, water demand and water supply. The model consists of 35 key variables illustrating the water challenges facing Surabaya.

  9. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    Full Text Available BACKGROUND AND AIMS: Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. METHODS: Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. RESULTS: When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. CONCLUSIONS: Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  10. Testing of constitutive models in LAME.

    Energy Technology Data Exchange (ETDEWEB)

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    Constitutive models for computational solid mechanics codes are in LAME--the Library of Advanced Materials for Engineering. These models describe complex material behavior and are used in our finite deformation solid mechanics codes. To ensure the correct implementation of these models, regression tests have been created for constitutive models in LAME. A selection of these tests is documented here. Constitutive models are an important part of any solid mechanics code. If an analysis code is meant to provide accurate results, the constitutive models that describe the material behavior need to be implemented correctly. Ensuring the correct implementation of constitutive models is the goal of a testing procedure that is used with the Library of Advanced Materials for Engineering (LAME) (see [1] and [2]). A test suite for constitutive models can serve three purposes. First, the test problems provide the constitutive model developer a means to test the model implementation. This is an activity that is always done by any responsible constitutive model developer. Retaining the test problem in a repository where the problem can be run periodically is an excellent means of ensuring that the model continues to behave correctly. A second purpose of a test suite for constitutive models is that it gives application code developers confidence that the constitutive models work correctly. This is extremely important since any analyst that uses an application code for an engineering analysis will associate a constitutive model in LAME with the application code, not LAME. Therefore, ensuring the correct implementation of constitutive models is essential for application code teams. A third purpose of a constitutive model test suite is that it provides analysts with example problems that they can look at to understand the behavior of a specific model. Since the choice of a constitutive model, and the properties that are used in that model, have an enormous effect on the results of an
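
    A regression suite of the kind described can be reduced to a simple compare-against-baseline harness: run each stored case through the model and flag any output that drifts from its recorded value. The model, strain cases and baseline stresses below are hypothetical illustrations, not LAME's tests.

```python
def regression_check(model, cases, baselines, rtol=1e-6):
    """Run each stored strain case through a constitutive model and
    flag any stress that drifts from its recorded baseline."""
    failures = []
    for name, strain in cases.items():
        got = model(strain)
        ref = baselines[name]
        if abs(got - ref) > rtol * max(abs(ref), 1.0):
            failures.append(name)
    return failures

# Hypothetical 1-D linear-elastic "model" (E = 200 GPa) and baselines.
elastic = lambda strain: 200e9 * strain
cases = {"small": 1.0e-4, "moderate": 5.0e-3}
baselines = {"small": 2.0e7, "moderate": 1.0e9}
assert regression_check(elastic, cases, baselines) == []
# A silently changed implementation is caught on the next run:
broken = lambda strain: 210e9 * strain
assert regression_check(broken, cases, baselines) == ["small", "moderate"]
```

    Keeping the baselines in a repository and re-running the check periodically is the repository-based workflow the abstract describes.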

  11. Bioluminometric assay of ATP in mouse brain: Determinant factors for enhanced test sensitivity

    Indian Academy of Sciences (India)

    Haseeb Ahmad Khan

    2003-06-01

    Firefly luciferase bioluminescence (FLB) is a highly sensitive and specific method for the analysis of adenosine-5-triphosphate (ATP) in biological samples. Earlier attempts to modify the FLB test for enhanced sensitivity have typically been based on in vitro cell systems. This study reports an optimized FLB procedure for the analysis of ATP in small tissue samples. The results showed that the sensitivity of the FLB test can be enhanced severalfold by using an Ultra-Turrax homogenizer, perchloric acid extraction, and neutralization of the acid extract and its optimal dilution before performing the assay reaction.

  12. GEOCHEMICAL TESTING AND MODEL DEVELOPMENT - RESIDUAL TANK WASTE TEST PLAN

    Energy Technology Data Exchange (ETDEWEB)

    CANTRELL KJ; CONNELLY MP

    2010-03-09

    This Test Plan describes the testing and chemical analyses for release-rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  13. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    Full Text Available This paper describes a set of sensitivity experiments to several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography directly from the US Navy data and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling the model performance even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (asymmetric part of the geopotential at 500 mb where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields where the Gibbs removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between amplitude of the stationary waves and smoothness of the surface fields.
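
    The Gibbs problem the abstract alleviates can be reproduced with a one-dimensional toy: a spectrally truncated step "mountain" rings near the discontinuity, and a smoothing filter damps the ripples. The sketch below uses classic Lanczos sigma factors as a stand-in for the paper's (unspecified) filtering technique.

```python
import math

def mountain_profile(x, n_modes, sigma_filter=False):
    """Fourier partial sum of an idealized step 'mountain'
    (height 1 on (0, pi), 0 on (pi, 2 pi)). Truncation causes Gibbs
    ripples; optional Lanczos sigma factors damp them."""
    total = 0.5
    for k in range(1, n_modes + 1, 2):  # only odd harmonics survive
        coef = 2.0 / (math.pi * k)
        if sigma_filter:
            arg = math.pi * k / (n_modes + 1)
            coef *= math.sin(arg) / arg  # Lanczos sigma factor
        total += coef * math.sin(k * x)
    return total

def overshoot(n_modes, sigma_filter=False):
    """Maximum spurious amplitude above the true plateau height."""
    xs = (2 * math.pi * i / 2000 for i in range(2000))
    return max(mountain_profile(x, n_modes, sigma_filter) for x in xs) - 1.0
```

    The unfiltered truncation overshoots the plateau by roughly the classic 9%, while the sigma-filtered sum is markedly smoother, which is the qualitative effect the "Navy" and "Scripps" orographies exploit.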

  14. A new non-randomized model for analysing sensitive questions with binary outcomes.

    Science.gov (United States)

    Tian, Guo-Liang; Yu, Jun-Wu; Tang, Man-Lai; Geng, Zhi

    2007-10-15

    We propose a new non-randomized model for assessing the association of two sensitive questions with binary outcomes. Under the new model, respondents only need to answer a non-sensitive question instead of the original two sensitive questions. As a result, it can protect a respondent's privacy, avoid the use of any randomizing device, and be applied to both face-to-face interviews and mail questionnaires. We derive the constrained maximum likelihood estimates of the cell probabilities and the odds ratio for the two binary variables associated with the sensitive questions via the EM algorithm. The corresponding standard error estimates are then obtained by a bootstrap approach. A likelihood ratio test and a chi-squared test are developed for testing association between the two binary variables. We discuss the loss of information due to the introduction of the non-sensitive question, and the design of the co-operative parameters. Simulations are performed to evaluate the empirical type I error rates and powers of the two tests. In addition, a simulation is conducted to study the relationship between the probability of obtaining valid estimates and the sample size for any given cell probability vector. A real data set from an AIDS study is used to illustrate the proposed methodologies.
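
    The inference steps named in the abstract (odds ratio, Pearson chi-squared test, bootstrap standard errors) can be sketched for an ordinary 2 × 2 table; the EM step for the constrained non-randomized design is omitted, and the counts below are hypothetical.

```python
import math
import random

def odds_ratio(table):
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def chi2_stat(table):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def bootstrap_se_log_or(table, n_boot=2000, seed=1):
    """Resample respondents with replacement and recompute the log
    odds ratio; the spread of the replicates estimates its SE."""
    rng = random.Random(seed)
    cells = [(i, j) for i, row in enumerate(table)
             for j, count in enumerate(row) for _ in range(count)]
    n = len(cells)
    logs = []
    for _ in range(n_boot):
        boot = [[0, 0], [0, 0]]
        for _ in range(n):
            i, j = cells[rng.randrange(n)]
            boot[i][j] += 1
        try:
            logs.append(math.log(odds_ratio(boot)))
        except (ValueError, ZeroDivisionError):
            continue  # degenerate resample with an empty cell
    m = sum(logs) / len(logs)
    return math.sqrt(sum((v - m) ** 2 for v in logs) / (len(logs) - 1))

table = [[30, 10], [15, 45]]  # hypothetical 2x2 counts
```

    For these counts the bootstrap SE of the log odds ratio lands close to the delta-method value sqrt(1/a + 1/b + 1/c + 1/d), a quick sanity check on the resampling.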

  15. Spatial sensitivity analysis of snow cover data in a distributed rainfall–runoff model

    Directory of Open Access Journals (Sweden)

    T. Berezowski

    2014-10-01

    Full Text Available As the availability of spatially distributed data sets for distributed rainfall–runoff modelling grows rapidly, more attention should be paid to the influence of data quality on the calibration. While much progress has been made in using distributed data in hydrological model simulations, the sensitivity of model results to the spatial input data is not well understood. In this paper we develop a spatial sensitivity analysis (SA) method for snow cover fraction (SCF) input data to a distributed rainfall–runoff model, to investigate whether the model responds differently to SCF uncertainty in different zones of the model domain. The analysis focuses on the relation between SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall–runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate two years of daily runoff. The SA uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm with a different response function for each 4 km × 4 km snow zone. The results show that the spatial patterns of sensitivity can be readily interpreted through the co-occurrence of environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for the spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model.
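
    A minimal LH-OAT sketch, assuming a generic scalar model of factors scaled to [0, 1): base points come from a Latin hypercube, and each factor is then perturbed one at a time around every base point. The WetSpa/SCF specifics are not reproduced; the response below is a hypothetical stand-in.

```python
import random

def latin_hypercube(n, dim, rng):
    """n stratified sample points in [0, 1)^dim."""
    cols = []
    for _ in range(dim):
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append(strata)
    return [[c[i] for c in cols] for i in range(n)]

def lh_oat(model, n, dim, frac=0.05, seed=3):
    """LH-OAT screening: around each Latin-hypercube base point,
    perturb one factor at a time by `frac` and average the
    normalized partial effects per factor."""
    rng = random.Random(seed)
    effects = [0.0] * dim
    for x in latin_hypercube(n, dim, rng):
        y0 = model(x)
        for j in range(dim):
            xp = list(x)
            xp[j] *= 1.0 + frac
            # relative effect of factor j at this base point
            effects[j] += abs((model(xp) - y0) / (y0 if y0 else 1.0)) / frac
    return [e / n for e in effects]

# Hypothetical response in which factor 0 dominates factor 1
effects = lh_oat(lambda x: 5.0 * x[0] + x[1], n=20, dim=2)
```

    Running the same screening with a different response function per zone, as the paper does, is what turns this global ranking into a spatial sensitivity map.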

  16. Stochastic sensitivity of a bistable energy model for visual perception

    Science.gov (United States)

    Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensity and system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
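
    The bistable energy model can be caricatured by the standard double-well SDE dx = (x - x^3) dt + sqrt(2D) dW: counting noise-induced hops between the wells mirrors the percept transitions discussed above. This is a generic sketch, not the authors' stochastic sensitivity function machinery.

```python
import math
import random

def count_transitions(noise, T=500.0, dt=0.01, seed=42):
    """Euler-Maruyama integration of dx = (x - x^3) dt + sqrt(2*noise) dW
    on the double-well potential V(x) = x^4/4 - x^2/2, counting hops
    between the wells at x = -1 and x = +1 (the two 'percepts')."""
    rng = random.Random(seed)
    x, well, hops = 1.0, 1, 0
    amp = math.sqrt(2.0 * noise * dt)
    for _ in range(int(T / dt)):
        x += (x - x ** 3) * dt + amp * rng.gauss(0.0, 1.0)
        # hysteresis thresholds avoid double-counting boundary jitter
        if well == 1 and x < -0.5:
            well, hops = -1, hops + 1
        elif well == -1 and x > 0.5:
            well, hops = 1, hops + 1
    return hops
```

    Raising the noise intensity sharply increases the hop rate (Kramers escape), the same qualitative behaviour as the noise-induced percept switching analyzed in the paper.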

  17. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  18. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  19. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
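
    The residual-sampling scheme of the abstract can be sketched as additive perturbation of a model chain: each stage's output is offset by a residual drawn from that stage's empirical pool. The two-stage chain and residual pools below are hypothetical, not the PV models themselves.

```python
import random

def propagate(x0, models, residual_pools, n_samples=1000, seed=0):
    """Monte-Carlo propagation: push x0 through the model chain,
    perturbing each stage's output with a residual drawn from that
    stage's empirical residual pool."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        y = x0
        for model, pool in zip(models, residual_pools):
            y = model(y) + rng.choice(pool)
        outputs.append(y)
    return outputs

# Hypothetical two-stage chain (e.g. irradiance -> effective irradiance)
models = [lambda x: 2.0 * x, lambda x: x + 1.0]
pools = [[-0.1, 0.0, 0.1], [-0.2, 0.0, 0.2]]  # per-stage residuals
out = propagate(1.0, models, pools)
```

    The empirical distribution of `out` is the propagated output uncertainty; comparing the spread contributed by each stage's pool is the sensitivity step the paper describes.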

  20. Effect of test duration and feeding on relative sensitivity of genetically distinct clades of Hyalella azteca.

    Science.gov (United States)

    Soucek, David J; Dickinson, Amy; Major, Kaley M; McEwen, Abigail R

    2013-11-01

    The amphipod Hyalella azteca is widely used in ecotoxicology laboratories for the assessment of chemical risks to aquatic environments, and it is a cryptic species complex with a number of genetically distinct strains found in wild populations. While it would be valuable to note differences in contaminant sensitivity among different strains collected from various field sites, those findings would be influenced by acclimation of the populations to local conditions. In addition, potential differences in metabolism or lipid storage among different strains may confound assessment of sensitivity in unfed acute toxicity tests. In the present study, our aim was to assess whether there are genetic differences in contaminant sensitivity among three cryptic provisional species of H. azteca. Therefore, we used organisms cultured under the same conditions, assessed their ability to survive for extended periods without food, and conducted fed and unfed acute toxicity tests with two anions (nitrate and chloride) whose toxicities are not expected to be altered by the addition of food. We found that the three genetically distinct clades of H. azteca had substantially different responses to starvation, and the presence/absence of food during acute toxicity tests had a strong role in determining the relative sensitivity of the three clades. In fed tests, where starvation was no longer a potential stressor, significant differences in sensitivity were still observed among the three clades. In light of these differences in sensitivity, we suggest that ecotoxicology laboratories consider using a provisional species in toxicity tests that is a regionally appropriate surrogate.

  1. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  2. Estimation of the relative sensitivity of the comparative tuberculin skin test in tuberculous cattle herds subjected to depopulation.

    Directory of Open Access Journals (Sweden)

    Katerina Karolemeas

    Full Text Available Bovine tuberculosis (bTB is one of the most serious economic animal health problems affecting the cattle industry in Great Britain (GB, with incidence in cattle herds increasing since the mid-1980s. The single intradermal comparative cervical tuberculin (SICCT test is the primary screening test in the bTB surveillance and control programme in GB and Ireland. The sensitivity (ability to detect infected cattle of this test is central to the efficacy of the current testing regime, but most previous studies that have estimated test sensitivity (relative to the number of slaughtered cattle with visible lesions [VL] and/or positive culture results lacked post-mortem data for SICCT test-negative cattle. The slaughter of entire herds ("whole herd slaughters" or "depopulations" that are infected by bTB are occasionally conducted in GB as a last-resort control measure to resolve intractable bTB herd breakdowns. These provide additional post-mortem data for SICCT test-negative cattle, allowing a rare opportunity to calculate the animal-level sensitivity of the test relative to the total number of SICCT test-positive and negative VL animals identified post-mortem (rSe. In this study, data were analysed from 16 whole herd slaughters (748 SICCT test-positive and 1031 SICCT test-negative cattle conducted in GB between 1988 and 2010, using a bayesian hierarchical model. The overall rSe estimate of the SICCT test at the severe interpretation was 85% (95% credible interval [CI]: 78-91%, and at standard interpretation was 81% (95% CI: 70-89%. These estimates are more robust than those previously reported in GB due to inclusion of post-mortem data from SICCT test-negative cattle.
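
    Setting aside the paper's hierarchical structure, the core estimate (rSe as the proportion of visible-lesion animals that tested SICCT-positive) can be sketched with a flat-prior beta posterior sampled via the standard library. The counts used here are hypothetical, not the study's data.

```python
import random

def rse_posterior(vl_test_pos, vl_test_neg, n_draws=20000, seed=7):
    """Posterior for rSe = P(SICCT-positive | visible lesions) under a
    Beta(1, 1) prior and binomial likelihood, summarized by the mean
    and a 95% credible interval from Monte Carlo draws."""
    rng = random.Random(seed)
    a = 1 + vl_test_pos   # test-positive VL animals
    b = 1 + vl_test_neg   # test-negative VL animals
    draws = sorted(rng.betavariate(a, b) for _ in range(n_draws))
    mean = sum(draws) / n_draws
    ci = (draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws)])
    return mean, ci

# Hypothetical counts: 85 of 100 VL animals were SICCT-positive
mean, (lo, hi) = rse_posterior(85, 15)
```

    The whole-herd slaughters matter precisely because they supply the `vl_test_neg` count, which routine surveillance (where test-negative animals are not slaughtered) cannot observe.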

  3. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of a mathematical model's predictions is a prerequisite to its utilization. A multiphase porous-media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with COMSOL software. Parameter sensitivity is analysed by changing parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the studied microwave power level shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have a significant effect on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.

  4. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of a mathematical model's predictions is a prerequisite to its utilization. A multiphase porous-media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with COMSOL software. Parameter sensitivity is analysed by changing parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the studied microwave power level shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have a significant effect on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
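
    The ±20% one-factor-at-a-time screening described above can be sketched generically; the toy drying-rate expression and parameter names below are hypothetical, not the COMSOL model.

```python
def oat_sensitivity(model, params, frac=0.2):
    """One-factor-at-a-time screening: vary each parameter by +/-frac
    and record the largest relative change in model output."""
    y0 = model(params)
    effects = {}
    for name in params:
        rel = []
        for sign in (+1, -1):
            p = dict(params)
            p[name] *= 1 + sign * frac
            rel.append(abs(model(p) - y0) / abs(y0))
        effects[name] = max(rel)
    return effects

# Hypothetical toy drying-rate expression (NOT the paper's model):
# rate = k_evap * T_amb + 0.01 * h_mass
toy = lambda p: p["k_evap"] * p["T_amb"] + 0.01 * p["h_mass"]
effects = oat_sensitivity(toy, {"k_evap": 2.0, "T_amb": 50.0, "h_mass": 1.0})
```

    Parameters that barely move the output under ±20%, like `h_mass` here, are the analogues of the abstract's insensitive transfer coefficients; escalating the perturbation (e.g. 10-fold) is how the paper exposes the evaporation rate constant's latent influence.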

  5. Climate Sensitivity and Solar Cycle Response in Climate Models

    Science.gov (United States)

    Liang, M.; Lin, L.; Tung, K. K.; Yung, Y. L.

    2011-12-01

    Climate sensitivity, broadly defined, is a measure of the response of the climate system to the changes of external forcings such as anthropogenic greenhouse emissions and solar radiation, including climate feedback processes. General circulation models provide a means to quantitatively incorporate various feedback processes, such as water-vapor, cloud and albedo feedbacks. Less attention is devoted so far to the role of the oceans in significantly affecting these processes and hence the modelled transient climate sensitivity. Here we show that the oceanic mixing plays an important role in modifying the multi-decadal to centennial oscillations of the sea surface temperature, which in turn affect the derived climate sensitivity at various phases of the oscillations. The eleven-year solar cycle forcing is used to calibrate the response of the climate system. The GISS-EH coupled atmosphere-ocean model was run twice in coupled mode for more than 2000 model years, each with a different value for the ocean eddy mixing parameter. In both runs, there is a prominent low-frequency oscillation with a period of 300-500 years, and depending on the phase of such an oscillation, the derived climate gain factor varies by a factor of 2. The run with the value of the eddy ocean mixing parameter that is half that used in IPCC AR4 study has the more realistic low-frequency variability in SST and in the derived response to the known solar-cycle forcing.

  6. Sensitivity Analysis in a Complex Marine Ecological Model

    Directory of Open Access Journals (Sweden)

    Marcos D. Mateus

    2015-05-01

    Full Text Available Sensitivity analysis (SA) has long been recognized as part of best practices to assess whether any particular model can be suitable to inform decisions, despite its uncertainties. SA is a commonly used approach for identifying important parameters that dominate model behavior. As such, SA addresses two elementary questions in the modeling exercise, namely, how sensitive the model is to changes in individual parameter values, and which parameters or associated processes have more influence on the results. In this paper we report on a local SA performed on a complex marine biogeochemical model that simulates oxygen, organic matter and nutrient cycles (N, P and Si) in the water column, as well as the dynamics of biological groups such as producers, consumers and decomposers. SA was performed using a "one at a time" parameter perturbation method, and a color-code matrix was developed for result visualization. The outcome of this study was the identification of key parameters influencing model performance, a particularly helpful insight for the subsequent calibration exercise. Also, the color-code matrix methodology proved effective for a clear identification of the parameters with most impact on selected variables of the model.

  7. Is p-tert-butylphenol-formaldehyde resin (PTBP-FR) in TRUE Test® (Mekos test) sensitizing the tested patients?

    Science.gov (United States)

    Stenberg, Berndt; Bruze, Magnus; Zimerson, Erik

    2015-12-01

    In a population study using TRUE Test®, we noted late reactions to p-tert-butylphenol-formaldehyde resin (PTBP-FR) in 0.5% of subjects tested. In order to explore possible test sensitization, differences in the contents of sensitizers within PTBP-FR in test preparations for TRUE Test® and Finn Chambers® were analysed. Subjects allergic to PTBP-FR and subjects with late reactions to PTBP-FR were retested in order to explore whether these groups reacted to different PTBP-FR sensitizers. Four individuals with late reactions and 5 subjects with established allergy to PTBP-FR were retested with defined PTBP-FR sensitizers. PTBP-FR constituents in patches from TRUE Test® were analysed with high-performance liquid chromatography. Previously analysed samples of PTBP-FR constituents served as a reference. The pattern of reaction to PTBP-FR sensitizers was similar in both groups. Subjects with suspected sensitization had somewhat stronger reactions than controls. The concentrations of monomers, dimers and trimers were generally higher in the TRUE Test® resin than in reference substances. Retesting did not add information regarding causes of possible sensitization. Analysis showed that the resin used in TRUE Test® has a lower degree of polymerization or condensation, which may enhance its sensitizing properties. A follow-up of late reactions to PTBP-FR in TRUE Test® should be carried out. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) the excess infiltration process is most influential when focusing on the lower flows; (4) a simpler routing component is advisable; and (5) baseflow parameters generally have low sensitivity values, except for the low flow criteria.
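
    Translating OAT from parameters to structure amounts to swapping one model component at a time and re-simulating. The components, alternatives and assembly function below are hypothetical stand-ins for the lumped-model building blocks discussed above.

```python
def structure_sensitivity(components, alternatives, assemble, inputs):
    """OAT over model structure: swap one component for its alternative,
    keep the rest fixed, and report the mean absolute output change."""
    base_model = assemble(components)
    base = [base_model(x) for x in inputs]
    effects = {}
    for name, alt in alternatives.items():
        variant = dict(components)
        variant[name] = alt
        model = assemble(variant)
        out = [model(x) for x in inputs]
        effects[name] = sum(abs(a - b) for a, b in zip(out, base)) / len(inputs)
    return effects

# Hypothetical two-component lumped model: storage then routing
components = {"storage": lambda s: s ** 0.5, "routing": lambda q: 0.8 * q}
alternatives = {"storage": lambda s: s,          # linear storage variant
                "routing": lambda q: 0.8 * q}    # identical routing variant
assemble = lambda c: (lambda x: c["routing"](c["storage"](x)))
effects = structure_sensitivity(components, alternatives,
                                assemble, [1.0, 4.0, 9.0])
```

    Evaluating `effects` against several objective functions (peak flows, low flows) instead of a single output is what lets the paper attribute each structural choice to a specific modeling objective.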

  9. Sensitive KIT D816V mutation analysis of blood as a diagnostic test in mastocytosis

    DEFF Research Database (Denmark)

    Kielsgaard Kristensen, Thomas; Vestergaard, Hanne; Bindslev-Jensen, Carsten;

    2014-01-01

    The recent progress in sensitive KIT D816V mutation analysis suggests that mutation analysis of peripheral blood (PB) represents a promising diagnostic test in mastocytosis. However, there is a need for systematic assessment of the analytical sensitivity and specificity of the approach in order...... the mutation in PB in nearly all adult mastocytosis patients. The mutation was detected in PB in 78 of 83 systemic mastocytosis (94%) and 3 of 4 cutaneous mastocytosis patients (75%). The test was 100% specific as determined by analysis of clinically relevant control patients who all tested negative. Mutation...

  10. Used Fuel Testing Transportation Model

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Steven B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Best, Ralph E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Maheras, Steven J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jensen, Philip J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); England, Jeffery L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); LeDuc, Dan [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-09-25

    This report identifies shipping packages/casks that might be used by the Used Nuclear Fuel Disposition Campaign Program (UFDC) to ship fuel rods and pieces of fuel rods taken from high-burnup used nuclear fuel (UNF) assemblies to and between research facilities for purposes of evaluation and testing. Also identified are the actions that would need to be taken, if any, to obtain U.S. Nuclear Regulatory Commission (NRC) or other regulatory authority approval to use each of the packages and/or shipping casks for this purpose.

  12. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  13. 12th Rencontres du Vietnam : High Sensitivity Experiments Beyond the Standard Model

    CERN Document Server

    2016-01-01

    The goal of this workshop is to gather researchers, theoreticians, experimentalists and young scientists searching for physics beyond the Standard Model of particle physics using high sensitivity experiments. The Standard Model has been very successful in describing the particle physics world; the discovery of the Brout-Englert-Higgs boson is its most recent major success. Complementary to the high energy frontier explored at colliders, real opportunities for discovery exist at the precision frontier, testing fundamental symmetries and tracking small deviations from the SM.

  14. Colour Reconnection - Models and Tests

    CERN Document Server

    Christiansen, Jesper R

    2015-01-01

    Recent progress on colour reconnection within the Pythia framework is presented. A new model is introduced, based on the SU(3) structure of QCD and a minimization of the potential string energy. The inclusion of the epsilon structure of SU(3) gives a new baryon production mechanism and makes it possible simultaneously to describe hyperon production at both $e^+e^-$ and pp colliders. Finally, predictions for $e^+e^-$ colliders, both past and potential future ones, are presented.

  15. Estimation of sensitivity and specificity of five serological tests for the diagnosis of porcine brucellosis.

    Science.gov (United States)

    Praud, Anne; Gimenez, Olivier; Zanella, Gina; Dufour, Barbara; Pozzi, Nathalie; Antras, Valérie; Meyer, Laurence; Garin-Bastuji, Bruno

    2012-04-01

    While serological tests are essential in surveillance and control programs of animal diseases, to date none of the common serological tests approved in the EU (complement fixation test or Rose-Bengal test) has been shown to be reliable in routine individual diagnosis of porcine brucellosis, and some more recent tests like ELISA have not been fully evaluated yet. In the absence of a gold standard, this study allowed the estimation of sensitivities and specificities of these tests with a Bayesian approach using Markov Chain Monte Carlo algorithms. The pig population that was tested included 6422 animals from Metropolitan France. Serum samples were collected from a large population of pigs, representative of the European swine population, and tested with five brucellosis serological tests: Rose-Bengal test (RBT), fluorescence polarization assay (FPA), indirect ELISA (I-ELISA) and two competitive ELISAs (C-ELISA). The sensitivity and the specificity of each test were estimated. When doubtful results were excluded, the most sensitive and specific test was C-ELISA(2) (Se C-ELISA(2)=0.964, [0.907; 0.994], 95% credibility interval (CrI); Sp C-ELISA(2)=0.996, [0.982; 1.0], 95% CrI). When doubtful results were considered as negative, C-ELISA(2) was still the most sensitive and specific test (Se C-ELISA(2)=0.960, [0.896; 0.994], 95% CrI and Sp C-ELISA(2)=0.994, [0.977; 0.999], 95% CrI). The same conclusions were reached when doubtful results were considered as positive (Se C-ELISA(2)=0.963, [0.904, 0.994], 95% CrI and Sp C-ELISA(2)=0.996, [0.986; 1.0], 95% CrI).
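    As a minimal illustration of the Bayesian ingredient: if the true disease status of each animal were known, the posterior for a test's sensitivity under a uniform Beta(1, 1) prior would be available in closed form. The study itself must fit a latent-class model by MCMC precisely because no gold standard exists; the counts below are hypothetical.

```python
# Conjugate Beta-Binomial sketch (hypothetical counts): with known disease
# status, a Beta(a, b) prior on sensitivity updates to
# Beta(a + true_positives, b + false_negatives).

def beta_posterior_mean(successes, failures, a=1.0, b=1.0):
    """Posterior mean of a binomial proportion under a Beta(a, b) prior."""
    return (a + successes) / (a + b + successes + failures)

true_positives, false_negatives = 96, 4  # hypothetical gold-standard counts
se = beta_posterior_mean(true_positives, false_negatives)
print(f"posterior mean sensitivity = {se:.3f}")
```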

  16. Model Based Testing for Agent Systems

    Science.gov (United States)

    Zhang, Zhiyong; Thangarajah, John; Padgham, Lin

    Although agent technology is gaining worldwide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent based systems. The testing framework is a model based approach using the design models of the Prometheus agent development methodology. In this paper we focus on model based unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus the approach is suitable for any plan and event based agent system.

  17. Sensitivity and specificity of rapid influenza testing of children in a community setting

    Science.gov (United States)

    Stebbins, Samuel; Stark, James H.; Prasad, Ramakrishna; Thompson, William W.; Mitruka, Kiren; Rinaldo, Charles; Vukotich, Charles J.; Cummings, Derek A. T.

    2010-01-01

    Please cite this paper as: Stebbins et al. (2011) Sensitivity and specificity of rapid influenza testing of children in a community setting. Influenza and Other Respiratory Viruses 5(2), 104–109. Introduction  Rapid influenza testing (RFT) allows for a rapid point‐of‐care diagnosis of influenza. The Quidel QuickVue® Influenza A+B test (QuickVue) has a reported manufacturer’s sensitivity and specificity of 73% and 96%, respectively, with nasal swabs. However, investigators have shown sensitivities ranging from 22% to 77% in community settings. Methods  The QuickVue rapid influenza test was evaluated in a population of elementary (K‐5) school children, using testing in the home, as part of the Pittsburgh Influenza Prevention Project during the 2007–2008 influenza season. The QuickVue test was performed with nasal swab in full accordance with package instructions and compared with the results of nasal swab semi‐quantitative RT‐PCR. Results  Sensitivity of the QuickVue was found to be 27% in this sample. There was no statistically valid correlation between the semi‐quantitative PCR result and the QuickVue result. Conclusions  This study is consistent with the low sensitivity of the QuickVue test also reported by others. Viral load, technique, and the use of nasal swabs were examined as contributing factors but were not found to be explanations for this result. Community testing includes patients who are on the lower spectrum of illness which would not be the case in hospital or clinic samples. This suggests that RFT is less sensitive for patients at the lower spectrum of illness, with less severe disease. PMID:21306573
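    The headline figures follow from the standard 2x2 definitions. The sketch below shows the computation with hypothetical counts chosen to reproduce a 27% sensitivity against the PCR reference; the study's actual cell counts are not reproduced here.

```python
# Sensitivity/specificity from a 2x2 table (hypothetical counts):
# rows = rapid-test result, columns = PCR reference result.

def sensitivity(tp, fn):
    """Fraction of reference-positive cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of reference-negative cases the test clears."""
    return tn / (tn + fp)

tp, fn = 27, 73  # PCR-positive children: rapid test positive vs negative
tn, fp = 96, 4   # PCR-negative children: rapid test negative vs positive
print(f"sensitivity = {sensitivity(tp, fn):.0%}, "
      f"specificity = {specificity(tn, fp):.0%}")
```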

  18. TESTING MONETARY EXCHANGE RATE MODELS WITH PANEL COINTEGRATION TESTS

    Directory of Open Access Journals (Sweden)

    Szabo Andrea

    2015-07-01

    The monetary exchange rate models explain the long-run behaviour of the nominal exchange rate. Their central assertion is that there is a long-run equilibrium relationship between the nominal exchange rate and monetary macro-fundamentals. Although these models are essential tools of international macroeconomics, their empirical validity is ambiguous. Previously, time series testing was prevalent in the literature, but it did not bring convincing results: the power of the unit root and cointegration tests is too low to reject the null hypothesis of no cointegration between the variables. This power can be enhanced by arranging the data in a panel data set, which allows several time series to be analysed simultaneously and increases the number of observations. We conducted a weak empirical test of the monetary exchange rate models by testing for cointegration between the variables in three panels. We investigated 6, 10 and 15 OECD countries during the periods 1976Q1-2011Q4, 1985Q1-2011Q4 and 1996Q1-2011Q4. We tested the reduced form of the monetary exchange rate models in three specifications: two restricted models and an unrestricted model. Since cointegration can only be interpreted among non-stationary processes, we investigated the order of integration of the variables with the IPS, Fisher-ADF and Fisher-PP panel unit root tests and the Hadri panel stationarity test. All the variables can be unit root processes; therefore, we analysed cointegration with the Pedroni and Kao panel cointegration tests. The restricted models performed better than the unrestricted one, and we obtained the best results with the 1985Q1-2011Q4 panel. The Kao test rejects the null hypothesis of no cointegration between the variables in all specifications and all panels, but the Pedroni test does not show such a positive picture. Hence we found only moderate support for the monetary exchange rate models.

  19. Long-term repeatability of the skin prick test is high when supported by history or allergen-sensitivity tests

    DEFF Research Database (Denmark)

    Bødtger, Uffe; Jacobsen, C R; Poulsen, L K;

    2003-01-01

    BACKGROUND: Long-term reproducibility of the skin-prick test (SPT) has been questioned. The aim of the study was to investigate the clinical relevance of SPT changes. METHODS: SPT to 10 common inhalation allergens was performed annually from 1999 to 2001 in 25 nonsensitized and 21 sensitized...

  20. Comparison of sensitivity of QuantiFERON-TB gold test and tuberculin skin test in active pulmonary tuberculosis.

    Science.gov (United States)

    Khalil, Kanwal Fatima; Ambreen, Asma; Butt, Tariq

    2013-09-01

    To compare the sensitivity of tuberculin skin test (TST) and QuantiFERON-TB gold test (QFT-G) in active pulmonary tuberculosis. Analytical study. Department of Pulmonology, Fauji Foundation Hospital, Rawalpindi, from July 2011 to January 2012. QuantiFERON-TB gold test (QFT-G) was evaluated and compared with tuberculin skin test (TST) in 50 cases of active pulmonary tuberculosis, in whom tuberculous infection was suspected on clinical, radiological and microbiological grounds. Sensitivity was determined against positive growth for Mycobacterium tuberculosis. Out of 50 cases, 43 were females and 7 were males. The mean age was 41.84 ± 19.03 years. Sensitivity of QFT-G was 80% while that of TST was 28%. QFT-G has much higher sensitivity than TST for active pulmonary tuberculosis. It is unaffected by prior BCG administration and prior exposure to atypical mycobacteria. A positive QFT-G result can be an adjunct to diagnosis in patients having clinical and radiological data compatible with pulmonary tuberculosis.

  1. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  2. Modelling survival: exposure pattern, species sensitivity and uncertainty

    NARCIS (Netherlands)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; Brink, Van Den Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.

    2016-01-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability

  3. Central sensitization phenomena after third molar surgery: A quantitative sensory testing study

    DEFF Research Database (Denmark)

    Juhl, Gitte Irene; Jensen, Troels Staehelin; Nørholt, Svend Erik;

    2008-01-01

    BACKGROUND: Surgical removal of third molars may carry a risk of developing persistent orofacial pain, and central sensitization appears to play an important role in the transition from acute to chronic pain. AIM: The aim of this study was to investigate sensitization (primarily central sensitization) after orofacial trauma using quantitative sensory testing (QST). METHODS: A total of 32 healthy men (16 patients and 16 age-matched control subjects) underwent a battery of quantitative tests adapted to the trigeminal area at baseline and 2, 7, and 30 days following surgical removal of a lower impacted third molar. RESULTS: Central sensitization for at least one week was indicated by significantly increased pain intensity evoked by intraoral repetitive pinprick and electrical stimulation (p...

  4. The sensitivity of acoustic cough recording relative to intraesophageal pressure recording and patient report during reflux testing.

    Science.gov (United States)

    Rosen, R; Amirault, J; Heinz, N; Litman, H; Khatwa, U

    2014-11-01

    One of the primary indications for reflux testing with multichannel intraluminal impedance with pH (pH-MII) is to correlate reflux events with symptoms such as cough. Adult and pediatric studies have shown, using cough as a model, that patient report of symptoms is inaccurate. Unfortunately, intraesophageal pressure recording (IEPR) to record coughs is more invasive, which limits its utility in children. The primary aim of this study was to validate the use of acoustic cough recording (ACR) during pH-MII testing. We recruited children undergoing pH-MII testing for the evaluation of cough. We simultaneously placed IEPR and pH-MII catheters and an ACR device in each patient. Each 24 h ACR, pH-MII, and IEPR tracing was scored by blinded investigators. Sensitivities for each method of symptom recording were calculated. A total of 2698 coughs were detected; 1140 were patient-reported (PR), 2425 were IEPR-detected, and 2400 were ACR-detected. The sensitivity of PR relative to ACR was 45.9% and the sensitivity of IEPR relative to ACR was 93.6%. There was strong inter-rater reliability (κ = 0.78) for the identification of cough by ACR. Acoustic recording is a non-invasive, sensitive method of recording cough during pH-MII testing that is well suited for the pediatric population. © 2014 John Wiley & Sons Ltd.

  5. An integrated electrochemical device based on immunochromatographic test strip and enzyme labels for sensitive detection of disease-related biomarkers

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Zhexiang; Wang, Jun; Wang, Hua; Li, Yao Q.; Lin, Yuehe

    2012-05-30

    A novel electrochemical biosensing device that integrates an immunochromatographic test strip and a screen-printed electrode (SPE) connected to a portable electrochemical analyzer was presented for rapid, sensitive, and quantitative detection of disease-related biomarker in human blood samples. The principle of the sensor is based on sandwich immunoreactions between a biomarker and a pair of its antibodies on the test strip, followed by highly sensitive square-wave voltammetry (SWV) detection. Horseradish peroxidase (HRP) was used as a signal reporter for electrochemical readout. Hepatitis B surface antigen (HBsAg) was employed as a model protein biomarker to demonstrate the analytical performance of the sensor in this study. Some critical parameters governing the performance of the sensor were investigated in detail. The sensor was further utilized to detect HBsAg in human plasma with an average recovery of 91.3%. In comparison, a colorimetric immunochromatographic test strip assay (ITSA) was also conducted. The result shows that the SWV detection in the electrochemical sensor is much more sensitive for the quantitative determination of HBsAg than the colorimetric detection, indicating that such a sensor is a promising platform for rapid and sensitive point-of-care testing/screening of disease-related biomarkers in a large population.

  6. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
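    The core DELSA idea, derivative-based local sensitivities evaluated at many points across the parameter space, can be sketched for a toy two-parameter model. The normalisation below (squared local gradient times an assumed prior variance, rescaled to sum to one) is a simplified stand-in for the method's exact formulation, and the model and all parameter choices are hypothetical.

```python
import random

def reservoir_output(k, exponent):
    """Toy non-linear reservoir: outflow for a fixed storage (hypothetical)."""
    storage = 50.0
    return k * storage ** exponent

def local_sensitivity(model, params, prior_var, eps=1e-6):
    """First-order local sensitivity shares at one point in parameter space:
    (dy/dp_j)^2 * var_j, normalised so the shares sum to one."""
    base = model(*params)
    contribs = []
    for j, p in enumerate(params):
        bumped = list(params)
        bumped[j] = p + eps
        grad = (model(*bumped) - base) / eps  # forward finite difference
        contribs.append(grad ** 2 * prior_var[j])
    total = sum(contribs)
    return [c / total for c in contribs]

random.seed(0)
prior_var = [0.01, 0.04]  # assumed prior variances for (k, exponent)
# Evaluate local importance at many sampled points, as DELSA does globally.
for _ in range(3):
    point = (random.uniform(0.05, 0.5), random.uniform(1.0, 2.0))
    shares = local_sensitivity(reservoir_output, point, prior_var)
    print([round(s, 3) for s in shares], "at", tuple(round(v, 3) for v in point))
```

    Inspecting how the shares vary from point to point is what reveals, as in the abstract, that a parameter can dominate in one region of parameter space and be unimportant in another.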

  7. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    Science.gov (United States)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  8. Cost Modeling for SOC Modules Testing

    Directory of Open Access Journals (Sweden)

    Balwinder Singh

    2013-08-01

    The complexity of system design is increasing rapidly as the number of transistors on integrated circuits (ICs) doubles as per Moore's law. This creates a big challenge in testing complex VLSI circuits in which a whole system is integrated into a single chip, called a System on Chip (SOC). The cost of testing an SOC also increases with complexity, and cost modeling plays a vital role in reducing test cost and time to market. This paper presents cost modeling for SOC module testing, covering modules that contain both analog and digital blocks. The various test cost parameters and equations are taken from previous work. Mathematical relations are developed for the cost of testing the SOC; the cost modeling equations are then implemented in a graphical user interface (GUI) in MATLAB, which can be used as a cost estimation tool. A case study calculates the cost of SOC testing due to logic built-in self-test (LBIST) and memory built-in self-test (MBIST). VLSI test engineers can take benefit of such cost estimation tools for test planning.
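    As an illustration of the kind of relation such a cost-estimation tool evaluates, the sketch below combines tester-time cost with the silicon cost of BIST area overhead. The equation form and every parameter value are hypothetical and do not reproduce the paper's equations.

```python
# Hypothetical SOC test cost relation: tester-time cost plus the silicon
# cost of the extra die area consumed by LBIST/MBIST logic.

def soc_test_cost(tester_rate, test_time_s, n_chips,
                  bist_area_overhead, cost_per_mm2, die_area_mm2):
    """Total cost = tester time cost + BIST silicon overhead cost."""
    tester_cost = tester_rate * test_time_s * n_chips
    silicon_cost = bist_area_overhead * die_area_mm2 * cost_per_mm2 * n_chips
    return tester_cost + silicon_cost

total = soc_test_cost(tester_rate=0.02,         # $ per tester-second (assumed)
                      test_time_s=1.5,
                      n_chips=100_000,
                      bist_area_overhead=0.03,  # 3% extra area for BIST
                      cost_per_mm2=0.10,
                      die_area_mm2=60.0)
print(f"estimated total test cost: ${total:,.0f}")
```

    A real model would trade the two terms off against each other, since adding BIST logic raises the silicon term while shortening the tester-time term.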

  9. Sensitivity Analysis of the ALMANAC Model's Input Variables

    Institute of Scientific and Technical Information of China (English)

    XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da

    2002-01-01

    Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, and then solar radiation for both maize and sorghum, especially for the dryland condition. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables for maize was larger than for sorghum except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.

  10. A simple method for modeling dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Son, Min-Kyu [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of); Seo, Hyunwoong [Graduate School of Information Science and Electrical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan); Center of Plasma Nano-interface Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan); Lee, Kyoung-Jun; Kim, Soo-Kyoung; Kim, Byung-Man; Park, Songyi; Prabakar, Kandasamy [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of); Kim, Hee-Je, E-mail: heeje@pusan.ac.kr [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of)

    2014-03-03

    Dye-sensitized solar cells (DSCs) are photoelectrochemical photovoltaics based on complicated electrochemical reactions. The modeling and simulation of DSCs are powerful tools for evaluating the performance of DSCs according to a range of factors. Many theoretical methods are used to simulate DSCs. On the other hand, these methods are quite complicated because they are based on a difficult mathematical formula. Therefore, this paper suggests a simple and accurate method for the modeling and simulation of DSCs without complications. The suggested simulation method is based on extracting the coefficient from representative cells and a simple interpolation method. This simulation method was implemented using the power electronic simulation program and C-programming language. The performance of DSCs according to the TiO2 thickness was simulated, and the simulated results were compared with the experimental data to confirm the accuracy of this simulation method. The suggested modeling strategy derived the accurate current–voltage characteristics of the DSCs according to the TiO2 thickness with good agreement between the simulation and the experimental results. - Highlights: • Simple modeling and simulation method for dye-sensitized solar cells (DSCs). • Modeling done using a power electronic simulation program and C-programming language. • The performance of DSC according to the TiO2 thickness was simulated. • Simulation and experimental performance of DSCs were compared. • This method is suitable for accurate simulation of DSCs.
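    The coefficient-extraction-plus-interpolation strategy can be sketched as follows. The anchor measurements are made up: a model coefficient (here a hypothetical short-circuit current density, Jsc) is taken from representative cells at a few TiO2 thicknesses and piecewise-linearly interpolated for intermediate thicknesses.

```python
# Piecewise-linear interpolation of a coefficient extracted from
# representative cells (all numbers hypothetical).

measured = [(5.0, 8.1), (10.0, 11.4), (15.0, 12.0), (20.0, 11.1)]  # (um, mA/cm^2)

def interpolate_jsc(thickness_um, table):
    """Estimate Jsc at an unmeasured TiO2 thickness by linear interpolation,
    clamping outside the measured range."""
    table = sorted(table)
    if thickness_um <= table[0][0]:
        return table[0][1]
    if thickness_um >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= thickness_um <= x1:
            t = (thickness_um - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

print(interpolate_jsc(12.5, measured))
```

    Each extracted coefficient of the cell model would get its own table, so a full current-voltage curve can be reconstructed at any thickness without re-solving the underlying electrochemistry.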

  11. Biglan Model Test Based on Institutional Diversity.

    Science.gov (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies according to three dimensions: adherence to common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  12. Graphical Models and Computerized Adaptive Testing.

    Science.gov (United States)

    Mislevy, Robert J.; Almond, Russell G.

    This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…

  13. Multivariate Model for Test Response Analysis

    NARCIS (Netherlands)

    Krishnan, Shaji; Kerkhoff, Hans G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage attainable.

  14. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have a sim...

  15. Multivariate model for test response analysis

    NARCIS (Netherlands)

    Krishnan, S.; Kerkhoff, H.G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage attainable.

  16. Port Adriano, 2D-Model Tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke; Jensen, Palle Meinert

    This report presents the results of 2D physical model tests (length scale 1:50) carried out in a wave flume at the Department of Civil Engineering, Aalborg University (AAU).

  18. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    Lyu Kehong; Tan Xiaodong; Liu Guanjun; Zhao Chenxu

    2014-01-01

In the helicopter transmission systems, it is important to monitor and track the tooth damage evolution using lots of sensors and detection methods. This paper develops a novel approach for sensor selection based on physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann-Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through Sen's slope estimator. Then, the sensors are selected according to the monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified by the simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective to reduce the test cost and improve the system's reliability.
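
    The trend-screening step described in the abstract can be sketched generically: the Mann-Kendall S statistic ranks candidate CIs by trend strength, and Sen's slope turns the selected CIs into a monotonic health indicator. The snippet below uses textbook formulas and a hypothetical `ci_values` series; it is not the paper's implementation.

    ```python
    # Generic Mann-Kendall S statistic and Sen's slope estimator, as used to
    # rank condition indicators (CIs). Textbook formulas, illustrative data.

    def mann_kendall_s(x):
        """Mann-Kendall S: number of increasing pairs minus decreasing pairs."""
        s = 0
        n = len(x)
        for i in range(n - 1):
            for j in range(i + 1, n):
                diff = x[j] - x[i]
                s += (diff > 0) - (diff < 0)
        return s

    def sens_slope(x):
        """Sen's slope: median of all pairwise slopes (x[j]-x[i])/(j-i)."""
        slopes = sorted((x[j] - x[i]) / (j - i)
                        for i in range(len(x) - 1)
                        for j in range(i + 1, len(x)))
        m = len(slopes)
        mid = m // 2
        return slopes[mid] if m % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

    # Hypothetical CI series that grows with damage level:
    ci_values = [0.10, 0.12, 0.15, 0.21, 0.30, 0.45]
    print(mann_kendall_s(ci_values))  # 15: all 15 pairs are increasing
    print(sens_slope(ci_values))
    ```

    A CI with S close to the maximum n(n-1)/2 is strongly monotonic in damage level, which is the selection criterion the abstract describes.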

  19. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

Full Text Available In the helicopter transmission systems, it is important to monitor and track the tooth damage evolution using lots of sensors and detection methods. This paper develops a novel approach for sensor selection based on physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through Sen's slope estimator. Then, the sensors are selected according to the monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified by the simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective to reduce the test cost and improve the system's reliability.

  20. Quantifying sensitivity to droughts – an experimental modeling approach

    Directory of Open Access Journals (Sweden)

    M. Staudinger

    2014-07-01

    Full Text Available Meteorological droughts like those in summer 2003 or spring 2011 in Europe are expected to become more frequent in the future. Although the spatial extent of these drought events was large, not all regions were affected in the same way. Many catchments reacted strongly to the meteorological droughts showing low levels of streamflow and groundwater, while others hardly reacted. The extent of the hydrological drought for specific catchments was also different between these two historical events due to different initial conditions and drought propagation processes. This leads to the important question of how to detect and quantify the sensitivity of a catchment to meteorological droughts. To assess this question we designed hydrological model experiments using a conceptual rainfall–runoff model. Two drought scenarios were constructed by selecting precipitation and temperature observations based on certain criteria: one scenario was a modest but constant progression of drying based on sorting the years of observations according to annual precipitation amounts. The other scenario was a more extreme progression of drying based on selecting months from different years, forming a year with the wettest months through to a year with the driest months. Both scenarios retained the typical intra-annual seasonality for the region. The sensitivity of 24 Swiss catchments to these scenarios was evaluated by analyzing the simulated discharge time series and modeled storages. Mean catchment elevation, slope and size were found to be the main controls on the sensitivity of catchment discharge to precipitation. Generally, catchments at higher elevation and with steeper slopes seemed to be less sensitive to meteorological droughts than catchments at lower elevations with less steep slopes.

  1. Testing Models for Structure Formation

    CERN Document Server

    Kaiser, N

    1993-01-01

I review a number of tests of theories for structure formation. Large-scale flows and IRAS galaxies indicate a high density parameter $\Omega \simeq 1$, in accord with inflationary predictions, but it is not clear how this meshes with the uniformly low values obtained from virial analysis on scales $\sim$ 1 Mpc. Gravitational distortion of faint galaxies behind clusters allows one to construct maps of the mass surface density, and this should shed some light on the large- vs small-scale $\Omega$ discrepancy. Power spectrum analysis reveals too red a spectrum (compared to standard CDM) on scales $\lambda \sim 10-100$ $h^{-1}$ Mpc, but the Gaussian fluctuation hypothesis appears to be in good shape. These results suggest that the problem for CDM lies not in the very early universe --- the inflationary predictions of $\Omega = 1$ and Gaussianity both seem to be OK; furthermore, the COBE result severely restricts modifications such as tilting the primordial spectrum --- but in the assumed matter content. The power s...

  2. Data sensitivity in a hybrid STEP/Coulomb model for aftershock forecasting

    Science.gov (United States)

    Steacy, S.; Jimenez Lloret, A.; Gerstenberger, M.

    2014-12-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near future earthquake distributions during seismic crises. At present, most of the models in public domain are statistical and use information about past and present seismicity as well as b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip, and we also examine how the choice of receiver plane geometry affects the results. We find that the results are strongly sensitive to the slip models and moderately sensitive to the choice of receiver orientation. We further find that comparison of the stress fields (resulting from the slip models) with the location of events in the learning period provides advance information on whether or not a particular hybrid model will perform better than STEP.

  3. Considerations for parameter optimization and sensitivity in climate models.

    Science.gov (United States)

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models.
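
    The metamodel idea in the abstract (a low-order polynomial fit to expensive model output as a function of a parameter, then cheap optimization of the fit) can be sketched as follows. The "expensive" model here is a hypothetical stand-in, not the GCM; the quadratic fit uses ordinary least squares via the normal equations.

    ```python
    # Sketch of a quadratic metamodel: fit y ~ c0 + c1*p + c2*p^2 to a few
    # expensive model evaluations, then minimise the cheap surrogate.
    # The "expensive" model below is a hypothetical stand-in.

    def fit_quadratic(ps, ys):
        """Least-squares quadratic fit via the 3x3 normal equations X^T X c = X^T y."""
        sums = [sum(p ** k for p in ps) for k in range(5)]
        A = [[sums[i + j] for j in range(3)] for i in range(3)]
        b = [sum(y * p ** i for p, y in zip(ps, ys)) for i in range(3)]
        # Gaussian elimination with partial pivoting.
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, 3):
                f = A[r][col] / A[col][col]
                for c in range(col, 3):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        c = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):
            c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, 3))) / A[r][r]
        return c  # (c0, c1, c2)

    # Hypothetical "expensive" model output sampled at 5 parameter values:
    expensive = lambda p: 2.0 + 0.5 * (p - 3.0) ** 2     # true optimum at p = 3
    ps = [1.0, 2.0, 3.0, 4.0, 5.0]
    c0, c1, c2 = fit_quadratic(ps, [expensive(p) for p in ps])
    p_opt = -c1 / (2.0 * c2)      # vertex of the fitted parabola
    print(round(p_opt, 3))  # 3.0
    ```

    A quadratic metamodel of the field implies a fourth-order metamodel of the squared-error objective, which is the structure the abstract exploits.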

  4. Semantic-Sensitive Web Information Retrieval Model for HTML Documents

    CERN Document Server

    Bassil, Youssef

    2012-01-01

With the advent of the Internet, a new era of digital information exchange has begun. Currently, the Internet encompasses more than five billion online sites and this number is exponentially increasing every day. Fundamentally, Information Retrieval (IR) is the science and practice of storing documents and retrieving information from within these documents. Mathematically, IR systems are at the core based on a feature vector model coupled with a term weighting scheme that weights terms in a document according to their significance with respect to the context in which they appear. Practically, the Vector Space Model (VSM), Term Frequency (TF), and Inverse Document Frequency (IDF) are among the long-established techniques employed in mainstream IR systems. However, present IR models only target generic-type text documents, in that they do not consider specific formats of files such as HTML web documents. This paper proposes a new semantic-sensitive web information retrieval model for HTML documents. It consists of a...

  5. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.

  6. The Couplex test cases: models and lessons

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeat, A. [Lyon-1 Univ., MCS, 69 - Villeurbanne (France); Kern, M. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Schumacher, S.; Talandier, J. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)

    2003-07-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  7. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?: NUDGING AND MODEL SENSITIVITIES

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Wan, Hui [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Zhang, Kai [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA

    2016-07-10

Efficient simulation strategies are crucial for the development and evaluation of high resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrative examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on three factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and cloud, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise, and reasonably reproduces the response of precipitation and cloud forcing to parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.

  8. A Simple in-vitro Test for Assessing the Sensitivity of Lymphocytes to Chlorambucil

    Science.gov (United States)

    Lawler, Sylvia D.; Lele, Kusum P.; Pentycross, C. R.

    1971-01-01

    The sensitivity of lymphocytes to chlorambucil has been assessed by a simple in-vitro test which has been applied to the cells of normal controls and of patients with chronic lymphocytic leukaemia. The degree of sensitivity varied amongst the normal controls and in-vitro resistance of the lymphocytes in the patients was sometimes found in the absence of in-vivo experience of the drug. Resistance in-vitro tended to be associated with very high total peripheral blood lymphocyte counts but not with the age of the patient. Where the information was available the in-vitro sensitivity test agreed with the results of biochemical estimations of drug resistance and with the clinical responses to the drug. It is suggested that this test may have applications in patient management. PMID:5144524

  9. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity.

    Science.gov (United States)

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination.
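
    The reported figures are standard confusion-matrix quantities computed against the MAT reference (cut-off titer ≥ 800). A minimal sketch of the arithmetic; the counts below are illustrative, not the study's raw data.

    ```python
    # Diagnostic sensitivity and specificity against a reference test.
    # The counts are hypothetical, chosen only to illustrate the arithmetic.

    def sensitivity(tp, fn):
        """Fraction of reference-positive cases the test detects."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """Fraction of reference-negative cases the test correctly rejects."""
        return tn / (tn + fp)

    # Hypothetical example: 50 MAT-positive dogs, 49 flagged by the rapid test;
    # 50 MAT-negative dogs, 46 correctly negative.
    tp, fn = 49, 1
    tn, fp = 46, 4
    print(round(sensitivity(tp, fn), 3))  # 0.98
    print(round(specificity(tn, fp), 3))  # 0.92
    ```

    Weighting the group sizes to a population prevalence, as the study does for France, changes specificity and predictive values but not the per-group arithmetic above.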

10. Screening Test for Detection of Leptinotarsa decemlineata (Say) Sensitivity to Insecticides

    Directory of Open Access Journals (Sweden)

    Dušanka Inđić

    2012-01-01

Full Text Available In 2009, the sensitivity of 15 field populations of Colorado potato beetle (Leptinotarsa decemlineata Say. - CPB) was assessed to chlorpyrifos, cypermethrin, thiamethoxam and fipronil, four insecticides which are mostly used for its control in Serbia. A screening test that allows rapid assessment of the sensitivity of overwintered adults to insecticides was performed. Insecticides were applied at label rates, and at two, five and 10 fold higher rates, by the soaking method (5 sec). Mortality was assessed after 72 h. Of the 15 monitored populations of CPB, two were sensitive to the label rate of chlorpyrifos, one was slightly resistant, 11 were resistant and one population was highly resistant. Concerning cypermethrin, two populations were sensitive, two slightly resistant, five were resistant and six highly resistant. Twelve populations were highly sensitive to the thiamethoxam label rate, while three were sensitive. In the case of fipronil applied at label rate, two populations were highly sensitive, six sensitive, one slightly resistant and six were resistant. The application of insecticides at higher rates (2, 5 and 10 fold), which is justified only in bioassays, provided a rapid insight into the sensitivity of field populations of CPB to insecticides.

  11. Initial results of sensitivity tests - Performed on the RE-1000 free-piston Stirling engine

    Science.gov (United States)

    Schreiber, J. G.

    1984-01-01

    Tests have been performed over several years to investigate the dynamics of a free-piston Stirling engine for the purpose of computer code validation. Tests on the 1 kW (1.33 hp) single cylinder engine have involved the determination of the sensitivity of the engine performance to variations in working space pressure, heater and cooler temperatures, regenerator porosity, power piston mass, and displacer dynamics. Maps of engine performance have been recorded with the use of an 81.2 percent porosity regenerator. Both a high-efficiency displacer and a high-power displacer were tested; efficiencies up to 33 percent were recorded, and power output of approximately 1500 W was obtained. Preliminary results of the sensitivity tests are presented, and descriptions of future tests are given.

  12. Evaluating the sensitivity and predictive value of tests of recent infection: toxoplasmosis in pregnancy.

    Science.gov (United States)

    Ades, A E

    1991-12-01

    The diagnosis of maternal infection in early pregnancy depends on tests which are sensitive to recent infection, such as specific IgM. Two types of test are considered: those where the response persists for a period following infection and then declines, such as IgM, and those whose response increases with time since infection, such as IgG-avidity. However, individuals vary in their response to infection, and it may not always be possible to determine whether an infection occurred during pregnancy or before it. Mathematical methods are developed to evaluate the performance of these tests, and are applied to the diagnosis of toxoplasmosis in pregnancy. It is shown that, based on existing information, tests of recent infection are unlikely to be both sensitive and predictive. More data on these tests are required, before they can be reliably used to determine whether infection has occurred during pregnancy or before it.
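
    The abstract's conclusion that tests of recent infection are unlikely to be both sensitive and predictive can be made concrete with Bayes' rule: even a sensitive marker of recent infection has a low positive predictive value when in-pregnancy infection is rare among those tested. The numbers below are illustrative, not from the paper.

    ```python
    # Positive predictive value (PPV) via Bayes' rule. All numbers are
    # illustrative; the paper's analysis is more elaborate than this sketch.

    def ppv(sensitivity, specificity, prevalence):
        """P(recent infection | positive test)."""
        tp = sensitivity * prevalence
        fp = (1.0 - specificity) * (1.0 - prevalence)
        return tp / (tp + fp)

    # Hypothetical IgM-style marker: 95% sensitive and 90% specific for
    # *recent* infection, applied where only 2% of seropositive women
    # acquired the infection during pregnancy:
    print(round(ppv(0.95, 0.90, 0.02), 3))  # 0.162
    ```

    So roughly five of six positives would be pre-pregnancy infections in this scenario, which is the sensitivity-versus-predictiveness tension the paper quantifies.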

  13. FABASOFT BEST PRACTICES AND TEST METRICS MODEL

    Directory of Open Access Journals (Sweden)

    Nadica Hrgarek

    2007-06-01

    Full Text Available Software companies have to face serious problems about how to measure the progress of test activities and quality of software products in order to estimate test completion criteria, and if the shipment milestone will be reached on time. Measurement is a key activity in testing life cycle and requires established, managed and well documented test process, defined software quality attributes, quantitative measures, and using of test management and bug tracking tools. Test metrics are a subset of software metrics (product metrics, process metrics and enable the measurement and quality improvement of test process and/or software product. The goal of this paper is to briefly present Fabasoft best practices and lessons learned during functional and system testing of big complex software products, and to describe a simple test metrics model applied to the software test process with the purpose to better control software projects, measure and increase software quality.

  14. Sensitivity in forward modeled hyperspectral reflectance due to phytoplankton groups

    Science.gov (United States)

    Manzo, Ciro; Bassani, Cristiana; Pinardi, Monica; Giardino, Claudia; Bresciani, Mariano

    2016-04-01

    Phytoplankton is an integral part of the ecosystem, affecting trophic dynamics, nutrient cycling, habitat condition, and fisheries resources. The types of phytoplankton and their concentrations are used to describe the status of water and the processes inside of this. This study investigates bio-optical modeling of phytoplankton functional types (PFT) in terms of pigment composition demonstrating the capability of remote sensing to recognize freshwater phytoplankton. In particular, a sensitivity analysis of simulated hyperspectral water reflectance (with band setting of HICO, APEX, EnMAP, PRISMA and Sentinel-3) of productive eutrophic waters of Mantua lakes (Italy) environment is presented. The bio-optical model adopted for simulating the hyperspectral water reflectance takes into account the reflectance dependency on geometric conditions of light field, on inherent optical properties (backscattering and absorption coefficients) and on concentrations of water quality parameters (WQPs). The model works in the 400-750nm wavelength range, while the model parametrization is based on a comprehensive dataset of WQP concentrations and specific inherent optical properties of the study area, collected in field surveys carried out from May to September of 2011 and 2014. The following phytoplankton groups, with their specific absorption coefficients, a*Φi(λ), were used during the simulation: Chlorophyta, Cyanobacteria with phycocyanin, Cyanobacteria and Cryptophytes with phycoerythrin, Diatoms with carotenoids and mixed phytoplankton. The phytoplankton absorption coefficient aΦ(λ) is modelled by multiplying the weighted sum of the PFTs, Σpia*Φi(λ), with the chlorophyll-a concentration (Chl-a). To highlight the variability of water reflectance due to variation of phytoplankton pigments, the sensitivity analysis was performed by keeping constant the WQPs (i.e., Chl-a=80mg/l, total suspended matter=12.58g/l and yellow substances=0.27m-1). The sensitivity analysis was
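
    The absorption model the abstract describes is a weighted sum: aΦ(λ) = Chl-a · Σᵢ pᵢ a*Φᵢ(λ). A minimal sketch with made-up specific absorption spectra (the real a*Φᵢ(λ) are measured per group and per wavelength):

    ```python
    # Weighted-sum phytoplankton absorption: a_phi(lambda) = Chl-a * sum_i p_i * a*_phi_i(lambda).
    # The spectra and weights below are hypothetical, for illustration only.

    def phytoplankton_absorption(chl_a, weights, specific_absorption):
        """Total absorption per wavelength band from group weights p_i (sum to 1)."""
        assert abs(sum(weights) - 1.0) < 1e-9
        n_bands = len(specific_absorption[0])
        return [chl_a * sum(w * spec[band] for w, spec in zip(weights, specific_absorption))
                for band in range(n_bands)]

    # Two hypothetical groups, three wavelength bands (e.g. ~440, 550, 675 nm):
    a_star = [[0.05, 0.01, 0.02],    # group 1 (e.g. Chlorophyta-like)
              [0.06, 0.02, 0.015]]   # group 2 (e.g. cyanobacteria-like)
    a_phi = phytoplankton_absorption(chl_a=80.0, weights=[0.7, 0.3],
                                     specific_absorption=a_star)
    print([round(v, 3) for v in a_phi])
    ```

    Holding Chl-a and the other WQPs fixed while varying the weights pᵢ is exactly the sensitivity experiment the abstract describes: only the spectral shape of aΦ(λ) changes.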

  15. Age sensitivity of behavioral tests and brain substrates of normal aging in mice.

    Science.gov (United States)

    Kennard, John A; Woodruff-Pak, Diana S

    2011-01-01

    Knowledge of age sensitivity, the capacity of a behavioral test to reliably detect age-related changes, has utility in the design of experiments to elucidate processes of normal aging. We review the application of these tests in studies of normal aging and compare and contrast the age sensitivity of the Barnes maze, eyeblink classical conditioning, fear conditioning, Morris water maze, and rotorod. These tests have all been implemented to assess normal age-related changes in learning and memory in rodents, which generalize in many cases to age-related changes in learning and memory in all mammals, including humans. Behavioral assessments are a valuable means to measure functional outcomes of neuroscientific studies of aging. Highlighted in this review are the attributes and limitations of these measures in mice in the context of age sensitivity and processes of brain aging. Attributes of these tests include reliability and validity as assessments of learning and memory, well-defined neural substrates, and sensitivity to neural and pharmacological manipulations and disruptions. These tests engage the hippocampus and/or the cerebellum, two structures centrally involved in learning and memory that undergo functional and anatomical changes in normal aging. A test that is less well represented in studies of normal aging, the context pre-exposure facilitation effect (CPFE) in fear conditioning, is described as a method to increase sensitivity of contextual fear conditioning to changes in the hippocampus. Recommendations for increasing the age sensitivity of all measures of normal aging in mice are included, as well as a discussion of the potential of the under-studied CPFE to advance understanding of subtle hippocampus-mediated phenomena.

  16. Age Sensitivity of Behavioral Tests and Brain Substrates of Normal Aging in Mice

    Directory of Open Access Journals (Sweden)

    John A. Kennard

    2011-05-01

Full Text Available Knowledge of age sensitivity, the capacity of a behavioral test to reliably detect age-related changes, has utility in the design of experiments to elucidate processes of normal aging. We review the application of these tests in studies of normal aging and compare and contrast the age sensitivity of the Barnes maze, eyeblink classical conditioning, fear conditioning, Morris water maze and rotorod. These tests have all been implemented to assess normal age-related changes in learning and memory in rodents, which generalize in many cases to age-related changes in learning and memory in all mammals, including humans. Behavioral assessments are a valuable means to measure functional outcomes of neuroscientific studies of aging. Highlighted in this review are the attributes and limitations of these measures in mice in the context of age sensitivity and processes of brain aging. Attributes of these tests include reliability and validity as assessments of learning and memory, well-defined neural substrates, and sensitivity to neural and pharmacological manipulations and disruptions. These tests engage the hippocampus and/or the cerebellum, two structures centrally involved in learning and memory that undergo functional and anatomical changes in normal aging. A test that is less well represented in studies of normal aging, the context pre-exposure facilitation effect (CPFE) in fear conditioning, is described as a method to increase sensitivity of contextual fear conditioning to changes in the hippocampus. Recommendations for increasing the age sensitivity of all measures of normal aging in mice are included, as well as a discussion of the potential of the under-studied CPFE to advance understanding of subtle hippocampus-mediated phenomena.

  17. The Wave Dragon: tests on a modified model

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, Luca; Frigaard, Peter

    1999-09-01

A modified floating model of the Wave Dragon was tested for movements, overtopping and forces at critical positions. The modifications and consequent testing of the model are part of an R&D programme. 18 tests (repetitions included) were carried out during May 1999. Forces in 7 different positions and movements for three degrees of freedom (heave, pitch and surge) were recorded for 7 wave situations. Total overtopping was measured for 5 different wave situations. Furthermore, the influence of crest freeboard was tested. Sensitivity to the energy spreading in multidirectional seas was investigated. A typical exponential equation describing overtopping was fitted to the data in the case of frequent wave conditions. The formula is compared to the present tests. (au)

  18. Temperature sensitivity of a numerical pollen forecast model

    Science.gov (United States)

    Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone

    2016-04-01

Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warning before an increase of the atmospheric pollen concentration provides substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means of supporting the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the entry of flowering and during flowering. Phenological models are sensitive to a bias in the input temperature: a mean bias of -1°C can shift the entry date of a phenological phase by about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
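
    The mechanism behind the roughly one-week shift can be illustrated with a simple growing-degree-day (GDD) model: daily temperature excess above a base threshold is accumulated until a forcing requirement is met, so a cold bias delays the crossing. The synthetic warming series, base temperature and requirement below are illustrative, not the phenological model used in the abstract.

    ```python
    # Toy growing-degree-day model: accumulate (T - base)+ until a forcing
    # requirement is reached. Synthetic data; illustrative parameters only.

    def entry_day(daily_temps, base=5.0, requirement=70.0):
        """First day on which accumulated degree-days reach the requirement."""
        gdd = 0.0
        for day, t in enumerate(daily_temps, start=1):
            gdd += max(t - base, 0.0)
            if gdd >= requirement:
                return day
        return None  # requirement never reached in the series

    # Synthetic spring: daily mean temperature rises 0.15 degC/day from 2 degC.
    spring = [2.0 + 0.15 * d for d in range(120)]
    unbiased = entry_day(spring)
    biased = entry_day([t - 1.0 for t in spring])  # same model, -1 degC bias
    print(unbiased, biased, biased - unbiased)
    ```

    With these illustrative numbers the -1 °C run crosses the threshold six days later, the same order of delay as the "about a week" quoted in the abstract.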

  19. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long time analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of containment of nuclear power plant. However, efficiency of the design process should be also taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on sensitivity of finite element model of prestressed concrete containment to changes in geometry, loads and other factors is presented. Importance of steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials as well as density of finite elements mesh is assessed in the main stages of life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases simplified modeling process can lead to significant reduction of work time without degradation of the results.

  20. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...... are the most significant in each case. We apply the Sobol method, which is a quantitative method that gives the percentage of the total output variance that each parameter accounts for. The most important parameter is found to be the energy release rate that explains 92% of the uncertainty in the calculated...... results for the period before thermal penetration (tp) has occurred. The analysis is also done for all combinations of two parameters in order to find the combination with the largest effect. The Sobol total for pairs had the highest value for the combination of energy release rate and area of opening...
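
    The Sobol decomposition referred to above can be sketched with a toy two-parameter model and a Monte Carlo "pick-freeze" estimator of the first-order index. The model and its weights are invented for illustration (for this linear model the analytic first-order index of the dominant input is 9/10), and are not the fire model's actual parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x1, x2):
        # toy stand-in for the fire model: output dominated by the first input (weight 3 vs 1)
        return 3.0 * x1 + 1.0 * x2

    n = 200_000
    x1, x2 = rng.random(n), rng.random(n)
    x2_resampled = rng.random(n)

    y = model(x1, x2)
    y_fixed_x1 = model(x1, x2_resampled)   # x1 kept fixed, x2 resampled ("pick-freeze")

    # First-order Sobol index of x1: Cov(Y, Y') / Var(Y)
    s1 = np.cov(y, y_fixed_x1)[0, 1] / np.var(y, ddof=1)
    print(round(s1, 2))
    ```

    The estimate lands near the analytic value of 0.9, i.e. the dominant input accounts for ~90% of the output variance, which is the kind of statement the paper makes about the energy release rate (92%).
    
    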

  1. A new framework for the interpretation of IgE sensitization tests

    DEFF Research Database (Denmark)

    Roberts, G; Ollert, M; Aalberse, R.;

    2016-01-01

    IgE sensitization tests, such as skin prick testing and serum-specific IgE, have been used to diagnose IgE-mediated clinical allergy for many years. Their prime drawback is that they detect sensitization, which is only loosely related to clinical allergy. Many patients therefore require provocation...... pretest probabilities for diverse settings, regions and allergens. Also, cofactors, such as exercise, may be necessary for exposure to an allergen to result in an allergic reaction in specific IgE-positive patients. The diagnosis of IgE-mediated allergy is now being aided by the introduction of allergen...

  2. Use of genotoxicity information in the development of integrated testing strategies (ITS) for skin sensitization.

    Science.gov (United States)

    Mekenyan, Ovanes; Patlewicz, Grace; Dimitrova, Gergana; Kuseva, Chanita; Todorov, Milen; Stoeva, Stoyanka; Kotov, Stefan; Donner, E Maria

    2010-10-18

    Skin sensitization is an end point of concern for various legislation in the EU, including the seventh Amendment to the Cosmetics Directive and Registration Evaluation, Authorisation and Restriction of Chemicals (REACH). Since animal testing is a last resort for REACH or banned (from 2013 onward) for the Cosmetics Directive, the use of intelligent/integrated testing strategies (ITS) as an efficient means of gathering necessary information from alternative sources (e.g., in vitro, (Q)SARs, etc.) is gaining widespread interest. Previous studies have explored correlations between mutagenicity data and skin sensitization data as a means of exploiting information from surrogate end points. The work here compares the underlying chemical mechanisms for mutagenicity and skin sensitization in an effort to evaluate the role mutagenicity information can play as a predictor of skin sensitization potential. The Tissue Metabolism Simulator (TIMES) hybrid expert system was used to compare chemical mechanisms of both end points since it houses a comprehensive set of established structure-activity relationships for both skin sensitization and mutagenicity. The evaluation demonstrated that there is a great deal of overlap between skin sensitization and mutagenicity structural alerts and their underlying chemical mechanisms. The similarities and differences in chemical mechanisms are discussed in light of available experimental data. A number of new alerts for mutagenicity were also postulated for inclusion into TIMES. The results presented show that mutagenicity information can provide useful insights on skin sensitization potential as part of an ITS and should be considered prior to any in vivo skin sensitization testing being initiated.

  3. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    Science.gov (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with the Jet Propulsion Laboratory (JPL) in the implementation of the magnetic cleanliness program of the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system-level environmental testing. The JUNO magnetic cleanliness program required setting up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system, and a test program and facility at JPL for testing system parts and subsystems. The magnetic modeling, simulation and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interference. With Aerospace's magnetic modeling, simulation and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost-effective approach to a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remanent magnetization and the performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  4. Open source software implementation of an integrated testing strategy for skin sensitization potency based on a Bayesian network.

    Science.gov (United States)

    Pirone, Jason R; Smith, Marjolein; Kleinstreuer, Nicole C; Burns, Thomas A; Strickland, Judy; Dancik, Yuri; Morris, Richard; Rinckel, Lori A; Casey, Warren; Jaworska, Joanna S

    2014-01-01

    An open-source implementation of a previously published integrated testing strategy (ITS) for skin sensitization using a Bayesian network has been developed using R, a free and open-source statistical computing language. The ITS model provides probabilistic predictions of skin sensitization potency based on in silico and in vitro information as well as skin penetration characteristics from a published bioavailability model (Kasting et al., 2008). The structure of the Bayesian network was designed to be consistent with the adverse outcome pathway published by the OECD (Jaworska et al., 2011, 2013). In this paper, the previously published data set (Jaworska et al., 2013) is improved by two data corrections and a modified application of the Kasting model. The new data set implemented in the original commercial software package and the new R version produced consistent results. The data and a fully documented version of the code are publicly available (http://ntp.niehs.nih.gov/go/its).
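
    The probabilistic combination of evidence that such a Bayesian-network ITS performs can be illustrated with a naive-Bayes toy version. The prior and the assay sensitivities/specificities below are hypothetical numbers chosen for illustration; they are not the published network's parameters:

    ```python
    # Assumed prior and assay characteristics (hypothetical, not from the paper)
    prior = {"sensitizer": 0.4, "non": 0.6}
    sens = {"assayA": 0.85, "assayB": 0.80}   # P(positive | sensitizer)
    spec = {"assayA": 0.75, "assayB": 0.90}   # P(negative | non-sensitizer)

    def posterior(results):
        """Naive-Bayes update over independent assay results, e.g. {'assayA': True}."""
        p_s, p_n = prior["sensitizer"], prior["non"]
        for assay, positive in results.items():
            p_s *= sens[assay] if positive else 1 - sens[assay]
            p_n *= (1 - spec[assay]) if positive else spec[assay]
        return p_s / (p_s + p_n)

    print(round(posterior({"assayA": True, "assayB": True}), 3))   # → 0.948
    print(round(posterior({"assayA": False, "assayB": False}), 3))
    ```

    Two concordant positives push the posterior probability of sensitization well above the prior, while two negatives push it far below; the published Bayesian network does the same kind of update but over a richer, non-naive dependency structure that includes bioavailability.
    
    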

  5. An efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Science.gov (United States)

    Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.

    2014-04-01

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
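
    The core idea, replacing one long serial integration with independent short members whose ensemble mean recovers the same diagnostic, can be sketched with an AR(1) process standing in for a fast model diagnostic (e.g. a daily-mean flux). All numbers here are illustrative, not CAM5 output:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate(n_days):
        """Toy stand-in for a fast model diagnostic: AR(1) noise around a fixed mean of 10."""
        x, out = 0.0, []
        for _ in range(n_days):
            x = 0.8 * x + rng.normal()
            out.append(10.0 + x)
        return np.asarray(out)

    long_run = simulate(3600).mean()   # one serial "climate" integration of 3600 days
    # ten independent 360-day members: same total cost, but embarrassingly parallel
    ensemble = np.mean([simulate(360).mean() for _ in range(10)])
    print(round(long_run, 1), round(ensemble, 1))
    ```

    Both estimates converge on the true mean of 10 at comparable accuracy; the ensemble version simply trades serial wall-clock time for parallel members, which is the efficiency argument the paper makes.
    
    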

  6. The Vanishing Tetrad Test: Another Test of Model Misspecification

    Science.gov (United States)

    Roos, J. Micah

    2014-01-01

    The Vanishing Tetrad Test (VTT) (Bollen, Lennox, & Dahly, 2009; Bollen & Ting, 2000; Hipp, Bauer, & Bollen, 2005) is an extension of the Confirmatory Tetrad Analysis (CTA) proposed by Bollen and Ting (Bollen & Ting, 1993). VTT is a powerful tool for detecting model misspecification and can be particularly useful in cases in which…
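
    The tetrads that VTT examines can be written down directly: for four indicators of a single common factor, differences of products of covariances cancel exactly. A small numerical sketch with hypothetical factor loadings:

    ```python
    import numpy as np

    # One-factor model: x_i = lambda_i * xi + e_i, so cov(x_i, x_j) = lambda_i * lambda_j
    # (loadings below are hypothetical; unique variances affect only the diagonal)
    lam = np.array([0.9, 0.8, 0.7, 0.6])
    cov = np.outer(lam, lam)

    # A tetrad is a difference of products of covariances; the one-factor model implies it vanishes
    tetrad = cov[0, 1] * cov[2, 3] - cov[0, 2] * cov[1, 3]
    print(tetrad)
    ```

    In practice the VTT tests whether the sample analogues of such tetrads are statistically distinguishable from zero; a misspecified measurement model produces tetrads that do not vanish.
    
    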

  8. Respiratory panic disorder subtype and sensitivity to the carbon dioxide challenge test

    Directory of Open Access Journals (Sweden)

    Valença A.M.

    2002-01-01

    The aim of the present study was to verify the sensitivity to the carbon dioxide (CO2) challenge test of panic disorder (PD) patients with respiratory and nonrespiratory subtypes of the disorder. Our hypothesis was that the respiratory subtype is more sensitive to 35% CO2. Twenty-seven PD subjects with or without agoraphobia were classified into respiratory and nonrespiratory subtypes on the basis of the presence of respiratory symptoms during their panic attacks. The tests were carried out in a double-blind manner using two mixtures: (1) 35% CO2 and 65% O2, and (2) 100% atmospheric compressed air, 20 min apart. The tests were repeated after 2 weeks, during which the participants in the study did not receive any psychotropic drugs. At least 15 of 16 (93.7%) respiratory PD subtype patients and 5 of 11 (45.4%) nonrespiratory PD patients had a panic attack during one of the two CO2 challenges (P = 0.009, Fisher exact test). Respiratory PD subtype patients were more sensitive to the CO2 challenge test. There was agreement between the severity of PD, measured by the Clinical Global Impression (CGI) Scale, and the subtype of PD. Higher CGI scores in the respiratory PD subtype could reflect a greater sensitivity to the CO2 challenge due to a greater severity of PD. Carbon dioxide challenges in PD may help define PD subtypes and their underlying mechanisms.
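
    Assuming the underlying 2×2 table was 15 of 16 respiratory versus 5 of 11 nonrespiratory patients panicking, the reported Fisher exact P-value can be reproduced from first principles with the hypergeometric distribution (standard-library only):

    ```python
    from math import comb

    # Assumed 2x2 table: rows = respiratory / nonrespiratory subtype, cols = panic yes / no
    a, b, c, d = 15, 1, 5, 6
    n = a + b + c + d

    def hypergeom_p(k):
        """P(k panickers among the respiratory group | all margins fixed)."""
        return comb(a + b, k) * comb(c + d, (a + c) - k) / comb(n, a + c)

    p_obs = hypergeom_p(a)
    lo = max(0, (a + c) - (c + d))
    hi = min(a + b, a + c)
    # Two-sided Fisher exact p: sum over all tables no more likely than the observed one
    p_value = sum(hypergeom_p(k) for k in range(lo, hi + 1) if hypergeom_p(k) <= p_obs)
    print(round(p_value, 3))  # → 0.009, matching the reported value
    ```

    That this reconstruction lands exactly on the reported P = 0.009 supports the assumed 15/16 vs 5/11 split.
    
    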

  9. Sensitivity and specificity of parallel or serial serological testing for detection of canine Leishmania infection

    Directory of Open Access Journals (Sweden)

    Mauro Maciel de Arruda

    2016-01-01

    In Brazil, human and canine visceral leishmaniasis (CVL) caused by Leishmania infantum has undergone urbanisation since 1980, constituting a public health problem, and serological tests are the tools of choice for identifying infected dogs. Until recently, the Brazilian zoonoses control program recommended enzyme-linked immunosorbent assays (ELISA) and indirect immunofluorescence assays (IFA) as the screening and confirmatory methods, respectively, for the detection of canine infection. The purpose of this study was to estimate the accuracy of ELISA and IFA in parallel or serial combinations. The reference standard comprised the results of direct visualisation of parasites in histological sections, immunohistochemical testing, or isolation of the parasite in culture. Samples from 98 cases and 1,327 noncases were included. Individually, the tests presented sensitivities of 91.8% and 90.8%, and specificities of 83.4% and 53.4%, for ELISA and IFA, respectively. When the tests were used in parallel combination, sensitivity reached 99.2%, while specificity dropped to 44.8%. When they were used in serial combination (ELISA followed by IFA), decreased sensitivity (83.3%) and increased specificity (92.5%) were observed. The serial testing approach improved specificity with a moderate loss in sensitivity. This strategy could partially fulfill the needs of public health programs and dog owners for a more accurate diagnosis of CVL.
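
    Under an assumption of conditional independence between the two assays, the reported combined figures follow almost exactly from the individual sensitivities and specificities:

    ```python
    sens_elisa, spec_elisa = 0.918, 0.834
    sens_ifa, spec_ifa = 0.908, 0.534

    # Parallel (positive if EITHER test is positive): sensitivity rises, specificity falls
    par_sens = 1 - (1 - sens_elisa) * (1 - sens_ifa)
    par_spec = spec_elisa * spec_ifa

    # Serial (ELISA screening, IFA confirmation; positive only if BOTH are positive)
    ser_sens = sens_elisa * sens_ifa
    ser_spec = 1 - (1 - spec_elisa) * (1 - spec_ifa)

    print(f"parallel: sens {par_sens:.1%}, spec {par_spec:.1%}")  # ~99.2% / ~44.5%
    print(f"serial:   sens {ser_sens:.1%}, spec {ser_spec:.1%}")  # ~83.4% / ~92.3%
    ```

    The small residual gaps from the reported 44.8% and 92.5% specificities are consistent with rounding of the individual estimates and with the assays not being perfectly independent on the same sera.
    
    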

  10. Factors influencing antibiotic prescribing habits and use of sensitivity testing amongst veterinarians in Europe

    Science.gov (United States)

    De Briyne, N.; Atkinson, J.; Pokludová, L.; Borriello, S. P.; Price, S.

    2013-01-01

    The Heads of Medicines Agencies and the Federation of Veterinarians of Europe undertook a survey to gain better insight into the decision-making process of veterinarians in Europe when deciding which antibiotics to prescribe. The survey was completed by 3004 practitioners from 25 European countries. Analysis was carried out at the level of different types of practitioner (food producing (FP) animals, companion animals, equines) and by country for Belgium, the Czech Republic, France, Germany, Spain, Sweden and the UK. Responses indicate that no single information source is universally considered critical, though training, published literature and experience were the most important. The factors that most strongly influenced prescribing behaviour were sensitivity tests, the practitioner's own experience, the risk of antibiotic resistance developing and ease of administration. Most practitioners usually take responsible-use warnings into account. Antibiotic sensitivity testing is usually performed where a treatment failure has occurred. Significant differences were observed in the frequency of sensitivity testing between types of practitioner and between countries. The responses indicate a need to improve sensitivity tests and services, with the availability of rapid and cheaper testing being key factors. PMID:24068699

  11. A sensitive venous bleeding model in haemophilia A mice

    DEFF Research Database (Denmark)

    Pastoft, Anne Engedahl; Lykkesfeldt, Jens; Ezban, M.

    2012-01-01

    Haemostatic effect of compounds for treating haemophilia can be evaluated in various bleeding models in haemophilic mice. However, the doses of factor VIII (FVIII) for normalizing bleeding used in some of these models are reported to be relatively high. The aim of this study was to establish...... a sensitive venous bleeding model in FVIII knock out (F8-KO) mice, with the ability to detect effect on bleeding at low plasma FVIII concentrations. We studied the effect of two recombinant FVIII products, N8 and Advate(®), after injury to the saphenous vein. We found that F8-KO mice treated with increasing...... doses of either N8 or Advate(®) showed a dose-dependent increase in the number of clot formations and a reduction in both average and maximum bleeding time, as well as in average blood loss. For both compounds, significant effect was found at doses as low as 5 IU kg(-1) when compared with vehicle...

  12. Sensitivity of the STAT-VIEW rapid self-test and implications for use during acute HIV infection.

    Science.gov (United States)

    Boukli, Narjis; Boyd, Anders; Wendremaire, Noémie; Girard, Pierre-Marie; Bottero, Julie; Morand-Joubert, Laurence

    2017-08-23

    HIV testing is an important step towards diminishing incident infections. Rapid self-tests, whose use is becoming more common in France, could help increase access to testing, yet could fail to diagnose HIV during acute HIV infection (AHI). The aim of the present study was to evaluate the HIV-detection sensitivity of a commonly used rapid self-test (STAT-VIEW HIV1/2), compared with another point-of-care rapid test (INSTI), among patients presenting with AHI. Individuals tested at Saint-Antoine Hospital (Paris, France) with negative or indeterminate western blot (WB) results and detectable HIV-RNA were included. Rapid tests were performed retrospectively on stored serum. Patients with and without reactive rapid tests were compared, while the probability of having a reactive test was modelled across infection duration using logistic regression. Of the 40 patients with AHI, 23 (57.5%) had a reactive STAT-VIEW rapid test. Patients with non-reactive versus reactive tests had a significantly shorter median time since infection (p=0.01), shorter time since onset of symptoms (p=0.009), a higher proportion with Fiebig stage III versus IV (p=0.003), negative WB results (p=0.007), higher HIV-RNA levels (p=0.001) and lower CD4+ and CD8+ cell counts (p=0.03) for the rapid self-test when performed on serum samples. Considering that detection sensitivity increased substantially over infection time, individuals should not rely on a negative result to accurately exclude HIV infection within at least 5 weeks of potential HIV exposure. Notwithstanding strong recommendations against rapid test use during AHI, some utility in detecting HIV is observed 5-12 weeks after transmission.

  13. Basin-scale Modeling of Geological Carbon Sequestration: Model Complexity, Injection Scenario and Sensitivity Analysis

    Science.gov (United States)

    Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.

    2013-12-01

    Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km2, and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and

  14. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
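
    The sampling-plus-regression pipeline described, Latin hypercube samples fed through the physical model and then fitted by least squares, can be sketched end to end. The two-parameter linear "model" below is a hypothetical stand-in for the thermodynamic simulation, and the hand-rolled LHS places exactly one sample in each equal-probability stratum per dimension:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def latin_hypercube(n, dims):
        """n samples in [0,1)^dims: one per equal-probability stratum, independently permuted."""
        strata = rng.permuted(np.tile(np.arange(n), (dims, 1)), axis=1).T
        return (strata + rng.random((n, dims))) / n

    X = latin_hypercube(200, 2)
    # Hypothetical stand-in for the thermodynamic model's response, plus small noise
    y = 4.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.01, 200)

    # Least-squares response surface, as in the paper's regression step
    A = np.column_stack([np.ones(200), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.round(coef, 1))
    ```

    The recovered coefficients sit on the true values (4 and -2), and their relative magnitudes are what an elasticity-based sensitivity analysis would then rank.
    
    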

  16. Cost Modeling for SOC Modules Testing

    OpenAIRE

    Balwinder Singh; Arun Khosla; Sukhleen B. Narang

    2013-01-01

    The complexity of system design is increasing rapidly as the number of transistors on Integrated Circuits (ICs) doubles as per Moore's law. Testing such complex VLSI circuits, in which a whole system is integrated into a single chip called a System on Chip (SOC), is a major challenge, and the cost of testing an SOC increases with its complexity. Cost modeling plays a vital role in reducing test cost and time to market. This paper presents cost modeling of SOC module testing...

  17. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  18. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R2 = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber's law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
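
    An ideal-observer analysis of the kind described can be sketched on simulated Poisson population responses. The firing rates, population size, and the assumed effect of ICMS on firing are all invented numbers, not the paper's fitted parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def population_counts(rate, n_neurons=50, trials=2000):
        """Poisson spike counts summed over a small population (assumed response model)."""
        return rng.poisson(rate, size=(trials, n_neurons)).sum(axis=1)

    baseline = population_counts(2.0)     # counts per neuron without stimulation
    stimulated = population_counts(2.6)   # ICMS assumed to add ~0.6 spikes/neuron

    # Ideal observer: classify each trial against the midpoint count threshold
    threshold = (baseline.mean() + stimulated.mean()) / 2
    p_correct = 0.5 * ((stimulated > threshold).mean() + (baseline <= threshold).mean())
    print(round(p_correct, 2))
    ```

    Detection performance here depends only on the separation of the two count distributions relative to their Poisson variability, which is the quantity an ideal-observer framework sweeps across amplitudes, frequencies, and train durations to build a psychometric prediction.
    
    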

  19. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of the two main parameters used in a mathematical model that evaluates the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows that the model captures reasonably well the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
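
    The qualitative finding, growth rate dominating initial density, can be reproduced with a minimal discrete logistic sketch. All parameter values are illustrative only and are not the calibrated values from the study:

    ```python
    def biomass(r, p0, years=20, K=1.0):
        """Discrete logistic growth of riparian plant density toward carrying capacity K."""
        p = p0
        for _ in range(years):
            p += r * p * (1 - p / K)
        return p

    # Varying initial density barely matters after 20 years at a healthy growth rate...
    print(round(biomass(r=0.5, p0=0.01), 2), round(biomass(r=0.5, p0=0.10), 2))
    # ...while halving-order changes in the growth rate change the outcome qualitatively
    print(round(biomass(r=0.1, p0=0.05), 2), round(biomass(r=0.5, p0=0.05), 2))
    ```

    With a high growth rate the population saturates near carrying capacity regardless of where it starts, whereas a low growth rate leaves it far from capacity over the same horizon, mirroring the paper's conclusion about which parameter drives the vegetation development.
    
    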

  20. Consistency and construction in stated WTP for health risk reductions: A novel scope-sensitivity test

    Energy Technology Data Exchange (ETDEWEB)

    Bateman, Ian J. [Centre for Social and Economic Research on the Global Environment (CSERGE), School of Environmental Sciences, University of East Anglia, Norwich NR4 7TJ (United Kingdom); Brouwer, Roy [Institute for Environmental Studies (IVM), Vrije Universiteit, De Boelelaan 1087, 1081 HV Amsterdam (Netherlands)

    2006-08-15

    A contingent valuation study is conducted to estimate willingness to pay (WTP) for reducing skin cancer risks. A split sample design contrasts dichotomous choice (DC) with open-ended (OE) methods for eliciting WTP. A novel scope test varies the remit of risk reductions from just the individual respondent to their entire household allowing us to examine both the statistical significance and scale of scope sensitivity. While OE responses fail such tests, DC responses pass both forms of testing. We conclude that conformity of the size of scope effects with prior expectations should form a focus for future validity testing. (author)

  1. EU-approved rapid tests for bovine spongiform encephalopathy detect atypical forms: a study for their sensitivities.

    Directory of Open Access Journals (Sweden)

    Daniela Meloni

    Since 2004 it has become clear that atypical bovine spongiform encephalopathies (BSEs) exist in cattle. Although their detection has relied on the active surveillance plans implemented in Europe since 2001 using rapid tests, the overall and inter-laboratory performance of these diagnostic systems in detecting the atypical strains has not been studied thoroughly to date. To fill this gap, the present study reports the analytical sensitivity of the EU-approved rapid tests for atypical L-type, atypical H-type and classical BSE in parallel. Each test was challenged with two dilution series, one created from a positive pool of the three BSE forms according to the EURL standard method of homogenate preparation (50% w/v) and the other as per the test kit manufacturer's instructions. Multilevel logistic models and simple logistic models with the rapid test as the only covariate were fitted for each BSE form analyzed as directed by the test manufacturer's dilution protocol. The same schemes, but excluding the BSE type, were then applied to compare test performance under the manufacturer's versus the water protocol. The IDEXX HerdChek® BSE-scrapie short protocol test showed the highest sensitivity for all BSE forms. The IDEXX® HerdChek BSE-scrapie ultra short protocol, the Prionics®-Check WESTERN and the AJ Roboscreen® BetaPrion tests showed similar sensitivities, followed by the Roche® PrionScreen, the Bio-Rad® TeSeE™ SAP and the Prionics®-Check PrioSTRIP in descending order of analytical sensitivity. Despite these differences, the limits of detection of all seven rapid tests against the different classes of material fell within a 2 log10 range of the best-performing test, thus meeting the European Food Safety Authority requirement for BSE surveillance purposes. These findings indicate that not many atypical cases would have been missed by surveillance since 2001, which is important for further epidemiological interpretations of the sporadic character of
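
    The logistic dose-response framing used for the dilution series implies a simple closed form for a 50% limit of detection, which is how tests can be compared on a log10 scale. The coefficients below are hypothetical, not fitted values from the study:

    ```python
    from math import exp

    # Hypothetical logistic dose-response for one rapid test (not fitted to the study's data)
    b0, b1 = -6.0, -3.0   # intercept and slope per log10 dilution

    def p_detect(log10_dilution):
        """Probability the test reads positive at a given log10 dilution."""
        return 1.0 / (1.0 + exp(-(b0 + b1 * log10_dilution)))

    # 50% limit of detection: the dilution where b0 + b1 * x = 0
    lod = -b0 / b1
    print(lod, round(p_detect(lod), 2))  # → -2.0 0.5  (i.e. a 10^-2 dilution)
    ```

    Comparing such LODs between tests is exactly the "within 2 log10 of the best-performing test" criterion cited from the European Food Safety Authority.
    
    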

  2. Sensitivity of Occupant Response Subject to Prescribed Corridors for Impact Testing

    Directory of Open Access Journals (Sweden)

    J.R. Crandall

    1996-01-01

    A methodology for studying the sensitivity of impact responses to prescribed test conditions is presented. Motor vehicle impacts are used to illustrate the principles of this sensitivity technology. Impact conditions are regulated by specifying either a corridor for the acceleration time history or other test parameters such as velocity change, static crush distance, and pulse duration. By combining a time-domain constrained optimization method with a multirigid-body dynamics simulator, the upper and lower bounds of occupant responses subject to the regulated corridors were obtained. It was found that these prescribed corridors may be either so wide as to allow extreme variations in occupant response or so narrow that they are physically unrealizable in the laboratory test environment. A new corridor based on specifications for the test parameters of acceleration, velocity, crush distance, and duration for frontal vehicle impacts is given.

  3. Earth system sensitivity inferred from Pliocene modelling and data

    Science.gov (United States)

    Lunt, D.J.; Haywood, A.M.; Schmidt, G.A.; Salzmann, U.; Valdes, P.J.; Dowsett, H.J.

    2010-01-01

    Quantifying the equilibrium response of global temperatures to an increase in atmospheric carbon dioxide concentrations is one of the cornerstones of climate research. Components of the Earth's climate system that vary over long timescales, such as ice sheets and vegetation, could have an important effect on this temperature sensitivity, but have often been neglected. Here we use a coupled atmosphere-ocean general circulation model to simulate the climate of the mid-Pliocene warm period (about three million years ago), and analyse the forcings and feedbacks that contributed to the relatively warm temperatures. Furthermore, we compare our simulation with proxy records of mid-Pliocene sea surface temperature. Taking these lines of evidence together, we estimate that the response of the Earth system to elevated atmospheric carbon dioxide concentrations is 30-50% greater than the response based on those fast-adjusting components of the climate system that are used traditionally to estimate climate sensitivity. We conclude that targets for the long-term stabilization of atmospheric greenhouse-gas concentrations aimed at preventing a dangerous human interference with the climate system should take into account this higher sensitivity of the Earth system. © 2010 Macmillan Publishers Limited. All rights reserved.

  4. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying those parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model needed enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al.), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
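
The probabilistic chain the abstract describes (parameter distributions → applied load vs. bone strength → fracture probability) can be sketched as a small Monte Carlo simulation. All distributions and numbers below are illustrative placeholders, not values from the BFxRM:

```python
import random

random.seed(42)

def fracture_probability(n_samples=100_000):
    """Toy version of the probabilistic chain: sample an applied load
    and a bone failure load from distributions (placeholder numbers,
    NOT from the BFxRM), form the fracture risk index
    FRI = applied load / bone strength, and count how often FRI > 1."""
    fractures = 0
    for _ in range(n_samples):
        applied_load = random.gauss(2000.0, 400.0)   # N, placeholder
        failure_load = random.gauss(3500.0, 600.0)   # N, placeholder
        if applied_load / failure_load > 1.0:        # FRI > 1 -> fracture
            fractures += 1
    return fractures / n_samples

p_fx = fracture_probability()
print(f"estimated fracture probability: {p_fx:.4f}")
```

A sensitivity study in this setting amounts to perturbing the input distributions one at a time and observing how much the output probability distribution shifts.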

  5. Understanding earth system models: how Global Sensitivity Analysis can help

    Science.gov (United States)

    Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interaction between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts for uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
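
A common GSA workhorse is the variance-based (Sobol) first-order index, which measures how much of the output variance each input explains on its own. The sketch below uses a deliberately simple linear stand-in for a model, chosen because its indices are known exactly, so the estimator can be checked; it is an illustration of the technique, not of any specific earth system model:

```python
import random

random.seed(0)

def model(x1, x2, x3):
    """Stand-in 'model': a linear response whose exact first-order
    Sobol indices are known (S_i = a_i^2 / sum a_j^2 for independent
    U(0,1) inputs), so the estimate below can be verified."""
    return 4.0 * x1 + 2.0 * x2 + 1.0 * x3

def first_order_index(i, ndim=3, n=100_000):
    """Pick-and-freeze estimate of the first-order Sobol index
    S_i = Cov(Y, Y') / Var(Y), where Y and Y' share input i but have
    all other inputs resampled independently."""
    ys, ys_frozen = [], []
    for _ in range(n):
        u = [random.random() for _ in range(ndim)]
        v = [random.random() for _ in range(ndim)]
        v[i] = u[i]                      # freeze input i
        ys.append(model(*u))
        ys_frozen.append(model(*v))
    m = sum(ys) / n
    var = sum((y - m) ** 2 for y in ys) / n
    cov = sum(y * yf for y, yf in zip(ys, ys_frozen)) / n - m * m
    return cov / var

s = [first_order_index(i) for i in range(3)]
print([round(si, 3) for si in s])
```

For real earth system models the same estimator is applied to expensive simulator runs, which is why sampling design and efficiency matter so much in practice.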

  6. Evaluation of the sensitivity of freshwater organisms used in toxicity tests of wastewater from explosives company.

    Science.gov (United States)

    Ribeiro, Elaine Nolasco; da Silva, Flávio Teixeira; de Paiva, Teresa Cristina Brazil

    2012-10-01

    Explosives industries are a source of toxic discharge. The aim of this study was to compare the sensitivity of test organisms (Daphnia similis, Danio rerio, Escherichia coli and Pseudomonas putida) in detecting acute toxicity in wastewater from the production of two explosives, 2,4,6-trinitrotoluene (TNT) and nitrocellulose. The samples were collected from an explosives company in the Paraiba Valley, São Paulo, Brazil. The effluents from TNT and nitrocellulose production were very toxic to the tested organisms. Statistical tests indicated that D. similis and D. rerio were the most sensitive organisms for toxicity detection in effluents from 2,4,6-TNT and nitrocellulose production. The bacterium P. putida was the least sensitive in indicating toxicity in effluents from nitrocellulose.

  7. Moisture sensitivity examination of asphalt mixtures using thermodynamic, direct adhesion peel and compacted mixture mechanical tests

    OpenAIRE

    Zhang, Jizhe; AIREY, Gordon D; Grenfell, James; Apeagyei, Alex K.

    2016-01-01

    Moisture damage in asphalt mixtures is a complicated mode of pavement distress that results in the loss of stiffness and structural strength of the asphalt pavement layers. This paper evaluated the moisture sensitivity of different aggregate–bitumen combinations through three different approaches: surface energy, peel adhesion and the Saturation Ageing Tensile Stiffness (SATS) tests. In addition, the results obtained from these three tests were compared so as to characterise the relationship ...

  8. Models for patients' recruitment in clinical trials and sensitivity analysis.

    Science.gov (United States)

    Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas

    2012-07-20

    Taking a decision on the feasibility and estimating the duration of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who investigate modelling of the enrolment period using Gamma-Poisson processes, which allows the development of statistical tools that can help the manager of the clinical trial answer these questions and thus plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0,t(1)] allow calibration of the parameters of the model, which are then used to make predictions about what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and to suggest possible corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of fit to the data and the sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question to deal with in the setting of our data set; in fact, these dates are not known. For this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
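
The Gamma-Poisson idea (centre-specific recruitment rates drawn from a Gamma distribution, enrolment at each centre a Poisson process) can be illustrated with a small Monte Carlo sketch. The centre count, target, horizon and distribution parameters below are invented for illustration, not taken from the paper:

```python
import math
import random

random.seed(1)

def prob_on_time(n_centres=20, target=450, horizon=12.0,
                 shape=2.0, scale=1.0, n_sims=5000):
    """Monte Carlo estimate of P(recruit >= target within horizon)
    under a Gamma-Poisson model: each centre's monthly rate is
    Gamma(shape, scale), and its total enrolment over the horizon
    is Poisson(rate_i * horizon)."""
    successes = 0
    for _ in range(n_sims):
        total = 0
        for _ in range(n_centres):
            rate_i = random.gammavariate(shape, scale)
            # Poisson sample via Knuth's multiplication method
            limit, k, p = math.exp(-rate_i * horizon), 0, 1.0
            while True:
                p *= random.random()
                if p <= limit:
                    break
                k += 1
            total += k
        if total >= target:
            successes += 1
    return successes / n_sims

p_on_time = prob_on_time()
print(f"P(trial finishes on time) ≈ {p_on_time:.3f}")
```

In the Anisimov-style workflow, the Gamma parameters would instead be calibrated from the enrolment observed on [0,t(1)] before projecting forward from t(1).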

  9. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. Characterizing the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have added knowledge in some form, either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need to add knowledge and to choose certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  10. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  12. Rapid, high sensitivity, point-of-care test for cardiac troponin based on optomagnetic biosensor

    NARCIS (Netherlands)

    Dittmer, W.U.; Evers, T.H.; Hardeman, W.M.; Huijnen-Keur, W.M.; Kamps, R.; De Kievit, P.; Neijzen, J.H.M.; Sijbers, M.J.J.; Nieuwenhuis, J.H.; Hefti, M.H.; Dekkers, D.; Martens, M.

    2010-01-01

    BACKGROUND: We present a handheld integrated device based on a novel magnetic-optical technology for the sensitive detection of cardiac troponin I, a biomarker for the positive diagnosis of myocardial infarction, in a finger-prick blood sample. The test can be performed with a turn-around time of 5 min

  13. Insecticide species sensitivity distributions: importance of test species selection and relevance to aquatic ecosystems

    NARCIS (Netherlands)

    Maltby, L.; Blake, N.; Brock, T.C.M.; Brink, van den P.J.

    2005-01-01

    Single-species acute toxicity data and (micro)mesocosm data were collated for 16 insecticides. These data were used to investigate the importance of test-species selection in constructing species sensitivity distributions (SSDs) and the ability of estimated hazardous concentrations (HCs) to protect

  14. Effect of training intensity on insulin sensitivity evaluated by insulin tolerance test

    NARCIS (Netherlands)

    K. Backx; H.A. Keizer; M.F. Mensink; dr. Lars B. Borghouts

    1999-01-01

    This research article shows that a high-intensity exercise program, compared to a low-intensity exercise program of the same session duration and frequency, increases insulin sensitivity to a larger extent in healthy subjects. It also shows that the short insulin tolerance test can be used to detect

  15. Evaluation of Five Tests for Sensitivity to Functional Deficits following Cervical or Thoracic Dorsal Column Transection in the Rat.

    Directory of Open Access Journals (Sweden)

    Nitish D Fagoe

    Full Text Available The dorsal column lesion model of spinal cord injury targets sensory fibres which originate from the dorsal root ganglia and ascend in the dorsal funiculus. It has the advantages that fibres can be specifically traced from the sciatic nerve, verifiably complete lesions can be performed of the labelled fibres, and it can be used to study sprouting in the central nervous system from the conditioning lesion effect. However, functional deficits from this type of lesion are mild, making assessment of experimental treatment-induced functional recovery difficult. Here, five functional tests were compared for their sensitivity to functional deficits, and hence their suitability to reliably measure recovery of function after dorsal column injury. We assessed the tape removal test, the rope crossing test, CatWalk gait analysis, and the horizontal ladder, and introduce a new test, the inclined rolling ladder. Animals with dorsal column injuries at C4 or T7 level were compared to sham-operated animals for a duration of eight weeks. As well as comparing groups at individual timepoints we also compared the longitudinal data over the whole time course with linear mixed models (LMMs, and for tests where steps are scored as success/error, using generalized LMMs for binomial data. Although, generally, function recovered to sham levels within 2-6 weeks, in most tests we were able to detect significant deficits with whole time-course comparisons. On the horizontal ladder deficits were detected until 5-6 weeks. With the new inclined rolling ladder functional deficits were somewhat more consistent over the testing period and appeared to last for 6-7 weeks. Of the CatWalk parameters base of support was sensitive to cervical and thoracic lesions while hind-paw print-width was affected by cervical lesion only. The inclined rolling ladder test in combination with the horizontal ladder and the CatWalk may prove useful to monitor functional recovery after experimental

  16. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by

  17. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and, most importantly, the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve both as a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  18. A Bayesian model of context-sensitive value attribution.

    Science.gov (United States)

    Rigoli, Francesco; Friston, Karl J; Martinelli, Cristina; Selaković, Mirjana; Shergill, Sukhwinder S; Dolan, Raymond J

    2016-06-22

    Substantial evidence indicates that incentive value depends on an anticipation of rewards within a given context. However, the computations underlying this context sensitivity remain unknown. To address this question, we introduce a normative (Bayesian) account of how rewards map to incentive values. This assumes that the brain inverts a model of how rewards are generated. Key features of our account include (i) an influence of prior beliefs about the context in which rewards are delivered (weighted by their reliability in a Bayes-optimal fashion), (ii) the notion that incentive values correspond to precision-weighted prediction errors, and (iii) contextual information unfolding at different hierarchical levels. This formulation implies that incentive value is intrinsically context-dependent. We provide empirical support for this model by showing that incentive value is influenced by context variability and by hierarchically nested contexts. The perspective we introduce generates new empirical predictions that might help explain psychopathologies such as addiction.
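
The precision-weighted combination of prior context belief and observed reward that such an account assumes can be written down in a few lines. The Gaussian form and the numbers below are an illustrative simplification, not the paper's actual hierarchical formulation:

```python
def posterior(prior_mean, prior_precision, obs, obs_precision):
    """Bayes-optimal fusion of a Gaussian prior belief about a context
    with a Gaussian observation: each source is weighted by its
    precision (inverse variance), so unreliable contexts pull less."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs) / post_precision
    return post_mean, post_precision

# A reward of 10 observed in a context whose prior expectation is 4.
# A reliable context (high prior precision) dampens the update ...
m_reliable, _ = posterior(4.0, 4.0, 10.0, 1.0)
# ... while an unreliable context barely constrains it.
m_unreliable, _ = posterior(4.0, 0.25, 10.0, 1.0)

# The precision-weighted prediction error is the resulting belief shift:
print(m_reliable - 4.0, m_unreliable - 4.0)   # small shift vs large shift
```

The same weighting applied recursively across levels gives the hierarchically nested context effects the abstract refers to.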

  19. Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions

    CERN Document Server

    Bab, Sebastian; 10.4204/EPTCS.83.2

    2012-01-01

    The concept of dynamic coalitions (also virtual organizations) describes the temporary interconnection of autonomous agents who share information or resources in order to achieve a common goal. Through modern technologies these coalitions may form across company, organization and system borders. Therefore, questions of access control and security are of vital significance for the architectures supporting these coalitions. In this paper, we present our first steps towards a formal framework for modeling and verifying the design of privacy-sensitive dynamic coalition infrastructures and their processes. In order to do so we extend existing dynamic-coalition modeling approaches with an access-control concept, which manages access to information through policies. Furthermore we consider the processes underlying these coalitions and present first work on formalizing these processes. As a result of the present paper we illustrate the usefulness of the Abstract State Machine (ASM) method for this task. We demonstrate...

  20. Modelling flow through unsaturated zones: Sensitivity to unsaturated soil properties

    Indian Academy of Sciences (India)

    K S Hari Prasad; M S Mohan Kumar; M Sekhar

    2001-12-01

    A numerical model to simulate moisture flow through unsaturated zones is developed using the finite element method, and is validated by comparing the model results with those available in the literature. The sensitivities of different processes such as gravity drainage and infiltration to variations in the unsaturated soil properties are studied by varying the unsaturated soil parameters over a wide range. The model is also applied to predict moisture contents during a field internal drainage test.

  1. Standardized Tests and Froebel's Original Kindergarten Model

    Science.gov (United States)

    Jeynes, William H.

    2006-01-01

    The author argues that American educators rely on standardized tests at too early an age when administered in kindergarten, particularly given the original intent of kindergarten as envisioned by its founder, Friedrich Froebel. The author examines the current use of standardized tests in kindergarten and the Froebel model, including his emphasis…

  2. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S, part of DONG Energy A/S, Denmark. The objective of the tests was to investigate the combined influence of the pile...

  3. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
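
The general logic of controlling both error probabilities through sample size can be illustrated with the textbook normal-approximation formula. This is a generic sketch of the Type I/Type II trade-off, not the Rasch-specific procedure developed in the paper:

```python
import math
from statistics import NormalDist

def sample_size(effect, sd, alpha=0.05, beta=0.05):
    """Smallest n for a one-sided z-test to detect a mean shift of
    `effect` with Type I error alpha and Type II error beta:
    n = ((z_{1-alpha} + z_{1-beta}) * sd / effect)^2, rounded up.
    (Generic illustration, not the paper's Rasch-model method.)"""
    z = NormalDist()
    n = ((z.inv_cdf(1 - alpha) + z.inv_cdf(1 - beta)) * sd / effect) ** 2
    return math.ceil(n)

# e.g. detect a quarter-standard-deviation shift with both error rates at 5%
print(sample_size(effect=0.25, sd=1.0))
```

Halving the detectable effect quadruples the required number of observations, which is why Type II control is costly for small effects.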

  4. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  6. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    Science.gov (United States)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
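
For the Lorenz model used as the test bed, those "correct sensitivities" have a closed form: they are the entries of the Jacobian of the model equations, evaluated at the current state. A sketch using the standard Lorenz-63 parameters (assumed here; the paper does not specify them in this abstract):

```python
def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def lorenz_jacobian(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Instantaneous sensitivities d(dX_i/dt)/dX_j: the Jacobian of
    the Lorenz equations. It depends on the current state, so the
    sensitivities change from moment to moment along a trajectory."""
    x, y, z = state
    return [[-sigma, sigma, 0.0],
            [rho - z, -1.0, x],
            [y, x, -beta]]

# March the system forward with a crude Euler step, then inspect the
# local sensitivity of dy/dt to x, which flips sign with (rho - z).
state = [1.0, 1.0, 1.0]
dt = 0.01
for _ in range(1000):
    dx = lorenz_rhs(state)
    state = [s + dt * d for s, d in zip(state, dx)]
J = lorenz_jacobian(state)
print(J[1][0])   # equals rho - z at the current state
```

In the paper the sensitivities are instead estimated statistically from a neural-network fit of the dynamics; the analytic Jacobian above is the ground truth such estimates are compared against.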

  7. Identification of precision treatment strategies for relapsed/refractory multiple myeloma by functional drug sensitivity testing.

    Science.gov (United States)

    Majumder, Muntasir Mamun; Silvennoinen, Raija; Anttila, Pekka; Tamborero, David; Eldfors, Samuli; Yadav, Bhagwan; Karjalainen, Riikka; Kuusanmäki, Heikki; Lievonen, Juha; Parsons, Alun; Suvela, Minna; Jantunen, Esa; Porkka, Kimmo; Heckman, Caroline A

    2017-08-22

    Novel agents have increased the survival of multiple myeloma (MM) patients; however, high-risk and relapsed/refractory patients remain challenging to treat and their outcome is poor. To identify novel therapies and aid treatment selection for MM, we assessed the ex vivo sensitivity of 50 MM patient samples to 308 approved and investigational drugs. With the results we (i) classified patients based on their ex vivo drug response profile; (ii) identified and matched potential drug candidates to recurrent cytogenetic alterations; and (iii) correlated ex vivo drug sensitivity to patient outcome. Based on their drug sensitivity profiles, MM patients were stratified into four distinct subgroups with varied survival outcomes. Patients with progressive disease and poor survival clustered in a drug response group exhibiting high sensitivity to signal transduction inhibitors. Del(17p) positive samples were resistant to most drugs tested, with the exception of histone deacetylase and BCL2 inhibitors. Samples positive for t(4;14) were highly sensitive to immunomodulatory drugs, proteasome inhibitors and several targeted drugs. Three patients treated based on the ex vivo results showed good response to the selected treatments. Our results demonstrate that ex vivo drug testing may potentially be applied to optimize treatment selection and achieve therapeutic benefit for relapsed/refractory MM.

  8. Model to Test Electric Field Comparisons in a Composite Fairing Cavity

    Science.gov (United States)

    Trout, Dawn H.; Burford, Janessa

    2013-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.

  9. Short ensembles: an efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Directory of Open Access Journals (Sweden)

    H. Wan

    2014-09-01

    Full Text Available This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of
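
The trade-off the abstract describes, one long serial integration versus many independent short runs, can be demonstrated on a toy system. The AR(1) recursion below is an invented stand-in for a fast-equilibrating model component, not CAM5:

```python
import random

random.seed(3)

def simulate(phi, steps, x0=0.0):
    """Toy 'fast physics': an AR(1) recursion standing in for a model
    whose statistics equilibrate quickly, so short runs sample the
    same climate as one long run."""
    x, total = x0, 0.0
    for _ in range(steps):
        x = phi * x + random.gauss(0.0, 1.0)
        total += x
    return total / steps

# Traditional approach: one long serial-in-time integration.
long_mean = simulate(phi=0.5, steps=200_000)

# Short-ensemble approach: many independent short runs (which could all
# execute in parallel), then average across the ensemble.
ensemble = [simulate(phi=0.5, steps=200) for _ in range(1000)]
ens_mean = sum(ensemble) / len(ensemble)

print(abs(long_mean - ens_mean))   # the two estimates agree closely
```

Both estimates use the same total number of steps, but the ensemble's runs are independent, which is exactly the parallelism the paper exploits to cut turnaround time.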

  10. Short ensembles: an efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Science.gov (United States)

    Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.

    2014-09-01

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high

  11. Modeling cross-hole slug tests in an unconfined aquifer

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; Brauchler, Ralf; Bayer, Peter

    2016-09-01

    A modified version of a published slug test model for unconfined aquifers is applied to cross-hole slug test data collected in field tests conducted at the Widen site in Switzerland. The model accounts for water-table effects using the linearized kinematic condition. The model also accounts for inertial effects in source and observation wells. The primary objective of this work is to demonstrate applicability of this semi-analytical model to multi-well and multi-level pneumatic slug tests. The pneumatic perturbation was applied at discrete intervals in a source well and monitored at discrete vertical intervals in observation wells. The source and observation well pairs were separated by distances of up to 4 m. The analysis yielded vertical profiles of hydraulic conductivity, specific storage, and specific yield at observation well locations. The hydraulic parameter estimates are compared to results from prior pumping and single-well slug tests conducted at the site, as well as to estimates from particle size analyses of sediment collected from boreholes during well installation. The results are in general agreement with results from prior tests and are indicative of a sand and gravel aquifer. Sensitivity analysis shows that model identification of specific yield is strongest at late time. However, the usefulness of late-time data is limited due to the low signal-to-noise ratios.

  12. Improvement of reflood model in RELAP5 code based on sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail: yanhuay@sjtu.edu.cn

    2016-07-15

    Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of nuclear reactors during a loss of coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks for current system code development. RELAP5, as a widely used system code, has the capability to simulate this process but with limited accuracy, especially for low inlet flow rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared to the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are selected as the models most influential on the results of interest. Studies and discussions are then focused on these sensitive models and proper modifications are recommended. These proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data for both cladding temperature and quench time is obtained.

  13. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, based on the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing designs and applies only when both the diagnostic test and the gold standard yield dichotomous results.
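The underlying sample-size calculation can be sketched directly; the abstract cites "the formula already available in the literature", which we assume here is the standard one (often attributed to Buderer) for estimating sensitivity or specificity to a given absolute precision:

```python
import math

# Sketch of the sample-size formula the nomogram is assumed to be built on:
# n = z^2 * p * (1 - p) / (d^2 * prevalence-term), where p is the anticipated
# sensitivity (or specificity), d the absolute precision, and z the normal
# quantile for the confidence level (1.96 -> 95%).

def n_for_sensitivity(sens, precision, prevalence, z=1.96):
    """Subjects needed to estimate sensitivity to +/- `precision`."""
    return math.ceil(z**2 * sens * (1 - sens) / (precision**2 * prevalence))

def n_for_specificity(spec, precision, prevalence, z=1.96):
    """Subjects needed to estimate specificity to +/- `precision`."""
    return math.ceil(z**2 * spec * (1 - spec) / (precision**2 * (1 - prevalence)))

# e.g. anticipated sensitivity 0.90, +/-5% precision, 20% disease prevalence:
print(n_for_sensitivity(0.90, 0.05, 0.20))

# The 0.70 / 1.75 multipliers quoted for 90% and 99% confidence follow
# (approximately) from ratios of squared z-values:
print(round((1.645 / 1.96) ** 2, 2), round((2.576 / 1.96) ** 2, 2))
```

Since z enters the formula squared, switching confidence levels rescales n by (z_new/z_95)^2, which is where the ruler multipliers come from.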

  14. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-06-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict future change because their computational efficiency allows for continuous grounding line tracking, high horizontal resolution, and a physically-based calving criterion, which are all essential to realistic modeling of tidewater glaciers. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e. steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal overdeepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ∼ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by sensitivity tests that take parameter uncertainty

  15. Effect of incubation temperature on the diagnostic sensitivity of the glanders complement fixation test.

    Science.gov (United States)

    Khan, I; Wieler, L H; Saqib, M; Melzer, F; Santana, V L D A; Neubauer, H; Elschner, M C

    2014-12-01

    The complement fixation test (CFT) is the only serological test prescribed by the World Organisation for Animal Health (OIE) for the diagnosis of glanders in international trading of equids. However, false-positive reactions have caused financial losses to the animal owners in the past, and false-negative tests have resulted in the introduction of glanders into healthy equine populations in previously glanders-free areas. Both warm (incubation at 37°C for 1 h) and cold (overnight incubation at 4°C) procedures are recommended by the OIE for serodiagnosis of glanders. In a comparison of the sensitivity and specificity of the two techniques, using the United States Department of Agriculture antigen, warm CFT was found to be significantly less sensitive (56.8%; p glanders but a lower diagnostic specificity has to be accepted. The immunoblot was used as the gold standard.

  16. ESTIMATION OF HIGHLY SENSITIVE TROPONIN TESTS IN THE DIAGNOSIS OF ACUTE CORONARY SYNDROME

    Directory of Open Access Journals (Sweden)

    L. V. Kremneva

    2016-01-01

    Full Text Available This review is devoted to the value of highly sensitive troponin (hs-cTn) tests in the diagnosis of acute coronary syndrome. The classification of troponin tests depending on their sensitivity is presented. The possible reasons for the appearance of troponins in the blood of healthy people are shown. The authors consider a 3-hour algorithm for myocardial infarction (MI) diagnosis, recommended by the expert group in 2012. Results of studies from 2011-2015 are presented as the basis for the development of a one-hour MI diagnostic algorithm, recommended by the European Society of Cardiology in 2015. The authors discuss the results of studies showing that modern hs-cTn tests (together with ECG assessment) are capable of diagnosing MI in its early stages. They significantly increase the number of identified MIs, especially MIs without ST segment elevation, and also identify the group of patients with a subsequently favorable prognosis.

  17. The sensitizing potential of metalworking fluid biocides (phenolic and thiazole compounds) in the guinea-pig maximization test in relation to patch-test reactivity in eczema patients

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, K.E.; Hamann, K.

    1984-08-01

    The sensitizing potential of seven industrial antimicrobial agents was evaluated using the guinea-pig maximization test. Preventol O extra (o-phenylphenol) did not produce a sensitization reaction. Preventol ON extra (sodium salt of o-phenylphenol), Preventol GD (dichlorophene) and Proxel XL and HL containing 1,2-benzisothiazolin-3-one were weak sensitizers, while Preventol CMK and Preventol L, both containing chlorocresol, were classified as extreme potential sensitizers. Both the weak and the extreme experimental sensitizers are occasional human sensitizers. The interpretation of the test results is discussed.

  18. The sensitizing potential of metalworking fluid biocides (phenolic and thiazole compounds) in the guinea-pig maximization test in relation to patch-test reactivity in eczema patients

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner; Hamann, K

    1984-01-01

    The sensitizing potential of seven industrial antimicrobial agents was evaluated using the guinea-pig maximization test. Preventol O extra (o-phenylphenol) did not produce a sensitization reaction. Preventol ON extra (sodium salt of o-phenylphenol), Preventol GD (dichlorophene) and Proxel XL and HL containing 1,2-benzisothiazolin-3-one were weak sensitizers, while Preventol CMK and Preventol L, both containing chlorocresol, were classified as extreme potential sensitizers. Both the weak and the extreme experimental sensitizers are occasional human sensitizers. The interpretation of the test results...

  19. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction in forging is still limited. The theoretical models currently applied for process analysis are not satisfactory compared to the advanced and detailed studies now possible with plastic FEM analyses, and more refined models have to be based on experimental testing...

  20. Testing inequality constrained hypotheses in SEM Models

    NARCIS (Netherlands)

    Van de Schoot, R.; Hoijtink, H.J.A.; Dekovic, M.

    2010-01-01

    Researchers often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model. It is currently not possible to test these so-called informative hypotheses in structural equation modeling software. We offer a solution to this problem.

  1. Modeling Answer Changes on Test Items

    Science.gov (United States)

    van der Linden, Wim J.; Jeon, Minjeong

    2012-01-01

    The probability of test takers changing answers upon review of their initial choices is modeled. The primary purpose of the model is to check erasures on answer sheets recorded by an optical scanner for numbers and patterns that may be indicative of irregular behavior, such as teachers or school administrators changing answer sheets after their…

  2. Modeling Nonignorable Missing Data in Speeded Tests

    Science.gov (United States)

    Glas, Cees A. W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models: one for the observed response data and…

  3. Improved testing inference in mixed linear models

    CERN Document Server

    Melo, Tatiane F N; Cribari-Neto, Francisco; 10.1016/j.csda.2008.12.007

    2011-01-01

    Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Oftentimes, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test and also to a test obtained from a modified profile likelihood function. Our results generalize those in Zucker et al. (Journal of the Royal Statistical Society B, 2000, 62, 827-838) by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for random effects nonlinear covariance matrix structure. We report numerical evidence which shows that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presented.
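The mechanics of applying a Bartlett correction can be sketched as follows; the adjustment term is model-specific and must be derived analytically (as done in the paper), so the value used here is purely illustrative, and the chi-square(1) p-value is computed via its closed form:

```python
import math

# Sketch of a Bartlett correction: the likelihood-ratio statistic T is
# divided by (1 + b/q) so its null mean is closer to q, the number of
# restrictions being tested. The adjustment b is model-dependent;
# b = 0.15 below is invented purely for illustration.

def bartlett_corrected(lr_stat, q, b):
    return lr_stat / (1 + b / q)

def chi2_df1_pvalue(x):
    """Upper-tail p-value of a chi-square variable with 1 df."""
    return math.erfc(math.sqrt(x / 2))

t = 4.10                                    # raw LR statistic, q = 1
t_star = bartlett_corrected(t, q=1, b=0.15)
# The correction shrinks the statistic, making the small-sample test
# less liberal:
print(round(chi2_df1_pvalue(t), 4), round(chi2_df1_pvalue(t_star), 4))
```

In small samples the uncorrected statistic tends to exceed its asymptotic chi-square reference, so the corrected p-value is larger, which is the finite-sample improvement the paper quantifies.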

  4. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    Energy Technology Data Exchange (ETDEWEB)

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  5. Testing Geyser Models using Down-vent Data

    Science.gov (United States)

    Wang, C.; Munoz, C.; Ingebritsen, S.; King, E.

    2013-12-01

    Geysers are often studied as an analogue to magmatic volcanoes because both involve the transfer of mass and energy that leads to eruption. Several conceptual models have been proposed to explain geyser eruption, but no definitive test has been performed largely due to scarcity of down-vent data. In this study we compare simulated time histories of pressure and temperature against published data for the Old Faithful geyser in the Yellowstone National Park and new down-vent measurements from geysers in the El Tatio geyser field of northern Chile. We test two major types of geyser models by comparing simulated and field results. In the chamber model, the geyser system is approximated as a fissure-like conduit connected to a subsurface chamber of water and steam. Heat supplied to the chamber causes water to boil and drives geyser eruptions. Here the Navier-Stokes equation is used to simulate the flow of water and steam. In the fracture-zone model, the geyser system is approximated as a saturated fracture zone of high permeability and compressibility, surrounded by rock matrix of relatively low permeability and compressibility. Heat supply from below causes pore water to boil and drives geyser eruption. Here a two-phase form of Darcy's law is assumed to describe the flow of water and steam (Ingebritsen and Rojstaczer, 1993). Both models can produce P-T time histories qualitatively similar to field results, but the simulations are sensitive to assumed parameters. Results from the chamber model are sensitive to the heat supplied to the system and to the width of the conduit, while results from the fracture-zone model are most sensitive to the permeability of the fracture zone and the adjacent wall rocks. Detailed comparison between field and simulated results, such as the phase lag between changes of pressure and temperature, may help to resolve which model might be more realistic.

  6. Test characteristics from latent-class models of the California Mastitis Test.

    Science.gov (United States)

    Sanford, C J; Keefe, G P; Sanchez, J; Dingwell, R T; Barkema, H W; Leslie, K E; Dohoo, I R

    2006-11-17

    We evaluated (using latent-class models) the ability of the California Mastitis Test (CMT) to identify cows with intramammary infections on the day of dry-off. The positive and negative predictive values of this test to identify cows requiring dry-cow antibiotics (i.e. infected) was also assessed. We used 752 Holstein-Friesian cows from 11 herds for this investigation. Milk samples were collected for bacteriology, and the CMT was performed cow-side, prior to milking on the day of dry-off. At the cow-level, the sensitivity and specificity of the CMT (using the four quarter results interpreted in parallel) for identifying all pathogens were estimated at 70 and 48%, respectively. If only major pathogens were considered the sensitivity of the CMT increased to 86%. The negative predictive value of the CMT was >95% for herds with major-pathogen intramammary-infection prevalence CMT.
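How the predictive values reported for the CMT depend on herd prevalence follows from Bayes' rule; a minimal sketch using the cow-level estimates quoted above (Se = 0.70, Sp = 0.48 for all pathogens; Se = 0.86 for major pathogens), with a prevalence value that is illustrative rather than taken from the study:

```python
# Predictive values from sensitivity, specificity, and prevalence
# via Bayes' rule. Se/Sp are the cow-level CMT estimates from the
# abstract; the 5% prevalence is an illustrative value.

def predictive_values(se, sp, prev):
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

# Major pathogens (Se = 0.86) in a herd with 5% infection prevalence:
ppv, npv = predictive_values(se=0.86, sp=0.48, prev=0.05)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # NPV exceeds 0.95 at low prevalence
```

This illustrates why a negative CMT is most informative in low-prevalence herds: NPV rises toward 1 as prevalence falls, while PPV drops sharply.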

  7. Modeling and sensitivity analysis of transport and deposition of radionuclides from the Fukushima Daiichi accident

    Directory of Open Access Journals (Sweden)

    X. Hu

    2014-01-01

    Full Text Available The atmospheric transport and ground deposition of radioactive isotopes 131I and 137Cs during and after the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident (March 2011) are investigated using the Weather Research and Forecasting/Chemistry (WRF/Chem) model. The aim is to assess the skill of WRF in simulating these processes and the sensitivity of the model's performance to various parameterizations of unresolved physics. The WRF/Chem model is first upgraded by implementing a radioactive decay term into the advection-diffusion solver and adding three parameterizations for dry deposition and two parameterizations for wet deposition. Different microphysics and horizontal turbulent diffusion schemes are then tested for their ability to reproduce observed meteorological conditions. Subsequently, the influence on the simulated transport and deposition of the characteristics of the emission source, including the emission rate, the gas partitioning of 131I and the size distribution of 137Cs, is examined. The results show that the model can predict the wind fields and rainfall realistically. The ground deposition of the radionuclides can also potentially be captured well but it is very sensitive to the emission characterization. It is found that the total deposition is most influenced by the emission rate for both 131I and 137Cs; while it is less sensitive to the dry deposition parameterizations. Moreover, for 131I, the deposition is also sensitive to the microphysics schemes, the horizontal diffusion schemes, gas partitioning and wet deposition parameterizations; while for 137Cs, the deposition is very sensitive to the microphysics schemes and wet deposition parameterizations, and it is also sensitive to the horizontal diffusion schemes and the size distribution.
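In operator-split form, the radioactive decay term added to the advection-diffusion solver reduces to an exponential damping of each tracer concentration every time step; a minimal sketch (the half-lives are physical constants, the time step is illustrative):

```python
import math

# Operator-split radioactive decay: after advection and diffusion,
# each tracer concentration is multiplied by exp(-lambda * dt).
# Half-lives: I-131 ~8.02 days, Cs-137 ~30.17 years.

HALF_LIFE_S = {
    "I-131": 8.02 * 86400.0,
    "Cs-137": 30.17 * 365.25 * 86400.0,
}

def decay_step(conc, species, dt):
    """Concentration after one decay step of length dt seconds."""
    lam = math.log(2) / HALF_LIFE_S[species]
    return conc * math.exp(-lam * dt)

# Over one simulated day (exact regardless of how dt is subdivided,
# since the exponential factors multiply):
print(decay_step(1.0, "I-131", 86400))   # noticeable daily loss (~8%)
print(decay_step(1.0, "Cs-137", 86400))  # negligible over a day
```

This is why decay matters for the simulated 131I deposition over the weeks after the accident, while 137Cs behaves as an effectively conserved tracer on that timescale.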

  8. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    Directory of Open Access Journals (Sweden)

    Martin T Schultz

    Full Text Available The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration
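The multi-stage structure of such a detection model can be sketched under simple Poisson assumptions; the stages mirror those listed in the abstract, but the functional form and every parameter value below are illustrative, not the study's calibrated CAWS model:

```python
import math

# Hedged sketch of a multi-stage eDNA detection model: marker copies
# captured on the filter are Poisson with mean (concentration x volume
# x recovery efficiency); each PCR aliquot receives a fraction of the
# purified elution; the survey detects if any replicate gets >= 1 copy.
# All parameter values are illustrative.

def p_detect(conc_per_L, volume_L, recovery, aliquot_frac, n_pcr):
    lam = conc_per_L * volume_L * recovery      # mean copies in the elution
    # Under Poisson thinning, disjoint aliquots receive independent
    # Poisson(lam * aliquot_frac) counts, so:
    p_zero_one_aliquot = math.exp(-lam * aliquot_frac)
    return 1.0 - p_zero_one_aliquot ** n_pcr

# Low marker concentration -> high false-negative rate:
print(round(p_detect(conc_per_L=1.0, volume_L=2.0, recovery=0.5,
                     aliquot_frac=0.05, n_pcr=8), 3))

# Doubling sample volume helps more than doubling PCR replicates here,
# mirroring the abstract's finding that volume matters most:
print(round(p_detect(1.0, 4.0, 0.5, 0.05, 8), 3))
print(round(p_detect(1.0, 2.0, 0.5, 0.05, 16), 3))
```

Because the aliquot fraction divides the captured copies before PCR, sensitivity at low concentrations is dominated by how much DNA ever reaches a reaction tube, which is the mechanism the abstract identifies.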

  9. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    Science.gov (United States)

    Schultz, Martin T; Lance, Richard F

    2015-01-01

    The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration

  10. Reliability and sensitivity to change of the timed standing balance test in children with down syndrome

    Directory of Open Access Journals (Sweden)

    Vencita Priyanka Aranha

    2016-01-01

    Full Text Available Objective: To estimate the reliability and sensitivity to change of the timed standing balance test in children with Down syndrome (DS). Methods: It was a nonblinded, comparison study with a convenience sample of subjects consisting of children with DS (n = 9) aged 8–17 years. The main outcome measure was standing balance, which was assessed using the timed standing balance test: the time for which balance could be maintained in four conditions, eyes open static, eyes closed static, eyes open dynamic, and eyes closed dynamic. Results: Relative reliability was excellent for all four conditions, with an intraclass correlation coefficient (ICC) ranging from 0.91 to 0.93. The variation between repeated measurements for each condition was minimal, with a standard error of measurement (SEM) of 0.21–0.59 s, suggestive of excellent absolute reliability. The sensitivity to change as measured by the smallest real change (SRC) was 1.27 s for eyes open static, 1.63 s for eyes closed static, 0.58 s for eyes open dynamic, and 0.61 s for eyes closed dynamic. Conclusions: The timed standing balance test is easy to administer and sensitive to change, with strong absolute and relative reliabilities, an important first step in establishing its utility as a clinical balance measure in children with DS.
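The reliability indices quoted above are linked by standard formulas (SEM from the ICC, SRC from the SEM); a minimal sketch, with an SD value that is illustrative rather than taken from the study's raw data:

```python
import math

# Standard relationships between the reliability indices in the abstract:
# SEM = SD * sqrt(1 - ICC); SRC = 1.96 * sqrt(2) * SEM at 95% confidence.
# The SD below is illustrative; the ICC is within the reported 0.91-0.93.

def sem(sd, icc):
    """Standard error of measurement."""
    return sd * math.sqrt(1 - icc)

def smallest_real_change(sem_value, z=1.96):
    """SRC (a.k.a. minimal detectable change) at 95% confidence."""
    return z * math.sqrt(2) * sem_value

s = sem(sd=2.0, icc=0.92)
print(round(s, 2))                        # falls in the reported SEM range
print(round(smallest_real_change(s), 2))  # change needed to exceed noise
```

A measured change smaller than the SRC cannot be distinguished from measurement error, which is why the abstract reports SRC per condition alongside the ICC.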

  11. Computerized motion sensitivity screening tests in a multicountry rural onchocercal community survey in Africa

    Directory of Open Access Journals (Sweden)

    Babalola O

    2010-01-01

    Full Text Available Purpose: To determine whether the Wu-Jones Motion Sensitivity Screening Test (MSST) accurately reflects the burden of optic nerve disease in several onchoendemic communities in Africa. Materials and Methods: The MSST was used to evaluate subjects in the communities of Raja in Sudan, Bushenyi in Uganda, Morogoro in Tanzania, and Ikon, Olomboro, and Gembu in Nigeria. Motion sensitivity was expressed as a percentage of motion detected in the individual eye, and this was averaged for the community. A perfectly normal eye would detect all motion and score 100%. Results: In this study, 3858 eyes of 2072 subjects were tested. The test was completed in 76% of respondents. Acceptability was high. Average test time was 120.4 s. The overall mean motion sensitivity of all eyes tested was 88.49%, ±17.49. Using a cutoff level of 50%, 6.4% of all subjects tested were subnormal. The highest proportion of subnormals recorded was in Morogoro at 12.7%. Severe defects in a community best correlated with optic nerve disease prevalence, while the proportion of the defect from a higher cutoff level best correlated with overall ocular morbidity. A repeat examination in the next 5 years following ivermectin treatment will show the influence, if any, on community-wide MSST performance. Conclusion: A wide range in community scores reflected disease diversity. The MSST appears to be a useful test in community-wide screening and diagnosis as it reflects the general level of ocular pathology and specifically, optic nerve disease.

  12. Graded CTL Model Checking for Test Generation

    CERN Document Server

    Napoli, Margherita

    2011-01-01

    Recently there has been great attention from the scientific community towards the use of the model-checking technique as a tool for test generation in the simulation field. This paper aims to provide a useful means to gain more insights along these lines. By applying recent results in the field of graded temporal logics, we present a new efficient model-checking algorithm for Hierarchical Finite State Machines (HSM), a well-established formalism long and widely used for representing hierarchical models of discrete systems. Performing model-checking against specifications expressed using graded temporal logics has the peculiarity of returning multiple counterexamples within a single run. We think that this can greatly improve the efficacy of automatically obtaining test cases. In particular, we verify two different models of HSM against branching-time temporal properties.

  13. Sensitivity and validity of psychometric tests for assessing driving impairment: effects of sleep deprivation.

    Directory of Open Access Journals (Sweden)

    Stefan Jongen

    Full Text Available To assess drug-induced driving impairment, initial screening is needed. However, no consensus has been reached about which initial screening tools should be used. The present study aims to determine the ability of a battery of psychometric tests to detect performance-impairing effects of clinically relevant levels of drowsiness as induced by one night of sleep deprivation. Twenty-four healthy volunteers participated in a 2-period crossover study in which the highway driving test was conducted twice: once after normal sleep and once after one night of sleep deprivation. The psychometric tests were conducted on 4 occasions: once after normal sleep (at 11 am) and three times during a single night of sleep deprivation (at 1 am, 5 am, and 11 am). On-the-road driving performance was significantly impaired after sleep deprivation, as measured by an increase in Standard Deviation of Lateral Position (SDLP) of 3.1 cm compared to performance after a normal night of sleep. At 5 am, performance in most psychometric tests showed significant impairment. As expected, the largest effect sizes were found on performance in the Psychomotor Vigilance Test (PVT). Large effect sizes were also found in the Divided Attention Test (DAT), the Attention Network Test (ANT), and the test for Useful Field of View (UFOV) at 5 and 11 am during sleep deprivation. Effects of sleep deprivation on SDLP correlated significantly with performance changes in the PVT and the DAT, but not with performance changes in the UFOV. Of the psychometric tests used in this study, the PVT and DAT seem most promising for initial evaluation of drug impairment, based on their sensitivity and correlations with driving impairment. Further studies are needed to assess the sensitivity and validity of these psychometric tests after benchmark sedative drug use.

  14. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  15. Parameter sensitivity in satellite-gravity-constrained geothermal modelling

    Science.gov (United States)

    Pastorutti, Alberto; Braitenberg, Carla

    2017-04-01

    The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density, and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in ground water and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of these have the potential to distort gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect, relative to other parameter uncertainties, on the modelled temperature-depth variation, surface heat flow, and thermal lithosphere thickness. We conclude that global gravity is useful for geothermal studies.
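
    The trade-offs discussed above can be illustrated with the classic steady-state 1-D conductive geotherm, T(z) = T_surf + (q_surf/k)·z − (A/2k)·z², where q_surf is surface heat flow, k thermal conductivity and A radiogenic heat production. A hedged sketch with purely illustrative parameter values:

```python
def temperature(z, t_surf, q_surf, k, a):
    """Steady-state 1-D conductive geotherm with uniform heat production
    a (W/m^3), conductivity k (W/m/K), surface heat flow q_surf (W/m^2)
    and depth z (m): T(z) = T_surf + (q_surf/k)*z - (a/(2*k))*z**2."""
    return t_surf + (q_surf / k) * z - (a / (2.0 * k)) * z ** 2

# Illustrative values: 60 mW/m^2 surface heat flow, k = 3 W/m/K,
# crustal heat production 1 uW/m^3, Moho at 35 km depth.
t_moho = temperature(35_000.0, 10.0, 0.060, 3.0, 1.0e-6)

# Parameter sensitivity: the same geotherm with 10% lower conductivity.
t_moho_low_k = temperature(35_000.0, 10.0, 0.060, 2.7, 1.0e-6)
```

    Lower conductivity steepens the geotherm at fixed surface heat flow, which is the kind of coupled parameter effect the sensitivity analysis quantifies.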

  16. Dense Molecular Gas: A Sensitive Probe of Stellar Feedback Models

    CERN Document Server

    Hopkins, Philip F; Murray, Norman; Quataert, Eliot

    2012-01-01

    We show that the mass fraction of GMC gas (n>100 cm^-3) in dense (n>>10^4 cm^-3) star-forming clumps, observable in dense molecular tracers (L_HCN/L_CO(1-0)), is a sensitive probe of the strength and mechanism(s) of stellar feedback. Using high-resolution galaxy-scale simulations with pc-scale resolution and explicit models for feedback from radiation pressure, photoionization heating, stellar winds, and supernovae (SNe), we make predictions for the dense molecular gas tracers as a function of GMC and galaxy properties and the efficiency of stellar feedback. In models with weak/no feedback, much of the mass in GMCs collapses into dense sub-units, predicting L_HCN/L_CO(1-0) ratios order-of-magnitude larger than observed. By contrast, models with feedback properties taken directly from stellar evolution calculations predict dense gas tracers in good agreement with observations. Changing the strength or timing of SNe tends to move systems along, rather than off, the L_HCN-L_CO relation (because SNe heat lower-de...

  17. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    One of the single most important reasons that modeling and modelbased testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through suffi

  19. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet, and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing and implementing abstractions will improve the applicability of model checking in practice.

  20. Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited

    Science.gov (United States)

    Ghergut, Julia; Behrens, Horst; Sauter, Martin

    2017-04-01

    Single-well injection-withdrawal (SWIW) or 'push-then-pull' tracer methods appear attractive for a number of reasons: less uncertainty on design and dimensioning, and lower tracer quantities required than for inter-well tests; stronger tracer signals, enabling easier and cheaper metering, shorter metering durations, and higher tracer mass recovery than in inter-well tests; and, last but not least, no need for a second well. However, SWIW tracer signal inversion faces a major issue: the 'push-then-pull' design weakens the correlation between tracer residence times and georeservoir transport parameters, inducing insensitivity or ambiguity of tracer signal inversion with respect to some of those georeservoir parameters that are supposed to be the target of tracer tests par excellence: pore velocity, transport-effective porosity, fracture or fissure aperture and spacing or density (where applicable), and fluid/solid or fluid/fluid phase interface density. Hydraulic methods cannot measure the transport-effective values of such parameters, because pressure signals correlate neither with fluid motion, nor with material fluxes through (fluid-rock, or fluid-fluid) phase interfaces. The notorious ambiguity impeding parameter inversion from SWIW test signals has nourished several 'modeling attitudes': (i) regard dispersion as the key process encompassing whatever superposition of underlying transport phenomena, and seek a statistical description of flow-path collectives enabling dispersion to be characterized independently of any other transport parameter, as proposed by Gouze et al. (2008), with Hansen et al. (2016) offering a comprehensive analysis of the various ways dispersion model assumptions interfere with parameter inversion from SWIW tests; (ii) regard diffusion as the key process, and seek a large-time, asymptotically advection-independent regime in the measured tracer signals (Haggerty et al. 2001), enabling a dispersion-independent characterization of multiple

  1. An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.

    Science.gov (United States)

    Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C

    2013-08-01

    To simulate the consequences of management in dairy herds, the use of individual-based herd models is very useful and has become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions of reproductive performance, particularly concerning calving-to-first-ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phase), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data from the database used to build the model and from the literature to validate the model. Despite comprising a whole series of reproductive steps, the model simulated realistic global reproduction outputs. It reproduced well the overall reproductive performance observed in farms, in terms of both success rate (recalving rate) and reproduction delays (calving interval). This model is intended to be integrated into herd simulation models to test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.
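
    The "succession of discrete stochastic events" can be pictured with a toy event chain; all probabilities and day counts below are illustrative placeholders, not the paper's fitted values:

```python
import random

def days_to_conception(p_conception=0.4, p_embryo_loss=0.1,
                       first_ovulation_day=35, cycle_days=21,
                       max_days=365, rng=None):
    """Toy reproduction event chain: first ovulation, then repeated
    21-day cycles until a conception survives embryonic loss, or
    failure once max_days is exceeded. Returns day of success or None."""
    rng = rng or random.Random()
    day = first_ovulation_day
    while day <= max_days:
        conceived = rng.random() < p_conception
        if conceived and rng.random() >= p_embryo_loss:
            return day
        day += cycle_days
    return None

rng = random.Random(42)
outcomes = [days_to_conception(rng=rng) for _ in range(1000)]
recalving_rate = sum(o is not None for o in outcomes) / len(outcomes)
```

    Running many such individual trajectories is what lets a herd-level model turn per-event probabilities into distributions of calving intervals and recalving rates.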

  2. Variation of the NMVOC speciation in the solvent sector and the sensitivity of modelled tropospheric ozone

    Science.gov (United States)

    von Schneidemesser, E.; Coates, J.; Denier van der Gon, H. A. C.; Visschedijk, A. J. H.; Butler, T. M.

    2016-06-01

    Non-methane volatile organic compounds (NMVOCs) are detrimental to human health owing to the toxicity of many of the NMVOC species, as well as their role in the formation of secondary air pollutants such as tropospheric ozone (O3) and secondary organic aerosol. The speciation and amount of NMVOCs emitted into the troposphere are represented in emission inventories (EIs) for input to chemical transport models that predict air pollutant levels. Much of the information in EIs pertaining to speciation of NMVOCs is likely outdated, but before taking on the task of providing an up-to-date and highly speciated EI, a better understanding of the sensitivity of models to the change in NMVOC input would be highly beneficial. According to the EIs, the solvent sector is the most important sector for NMVOC emissions. Here, the sensitivity of modelled tropospheric O3 to NMVOC emission inventory speciation was investigated by comparing the maximum potential difference in O3 produced using a variety of reported solvent sector EI speciations in an idealized study using a box model. The sensitivity was tested using three chemical mechanisms that describe O3 production chemistry, typically employed for different types of modelling scales - point (MCM v3.2), regional (RADM2), and global (MOZART-4). In the box model simulations, a maximum difference of 15 ppbv (ca. 22% of the mean O3 mixing ratio of 69 ppbv) between the different EI speciations of the solvent sector was calculated. In comparison, for the same EI speciation, but comparing the three different mechanisms, a maximum difference of 6.7 ppbv was observed. Relationships were found between the relative contribution of NMVOC compound classes (alkanes and oxygenated species) in the speciations to the amount of Ox produced in the box model. These results indicate that modelled tropospheric O3 is sensitive to the speciation of NMVOCs as specified by emission inventories, suggesting that detailed updates to the EI speciation

  3. Sensitivity Analysis and Statistical Convergence of a Saltating Particle Model

    CERN Document Server

    Maldonado, S

    2016-01-01

    Saltation models provide considerable insight into near-bed sediment transport. This paper outlines a simple, efficient numerical model of stochastic saltation, which is validated against previously published experimental data on saltation in a channel of nearly horizontal bed. Convergence tests are systematically applied to ensure the model is free from statistical errors emanating from the number of particle hops considered. Two criteria for statistical convergence are derived; according to the first criterion, at least $10^3$ hops appear to be necessary for convergent results, whereas $10^4$ saltations seem to be the minimum required in order to achieve statistical convergence in accordance with the second criterion. Two empirical formulae for lift force are considered: one dependent on the slip (relative) velocity of the particle multiplied by the vertical gradient of the horizontal flow velocity component; the other dependent on the difference between the squares of the slip velocity components at the to...
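
    A simple version of such a convergence criterion: keep sampling hops until the running mean of the statistic stabilises over a window of consecutive samples (this is a generic sketch, not the paper's two derived criteria):

```python
import random

def samples_to_converge(sampler, rel_tol=1e-3, window=100, max_samples=10**6):
    """Number of samples needed before the running mean changes by less
    than rel_tol (relative) over `window` consecutive samples."""
    total = 0.0
    stable = 0
    prev_mean = None
    for n in range(1, max_samples + 1):
        total += sampler()
        mean = total / n
        if prev_mean is not None and abs(mean - prev_mean) <= rel_tol * abs(prev_mean):
            stable += 1
            if stable >= window:
                return n
        else:
            stable = 0
        prev_mean = mean
    return max_samples

rng = random.Random(1)
# Hop lengths drawn from an exponential distribution as a stand-in process.
n_required = samples_to_converge(lambda: rng.expovariate(1.0))
```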

  4. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach for sensitivity and uncertainty assessment for waste management studies has been implemented. First, general contribution analysis is done through a regular interpretation of inventory and impact...

  5. Food Sensitivity in Children with Acute Urticaria in Skin Prick Test: Single Center Experience

    Directory of Open Access Journals (Sweden)

    Hatice Eke Gungor

    2015-11-01

    Aim: Families of children with acute urticaria often think that there is food allergy behind the urticaria and insist on skin tests. In this study, it was aimed to determine whether skin prick tests are necessary in cases presenting with acute urticaria in whom other causes of acute urticaria are excluded. Material and Method: A test panel involving cow milk, egg white, wheat, hazelnut, peanut, soybean, walnut, sesame, and tuna fish antigens was applied to children presenting with acute urticaria between 1 August 2013 and 1 August 2014, in whom other causes of acute urticaria had been excluded and suspected food allergy was reported by parents. Results: Overall, 574 children aged 1-14 years were included in the study. Sensitization against at least one food antigen was detected in 22.3% (128/574) of the patients. This rate was 31.9% among those younger than 3 years and 19.3% in those older than 3 years. Overall, sensitization rates against the food allergens in the panel were as follows: egg white, 7.3%; wheat, 3.3%; cow milk, 2.7%; sesame, 2.8%; hazelnut, 2.4%; soybean, 2.3%; peanut, 1.9%; walnut, 1.6%; tuna fish, 1.6%. In general, the patients' histories were not compatible with the food sensitization detected. Discussion: Sensitization to food allergens is infrequent in children presenting with acute urticaria, particularly among those older than 3 years, and despite parents' impressions, skin prick tests seem to be unnecessary unless a strongly suggestive history is present.

  6. Identification of neutron irradiation induced strain rate sensitivity change using inverse FEM analysis of Charpy test

    Science.gov (United States)

    Haušild, Petr; Materna, Aleš; Kytka, Miloš

    2015-04-01

    A simple methodology for obtaining additional information about the mechanical behaviour of neutron-irradiated WWER 440 reactor pressure vessel steel was developed. Using inverse identification, instrumented Charpy test data records were compared with finite element computations in order to estimate the strain rate sensitivity of 15Ch2MFA steel irradiated with different neutron fluences. The results are interpreted in terms of activation volume change.

  7. Clonidine as a sensitizing agent in the forced swimming test for revealing antidepressant activity.

    OpenAIRE

    1991-01-01

    The forced swimming test (FST) in mice has failed to predict antidepressant activity for drugs having beta adrenoreceptor agonist activity and for serotonin uptake inhibitors. We investigated the potential for clonidine to render the FST sensitive to antidepressants by using a behaviorally inactive dose of this agent (0.1 mg/kg). All antidepressants studied (tricyclics, 5-HT uptake inhibitors, iprindole, mianserin, viloxazine, trazodone) showed either activity at lower doses or activity at pr...

  8. Simple Numerical Model to Simulate Penetration Testing in Unsaturated Soils

    Directory of Open Access Journals (Sweden)

    Jarast S. Pegah

    2016-01-01

    A cone penetration test in unsaturated sand is modelled numerically using the Finite Element Method. A simple elastic-perfectly plastic Mohr-Coulomb constitutive model is modified with an apparent cohesion to incorporate the effect of suction on cone resistance. The Arbitrary Lagrangian-Eulerian (ALE) remeshing algorithm is also implemented to avoid mesh distortion due to the large deformation in the soil around the cone tip. The simulations indicate that cone resistance increased consistently under higher suction or lower degree of saturation. A sensitivity analysis investigating the effect of input soil parameters on cone tip resistance shows that the unsaturated soil condition can be adequately modelled by incorporating the apparent cohesion concept. However, updating the soil stiffness by including a suction-dependent effective stress formula in the Mohr-Coulomb material model does not influence the cone resistance significantly.

  9. Unit testing, model validation, and biological simulation

    Science.gov (United States)

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models. PMID:27635225
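
    The distinction between a unit test (which checks the software) and a model validation test (which checks the model against experimental constraints) can be made concrete with a toy example in the pytest style; the model and bounds below are hypothetical, not OpenWorm code:

```python
def membrane_time_constant(r_m, c_m):
    """Passive membrane time constant tau = R_m * C_m, in seconds."""
    return r_m * c_m

def test_unit_tau_is_product():
    # Unit test: verifies the code computes what it claims to compute.
    assert membrane_time_constant(2.0, 3.0) == 6.0

def test_validation_tau_in_range():
    # Model validation test: verifies the model output falls within
    # (hypothetical) experimentally plausible bounds of 1-10 ms.
    tau = membrane_time_constant(1.0e8, 3.0e-11)  # ohms * farads
    assert 1e-3 < tau < 1e-2

test_unit_tau_is_product()
test_validation_tau_in_range()
```

    A unit test can pass while the validation test fails: the code is correct, but the model it implements disagrees with the biology.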

  10. A Specification Test of Stochastic Diffusion Models

    Institute of Scientific and Technical Information of China (English)

    Shu-lin ZHANG; Zheng-hong WEI; Qiu-xiang BI

    2013-01-01

    In this paper, we propose a hypothesis testing approach to checking model mis-specification in continuous-time stochastic diffusion models. The key idea behind the development of our test statistic is rooted in the generalized information equality in the context of martingale estimating equations. We propose a bootstrap resampling method to implement the proposed diagnostic procedure numerically. Through intensive simulation studies, we show that our approach performs well in terms of type I error control, power improvement and computational efficiency.
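
    Bootstrap resampling implements a test numerically by rebuilding the null distribution of the statistic from resampled data. A generic sketch (a plain percentile bootstrap for a mean, not the paper's martingale-estimating-equation statistic):

```python
import random

def bootstrap_pvalue(data, statistic, n_boot=2000, seed=0):
    """Approximate two-sided p-value for H0: statistic == 0, by
    resampling data centred to satisfy H0 and counting how often the
    resampled statistic is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = statistic(data)
    mean = sum(data) / len(data)
    centred = [x - mean for x in data]  # impose H0 on the resampling pool
    exceed = sum(
        abs(statistic([rng.choice(centred) for _ in centred])) >= abs(observed)
        for _ in range(n_boot)
    )
    return exceed / n_boot

mean_stat = lambda xs: sum(xs) / len(xs)
p_null = bootstrap_pvalue([0.1, -0.2, 0.05, -0.1, 0.15, -0.05], mean_stat)
p_shifted = bootstrap_pvalue([1.1, 0.9, 1.05, 0.95, 1.2, 1.0], mean_stat)
```

    Data consistent with the null give a large p-value; clearly shifted data give a p-value near zero.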

  11. Testing cosmological models with COBE data

    Energy Technology Data Exchange (ETDEWEB)

    Torres, S. [Observatorio Astronomico, Bogotá (Colombia); Centro Internacional de Fisica, Bogotá (Colombia)]; Cayon, L. [Lawrence Berkeley Laboratory and Center for Particle Astrophysics, Berkeley (United States); Martinez-Gonzalez, E.; Sanz, J. L. [Instituto de Fisica, Universidad de Cantabria, Consejo Superior de Investigaciones Cientificas, Santander (Spain)]

    1997-02-01

    The authors test cosmological models with Ω < 1 using the COBE two-year cross-correlation function by means of a maximum-likelihood test with Monte Carlo realizations of several Ω models. Assuming a Harrison-Zel'dovich primordial power spectrum with amplitude ∝ Q, it is found that there is a large region in the (Ω, Q) parameter space that fits the data equally well. They find that the flatness of the universe is not implied by the data. A summary of other analyses of COBE data to constrain the shape of the primordial spectrum is presented.

  12. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data converter design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  13. Relative sensitivities of toxicity test protocols with the amphipods Eohaustorius estuarius and Ampelisca abdita.

    Science.gov (United States)

    Anderson, Brian S; Lowe, Sarah; Phillips, Bryn M; Hunt, John W; Vorhees, Jennifer; Clark, Sara; Tjeerdema, Ronald S

    2008-01-01

    A series of dose-response experiments was conducted to compare the relative sensitivities of toxicity test protocols using the amphipods Ampelisca abdita and Eohaustorius estuarius. A. abdita is one of the dominant infaunal species in the San Francisco Estuary, and E. estuarius is the primary sediment toxicity species used in the San Francisco Estuary Regional Monitoring Program. Experiments were conducted with a formulated sediment spiked with copper, fluoranthene, chlorpyrifos, and the three pyrethroid pesticides permethrin, bifenthrin, and cypermethrin, all chemicals of concern in this Estuary. The results showed that the protocol with A. abdita was more sensitive to fluoranthene and much more sensitive to copper, while E. estuarius was more sensitive to chlorpyrifos, and much more sensitive to the pyrethroid pesticides. These results, considered in conjunction with those from previous spiking studies [Weston, D.P., 1995. Further development of a chronic Ampelisca abdita bioassay as an indicator of sediment toxicity: summary and conclusions. In: Regional Monitoring Program for Trace Substances Annual Report. San Francisco Estuary Institute, Oakland, CA, pp 108-115; DeWitt, T.E., Swartz, R.C., Lamberson, J.O., 1989. Measuring the acute toxicity of estuarine sediments. Environ. Toxicol. Chem. 8: 1035-1048; DeWitt, T.H.E., Pinza, M.R., Niewolny, L.A., Cullinan, V.I., Gruendell, B.D., 1997. Development and evaluation of a standard marine/estuarine chronic sediment toxicity method using Leptocheirus plumulosus. Draft report prepared for the US Environmental Protection Agency, Office of Science and Technology, Washington DC, under contract DE-AC06-76RLO 1830 by Batelle Marine Science Laboratory, Battelle Memorial Institute, Pacific Northwest Division, Richland, WA], suggest that, in general, A. abdita is more sensitive to metals, E. estuarius is more sensitive to pesticides, and both protocols have roughly comparable sensitivities to hydrocarbons. The preponderance of

  14. Experimental Concepts for Testing Seismic Hazard Models

    Science.gov (United States)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  15. 31P-MRS of skeletal muscle is not a sensitive diagnostic test for mitochondrial myopathy

    DEFF Research Database (Denmark)

    Jeppesen, Tina Dysgaard; Quistorff, Bjørn; Wibrand, Flemming

    2007-01-01

    impaired citrate synthase-corrected complex I activity. Resting PCr/P(i) ratio and leg P(i) recovery were lower in MM patients vs. healthy subjects. PCr and ATP production after exercise were similar in patients and healthy subjects. Although the specificity for MM of some (31)P-MRS variables was as high as 100%, the sensitivity was low (0-63%) and the diagnostic strength of (31)P-MRS was inferior to the other diagnostic tests for MM. Thus, (31)P-MRS should not be a routine test for MM, but may be an important research tool.

  16. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) in which different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  17. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    Energy Technology Data Exchange (ETDEWEB)

    Dumazert, Jonathan, E-mail: jonathan.dumazert@cea.fr [CEA, LIST, Laboratoire Capteurs Architectures Electroniques, 91191 Gif-sur-Yvette (France); Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane [CEA, LIST, Laboratoire Capteurs Architectures Electroniques, 91191 Gif-sur-Yvette (France); Méchin, Laurence [CNRS, UCBN, Groupe de Recherche en Informatique, Image, Automatique et Instrumentation de Caen, 14050 Caen (France)

    2015-09-21

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
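
    The advantage of a test built on the Poisson nature of the counting signals can be sketched with a simple correlated-channel alarm; the channel layout and thresholds below are illustrative, not the disclosed algorithm:

```python
import math

def poisson_sf(k, lam):
    """P(N >= k) for N ~ Poisson(lam), via the complement of the CDF."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def correlated_alarm(channel_counts, background_mean, alpha=1e-3):
    """Alarm when counts summed over temporally correlated channels are
    improbably high under a pure-background Poisson hypothesis, i.e.
    when the survival function drops below the false-alarm level."""
    total = sum(channel_counts)
    lam = background_mean * len(channel_counts)
    return poisson_sf(total, lam) < alpha

quiet = correlated_alarm([9, 11, 10, 8], background_mean=10.0)
moving_source = correlated_alarm([25, 28, 30, 26], background_mean=10.0)
```

    Because the null distribution is known exactly rather than estimated empirically, the false-alarm level alpha is set directly, which is the kind of stability the paper's benchmark examines.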

  18. [Thurstone model application to difference sensory tests].

    Science.gov (United States)

    Angulo, Ofelia; O'Mahony, Michael

    2009-12-01

    Part of understanding why judges perform better on some difference tests than others requires an understanding of how information coming from the mouth to the brain is processed. For some tests it is processed more efficiently than others. This is described by what has been called Thurstonian modeling. This brief review introduces the concepts and ideas involved in Thurstonian modeling as applied to sensory difference measurement. It summarizes the literature concerned with the theorizing and confirmation of Thurstonian models. It introduces the important concept of stimulus variability and the fundamental measure of sensory difference: d'. It indicates how the paradox of discriminatory non-discriminators, which had puzzled researchers for years, can be simply explained using the model. It considers how memory effects and the complex interactions in the mouth can reduce d' by increasing the variance of sensory distributions.
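
    The central computation in Thurstonian modelling is converting an observed proportion correct into the sensory distance d'. For the 2-AFC protocol the standard relation is d' = sqrt(2) * z(pc); a minimal sketch:

```python
from statistics import NormalDist

def dprime_2afc(prop_correct):
    """Thurstonian d' for the 2-AFC difference test: d' = sqrt(2)*z(pc),
    where z is the inverse standard normal CDF."""
    return 2 ** 0.5 * NormalDist().inv_cdf(prop_correct)

# A judge scoring 76% correct in 2-AFC corresponds to d' of about 1.0.
# The same proportion correct in a triangle test would imply a larger d',
# which is how the model explains 'discriminatory non-discriminators'.
d = dprime_2afc(0.76)
```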

  19. Sensitivity during the forced swim test is a key factor in evaluating the antidepressant effects of abscisic acid in mice.

    Science.gov (United States)

    Qi, Cong-Cong; Shu, Yu-Mian; Chen, Fang-Han; Ding, Yu-Qiang; Zhou, Jiang-Ning

    2016-03-01

    Abscisic acid (ABA), a crucial phytohormone, is distributed in the brains of mammals and has been shown to have antidepressant effects in the chronic unpredictable mild stress test. The forced swim test (FST) is another animal model that can be used to assess antidepressant-like behavior in rodents. Here, we report that the antidepressant effects of ABA are associated with sensitivities to the FST in mice. Based on mean immobility in the 5-min forced swim pre-test, ICR mice were divided into short immobility mice (SIM) and long immobility mice (LIM) substrains. FST was carried out 8 days after drug administration. Learned helplessness, as shown by increased immobility, was only observed in SIM substrain and could be prevented by an 8-day ABA treatment. Our results show that ABA has antidepressant effects in SIM substrain and suggest that mice with learned helplessness might be more suitable for screening potential antidepressant drugs.

  20. Spatial sensitivity analysis of remote sensing snow cover fraction data in a distributed hydrological model

    Science.gov (United States)

    Berezowski, Tomasz; Chormański, Jarosław; Nossent, Jiri; Batelaan, Okke

    2014-05-01

    Distributed hydrological models enhance the analysis and explanation of environmental processes. As more spatial input data and time series become available, more analysis is required of the sensitivity of the simulations to these data. Most research so far has focused on the sensitivity of distributed hydrological models to precipitation data. However, such results cannot be compared until a universal approach to quantify the sensitivity of a model to spatial data is available. Snow cover is among the most frequently tested and used remote sensing data for distributed models. Snow cover fraction (SCF) remote sensing products are readily available, e.g. the MODIS snow cover product MOD10A1 (daily snow cover fraction at 500 m spatial resolution). In this work a spatial sensitivity analysis (SA) of remotely sensed SCF from MOD10A1 was conducted with the distributed WetSpa model. The aim is to investigate whether the WetSpa model is differently subjected to SCF uncertainty in different areas of the model domain. The analysis was extended to look not only at SA quantities but also to relate them to the physical parameters and processes in the study area. The study area is the Biebrza River catchment, Poland, a semi-natural catchment subject to a spring snowmelt regime. Hydrological simulations are performed with the distributed WetSpa model over a simulation period of 2 hydrological years. For the SA the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm is used, with a set of different response functions on a regular 4 x 4 km grid. The results show that the spatial patterns of sensitivity can be easily interpreted by the co-occurrence of different landscape features. Moreover, the spatial patterns of the SA results are related to the WetSpa spatial parameters and to different physical processes.
    Based on the study results, it is clear that the spatial approach to SA can be performed with the proposed algorithm and that the MOD10A1 SCF is spatially sensitive in
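
The LH-OAT algorithm combines Latin-Hypercube sampling with one-factor-at-a-time perturbations around each sample point. A minimal sketch after van Griensven's formulation, using an invented toy response function rather than the WetSpa model:

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """Stratified Latin-Hypercube sample in the unit hypercube."""
    cols = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return [[cols[j][i] for j in range(n_params)] for i in range(n_samples)]

def lh_oat(model, n_params, n_samples=20, frac=0.05, seed=1):
    """Mean LH-OAT partial effect per parameter: relative output change per
    one-at-a-time fractional perturbation, averaged over the LH points."""
    rng = random.Random(seed)
    effects = [0.0] * n_params
    for x in latin_hypercube(n_samples, n_params, rng):
        y0 = model(x)
        for j in range(n_params):
            xj = list(x)
            xj[j] *= 1 + frac
            y1 = model(xj)
            mean_y = (abs(y0) + abs(y1)) / 2 or 1.0
            effects[j] += abs(y1 - y0) / mean_y / frac
    return [e / n_samples for e in effects]

# Toy response function: parameter 0 dominates, parameter 2 is inert.
effects = lh_oat(lambda x: 10 * x[0] + x[1], 3)
```

In the study the response functions are evaluated per 4 x 4 km cell, which turns scalar effects like these into the sensitivity maps discussed above.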

  1. Hypersensitivity testing for Aspergillus fumigatus IgE is significantly more sensitive than testing for Aspergillus niger IgE.

    Science.gov (United States)

    Selvaggi, Thomas A; Walco, Jeremy P; Parikh, Sujal; Walco, Gary A

    2012-02-01

    We sought to determine if sufficient redundancy exists between specific IgE testing for Aspergillus fumigatus and Aspergillus niger to eliminate one of the assays in determining Aspergillus hypersensitivity. We reviewed regional laboratory results comparing A fumigatus-specific IgE with A niger-specific IgE using the Pharmacia UniCAP system (Pharmacia, Kalamazoo, MI). By using the Fisher exact test as an index of concordance among paired results, we showed a significant difference between 109 paired samples for the presence of specific IgE to A fumigatus and A niger: samples positive for A fumigatus-specific IgE were frequently negative for A niger, whereas no specimen was positive for A niger and negative for A fumigatus. We conclude that A fumigatus-specific IgE is sufficient to detect Aspergillus hypersensitivity. The assay for A niger-specific IgE is redundant, less sensitive, and unnecessary if the assay for specific IgE for A fumigatus is performed.

  2. An evaluation of the relative sensitivities of the venereal disease research laboratory test and the Treponema pallidum particle agglutination test among patients diagnosed with primary syphilis.

    Science.gov (United States)

    Creegan, Linda; Bauer, Heidi M; Samuel, Michael C; Klausner, Jeffrey; Liska, Sally; Bolan, Gail

    2007-12-01

    Because definitive methods for diagnosing primary syphilis are limited, it is important to optimize the sensitivity of serodiagnosis. To determine the most sensitive testing approach to the diagnosis of primary syphilis, using the commonly available serologic tests: the Venereal Disease Research Laboratory (VDRL) test and the Treponema pallidum particle agglutination (TP-PA) test. Sensitivities of 2 serologic testing strategies for primary syphilis were compared among 106 darkfield-confirmed cases treated in San Francisco from January 2002 through December 2004. The sensitivity of the diagnostic strategy using VDRL confirmed by TP-PA was 71% (95% CI, 61%-79%). Substituting Rapid Plasma Reagin test for VDRL in a subset of 51 patients produced the same sensitivity (71%; 95% CI, 56%-83%). The sensitivity of TP-PA as the first-line diagnostic test was 86% (95% CI, 78%-92%). The sensitivity of the former approach was significantly lower among HIV-positive patients, compared with HIV-negative patients (55% vs. 77%, P = 0.05). The TP-PA test as the first-line diagnostic test yielded higher sensitivity for primary syphilis than did the use of the currently recommended strategy.
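
The confidence intervals quoted above are binomial intervals on observed proportions. A minimal sketch using the Wilson score interval; the count of 75 positives out of 106 cases is inferred from the rounded 71% for illustration and is not stated in the abstract:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(75, 106)  # VDRL-first strategy, ~71% observed sensitivity
```

The interval reproduces the reported 61%-79% range, and the wider interval for the 51-patient RPR subset follows from the same formula with smaller n.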

  3. Electroweak tests of the Standard Model

    CERN Document Server

    Erler, Jens

    2012-01-01

    Electroweak precision tests of the Standard Model of the fundamental interactions are reviewed ranging from the lowest to the highest energy experiments. Results from global fits are presented with particular emphasis on the extraction of fundamental parameters such as the Fermi constant, the strong coupling constant, the electroweak mixing angle, and the mass of the Higgs boson. Constraints on physics beyond the Standard Model are also discussed.

  4. Tests of the Electroweak Standard Model

    CERN Document Server

    Erler, Jens

    2012-01-01

    Electroweak precision tests of the Standard Model of the fundamental interactions are reviewed ranging from the lowest to the highest energy experiments. Results from global fits are presented with particular emphasis on the extraction of fundamental parameters such as the Fermi constant, the strong coupling constant, the electroweak mixing angle, and the mass of the Higgs boson. Constraints on physics beyond the Standard Model are also discussed.

  5. Testing mechanistic models of growth in insects

    OpenAIRE

    Maino, James L.; Kearney, Michael R.

    2015-01-01

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory from that of many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compare...
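
The contrast the abstract draws can be illustrated with a toy energy-budget integration (a hedged sketch under invented coefficients, not the authors' model): if assimilation scales with surface area (mass^(2/3)) the trajectory saturates, whereas if specific (per-mass) assimilation keeps pace with mass the trajectory stays near-exponential:

```python
def grow(days, assim_exponent, a=0.3, b=0.05, m0=1.0, dt=0.01):
    """Euler integration of dm/dt = a*m**assim_exponent - b*m (toy model)."""
    m = m0
    for _ in range(int(days / dt)):
        m += (a * m ** assim_exponent - b * m) * dt
    return m

surface_limited = grow(30, 2 / 3)  # von Bertalanffy-like, saturating
mass_scaled = grow(30, 1.0)        # assimilation keeps pace: exponential
```

After 30 toy "days" the mass-scaled trajectory is orders of magnitude ahead of the surface-limited one, which is the qualitative signature the paper tests against real growth data.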

  6. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
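
A local one-at-a-time sensitivity analysis of the kind used to identify the influential parameters can be sketched as a relative-change index around a baseline parameter set. The toy factor-of-safety function and parameter names below are invented for illustration; they are not the study's coupled hydrological-geomechanical model:

```python
def local_oat(model, baseline, rel_step=0.1):
    """Local one-at-a-time sensitivity: relative change in output per relative
    change in each parameter, holding the others at their baseline values."""
    y0 = model(baseline)
    sens = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1 + rel_step)
        sens[name] = ((model(perturbed) - y0) / y0) / rel_step
    return sens

# Toy factor-of-safety stand-in: rises with cohesion and friction, falls as
# the soil saturates.
fos = lambda p: (p["cohesion"] + p["tan_phi"] * 100.0) / (50.0 + 40.0 * p["saturation"])
s = local_oat(fos, {"cohesion": 10.0, "tan_phi": 0.6, "saturation": 0.5})
```

Parameters with large positive or negative indices are the ones worth sampling systematically in the subsequent calibration step.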

  7. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult....... This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  8. Evaluation of 5-fluorouracil applicability by multi-point collagen gel droplet embedded drug sensitivity test.

    Science.gov (United States)

    Ochiai, Takumi; Nishimura, Kazuhiko; Noguchi, Hajime; Kitajima, Masayuki; Tsuruoka, Yuko; Takahashi, Yuka

    2005-07-01

    The drug sensitivity of tumor cells is one of the key issues in exploring individualized therapy for cancer patients. One such method is the in vitro anticancer drug sensitivity test, which is generally based on a single drug concentration and contact time. In this study, the 5-fluorouracil (5-FU) sensitivity of cancer cells from colorectal cancer patients was evaluated by the collagen gel droplet embedded drug sensitivity test (CD-DST) under multiple drug concentrations and contact durations. Cancer cells from 19 patients were measured for 9 drug concentration/contact time conditions (cohort 1) and from 34 patients for 2 drug concentration/contact time conditions (cohort 2) using CD-DST. There was no significant difference in growth inhibition rate between 1.0 microg/ml for 24 h and 0.2 microg/ml for 120 h, which give the same area under the curve (AUC) (p=0.832), in all 53 patients (cohorts 1 and 2). In cohort 1, the 9 conditions were successfully measured in 18 of 19 patients (94.7%). Drug concentration and growth inhibition rate approximated a logarithmic curve for all 3 contact times, and 50% inhibitory concentration (IC50) values at the 3 contact times could be calculated for these 18 patients. Growth inhibition rate and AUC also approximated a logarithmic curve. These values varied by several orders of magnitude among patients. The in vitro antitumor effect of 5-FU depended on AUC in colorectal tumors, which may support the use of continuous infusion or oral therapy that generates a significant AUC with manageable toxicity. Some patients demonstrating low 5-FU sensitivity may not be indicated for 5-FU-based therapy, and non-5-FU therapy should be explored for them.
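
Two small calculations underpin the comparison: the AUC of a constant-concentration exposure is concentration times time, and inverting a fitted logarithmic dose-response gives the IC50. A minimal sketch; the fit coefficients in the example are hypothetical, not from the paper:

```python
from math import exp

def auc(conc_ug_per_ml, hours):
    """AUC of a constant-concentration exposure (ug*h/ml), as compared in CD-DST."""
    return conc_ug_per_ml * hours

def ic50_from_log_fit(a, b):
    """Invert a fitted inhibition(%) = a*ln(C) + b for the 50% inhibitory
    concentration C."""
    return exp((50.0 - b) / a)

equal_aucs = auc(1.0, 24) == auc(0.2, 120)  # both 24 ug*h/ml, as in the study
```

The equal-AUC pair is exactly the condition under which the study found no significant difference in growth inhibition.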

  9. Testing the sensitivity and specificity of the fluorescence microscope (Cyscope®) for malaria diagnosis

    Directory of Open Access Journals (Sweden)

    Mudathir Mahmoud A

    2010-03-01

    Full Text Available Abstract Background Early diagnosis and treatment of malaria are necessary components in the control of malaria. The gold standard light microscopy technique has high sensitivity, but is a relatively time-consuming procedure, especially during epidemics and in areas of high endemicity. This study attempted to test the sensitivity and specificity of a new diagnostic tool - the Cyscope® fluorescence microscope, which is based on the use of Plasmodium nucleic acid-specific fluorescent dyes to facilitate detection of the parasites even in low-parasitaemia conditions due to the contrast with the background. Methods In this study, 293 febrile patients above the age of 18 years attending the malaria treatment centre in Sinnar State (Sudan) were interviewed using a structured questionnaire. Finger-prick blood samples were also collected from the participants to be tested for malaria using the hospital's microscope, the reference laboratory microscope, as well as the Cyscope® microscope. The results of the investigations were then used to calculate the sensitivity, specificity, and positive and negative predictive values of the Cyscope® microscope in reference to gold standard light microscopy. Results The sensitivity was found to be 98.2% (95% CI: 90.6%-100%); specificity = 98.3% (95% CI: 95.7%-99.5%); positive predictive value = 93.3% (95% CI: 83.8%-98.2%); and negative predictive value = 99.6% (95% CI: 97.6%-100%). Conclusions In conclusion, the Cyscope® microscope was found to be sensitive, specific and able to provide rapid, reliable results in less than 10 minutes. The Cyscope® microscope should be considered as a viable, cheaper and time-saving option for malaria diagnosis, especially in areas where Plasmodium falciparum is the predominant parasite.
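
All four reported figures derive from a single 2x2 table against the gold standard. A minimal sketch; the cell counts below are reconstructed to be consistent with the reported percentages and n = 293, since the abstract does not give the raw table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts consistent with the reported figures: 56 true positives,
# 4 false positives, 1 false negative, 232 true negatives (sums to 293).
m = diagnostic_metrics(tp=56, fp=4, fn=1, tn=232)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on prevalence in the tested population, which is why they differ so much here.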

  10. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10{sup {minus}2} to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), and gas infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling employs well-defined and validated mechanisms for the CO/H{sub 2}/oxidant systems.

  11. Proton Exchange Membrane Fuel Cell Engineering Model Powerplant. Test Report: Benchmark Tests in Three Spatial Orientations

    Science.gov (United States)

    Loyselle, Patricia; Prokopius, Kevin

    2011-01-01

    Proton exchange membrane (PEM) fuel cell technology is the leading candidate to replace the aging alkaline fuel cell technology, currently used on the Shuttle, for future space missions. This test effort marks the final phase of a 5-yr development program that began under the Second Generation Reusable Launch Vehicle (RLV) Program, transitioned into the Next Generation Launch Technologies (NGLT) Program, and continued under Constellation Systems in the Exploration Technology Development Program. Initially, the engineering model (EM) powerplant was evaluated with respect to its performance as compared to acceptance tests carried out at the manufacturer. This was to determine the sensitivity of the powerplant performance to changes in test environment. In addition, a series of tests were performed with the powerplant in the original standard orientation. This report details the continuing EM benchmark test results in three spatial orientations as well as extended duration testing in the mission profile test. The results from these tests verify the applicability of PEM fuel cells for future NASA missions. The specifics of these different tests are described in the following sections.

  12. Timing and presence of an attachment person affect sensitivity of aggression tests in shelter dogs.

    Science.gov (United States)

    Kis, A; Klausz, B; Persa, E; Miklósi, Á; Gácsi, M

    2014-02-22

    Different test series have been developed and used to measure behaviour in shelter dogs in order to reveal individuals not suitable for re-homing due to their aggressive tendencies. However, behavioural tests previously validated on pet dogs seem to have relatively low predictability in the case of shelter dogs. Here, we investigate the potential effects of (1) timing of the behaviour testing and (2) presence of a human companion on dogs' aggressive behaviour. In Study I, shelter dogs (n=25) showed more aggression when tested in a short test series two weeks after they had been placed in the shelter compared to their responses in the same test performed 1-2 days after arrival. In Study II, the occurrence of aggressive behaviour was more probable in pet dogs (n=50) in the presence than in the absence of their passive owner. We conclude that the sensitivity of aggression tests for shelter dogs can be increased by running the test in the presence of a caretaker, and after some period of acclimatisation to the new environment. This methodology could also provide better chances for successful adoption.

  13. Sensitivity Analysis to Select the Most Influential Risk Factors in a Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Jassim N. Hussain

    2008-01-01

    Full Text Available The traditional variable selection methods for survival data depend on iterative procedures, and controlling this process requires tuning parameters that are problematic and time-consuming, especially if the models are complex and have a large number of risk factors. In this paper, we propose a new method based on global sensitivity analysis (GSA) to select the most influential risk factors. This contributes to simplification of the logistic regression model by excluding the irrelevant risk factors, thus eliminating the need to fit and evaluate a large number of models. Data from medical trials are suggested as a way to test the efficiency and capability of this method and as a way to simplify the model. This leads to construction of an appropriate model. The proposed method ranks the risk factors according to their importance.
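
Factor ranking of this general kind can be sketched with a scrambling experiment on a fitted logistic model: factors whose perturbation changes the predicted probabilities the most rank highest. This permutation-style screening is an illustration in the spirit of GSA, not the paper's exact estimator, and the coefficients and data are invented:

```python
import math
import random

def predict(coefs, intercept, rows):
    """Predicted probabilities of a fitted logistic regression model."""
    out = []
    for x in rows:
        z = intercept + sum(c * v for c, v in zip(coefs, x))
        out.append(1.0 / (1.0 + math.exp(-z)))
    return out

def permutation_importance(coefs, intercept, rows, seed=0):
    """Rank risk factors by how much scrambling each one perturbs the
    predicted probabilities (mean absolute change)."""
    rng = random.Random(seed)
    base = predict(coefs, intercept, rows)
    scores = []
    for j in range(len(coefs)):
        col = [r[j] for r in rows]
        rng.shuffle(col)
        scrambled = [list(r) for r in rows]
        for r, v in zip(scrambled, col):
            r[j] = v
        perm = predict(coefs, intercept, scrambled)
        scores.append(sum(abs(a - b) for a, b in zip(base, perm)) / len(rows))
    return scores

rng = random.Random(42)
rows = [[rng.random(), rng.random(), rng.random()] for _ in range(500)]
scores = permutation_importance([3.0, 0.5, 0.0], -1.5, rows)  # factor 0 dominates
```

A risk factor whose score is near zero (here the third, with a zero coefficient) is a candidate for exclusion from the simplified model.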

  14. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the modelled ice cover, to assess its sensitivity to some model parameters, and to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05 m - 0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness with clear sea ice and snow albedo decreased by 0.05 relative to the basic variant shows an ice thickness reduction in the range of 0.2 m to 0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there was a reduction of ice thickness of up to 1 m; however, there is also an area of some increase of the ice layer, mostly in a range up to 0.2 m (Beaufort Sea). A 0.05 decrease of sea ice snow albedo leads to a reduction of average ice thickness by approximately 0.2 m, a value that depends only slightly on season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced by 0.2 m to 0.35 m, with small seasonal changes in this value. The numerical experiment results show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters.

  15. A Cytoplasmic Form of Gaussia luciferase Provides a Highly Sensitive Test for Cytotoxicity

    Science.gov (United States)

    Tsuji, Saori; Ohbayashi, Tetsuya; Yamakage, Kohji; Oshimura, Mitsuo; Tada, Masako

    2016-01-01

    The elimination of unfavorable chemicals from our environment and commercial products requires a sensitive and high-throughput in vitro assay system for drug-induced hepatotoxicity. Some previous methods for evaluating hepatotoxicity measure the amounts of cytoplasmic enzymes secreted from damaged cells into the peripheral blood or culture medium. However, most of these enzymes are proteolytically digested in the extracellular milieu, dramatically reducing the sensitivity and reliability of such assays. Other methods measure the decrease in cell viability following exposure to a compound, but such endpoint assays are often confounded by proliferation of surviving cells that replace dead or damaged cells. In this study, with the goal of preventing false-negative diagnoses, we developed a sensitive luminometric cytotoxicity test using a stable form of luciferase. Specifically, we converted Gaussia luciferase (G-Luc) from an actively secreted form to a cytoplasmic form by adding an ER-retention signal composed of the four amino acids KDEL. The bioluminescent signal was >30-fold higher in transgenic HepG2 human hepatoblastoma cells expressing G-Luc+KDEL than in cells expressing wild-type G-Luc. Moreover, G-Luc+KDEL secreted from damaged cells was stable in culture medium after 24 hr at 37°C. We evaluated the accuracy of our cytotoxicity test by subjecting identical samples obtained from chemically treated transgenic HepG2 cells to the G-Luc+KDEL assay and luminometric analyses based on secretion of endogenous adenylate kinase or cellular ATP level. Time-dependent accumulation of G-Luc+KDEL in the medium increased the sensitivity of our assay above those of existing tests. Our findings demonstrate that strong and stable luminescence of G-Luc+KDEL in human hepatocyte-like cells, which have high levels of metabolic activity, make it suitable for use in a high-throughput screening system for monitoring time-dependent cytotoxicity in a limited number of cells.

  16. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  17. Modeling nonignorable missing data in speeded tests

    NARCIS (Netherlands)

    Glas, Cees A.W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models.

  18. Quantitative patch and repeated open application testing in methyldibromo glutaronitrile-sensitive patients.

    Science.gov (United States)

    Schnuch, A; Kelterer, D; Bauer, A; Schuster, Ch; Aberer, W; Mahler, V; Katzer, K; Rakoski, J; Jappe, U; Krautheim, A; Bircher, A; Koch, P; Worm, M; Löffler, H; Hillen, U; Frosch, P J; Uter, W

    2005-04-01

    Contact allergy to methyldibromo glutaronitrile (MDBGN), often combined with phenoxyethanol (PE) (e.g., Euxyl K 400), increased throughout the 1990s in Europe. Consequently, in 2003, the European Commission banned its use in leave-on products, where its use concentration was considered too high and the non-sensitizing use concentration as yet unknown. The 2 objectives of the study are (a) to find a maximum non-eliciting concentration in a leave-on product in MDBGN/PE-sensitized patients, which could possibly also be considered safe regarding induction and (b) to find the best patch test concentration for MDBGN. We, therefore, performed a use-related test (ROAT) in patients sensitized to MDBGN/PE (n = 39) with 3 concentrations of MDBGN/PE (50, 100 and 250 p.p.m. MDBGN, respectively). A subset of these patients (n = 24) was later patch-tested with various concentrations (0.1, 0.2, 0.3 and 0.5% MDBGN, respectively). 15 patients (38%, 95% confidence interval (CI) = 23-55%) had a negative and 24 (62%; 95% CI = 45-77%) a positive overall repeated open application test (ROAT) result. 13 reacted to the lowest (50 p.p.m.), 8 to the middle (100 p.p.m.) and 3 to the highest concentration (250 p.p.m.) only. In those 13 reacting to the lowest ROAT concentration, dermatitis developed within a few days (1-7). The strength of the initial and the confirmatory patch test result, respectively, and the outcome of the ROAT were positively associated. Of the 24 patients with a use and confirmatory patch test, 15 reacted to 0.1% MDBGN, 16 to 0.2%, 17 to 0.3% and 22 to 0.5%. With the patch test concentration of 0.5%, the number of ROAT-negative patients but patch-test-positive patients increases considerably, particularly due to + reactions. A maximum sensitivity of 94% (95% CI = 70-100%) is reached with a patch test concentration of 0.2%, and is not further improved by increasing the concentration. 
    However, the specificity decreases dramatically from 88% (95% CI = 47-100%) with 0.2% to a

  19. Mechanism test bed. Flexible body model report

    Science.gov (United States)

    Compton, Jimmy

    1991-01-01

    The Space Station Mechanism Test Bed is a six degree-of-freedom motion simulation facility used to evaluate docking and berthing hardware mechanisms. A generalized rigid body math model was developed which allowed the computation of vehicle relative motion in six DOF due to forces and moments from mechanism contact, attitude control systems, and gravity. No vehicle size limitations were imposed in the model. The equations of motion were based on Hill's equations for translational motion with respect to a nominal circular earth orbit and Newton-Euler equations for rotational motion. This rigid body model and supporting software were being refined.
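
Hill's equations for translational relative motion about a nominal circular orbit (the Clohessy-Wiltshire equations) can be integrated directly. A minimal sketch of one integration step, not the test bed's actual software; the orbit rate and initial offset are illustrative:

```python
def cw_step(state, n, dt, force=(0.0, 0.0, 0.0)):
    """One symplectic-Euler step of Hill's (Clohessy-Wiltshire) equations for
    relative motion about a circular orbit; n is the orbital mean motion.
    Axes: x radial (outward), y along-track, z cross-track."""
    x, y, z, vx, vy, vz = state
    ax = 3 * n * n * x + 2 * n * vy + force[0]
    ay = -2 * n * vx + force[1]
    az = -n * n * z + force[2]
    vx, vy, vz = vx + ax * dt, vy + ay * dt, vz + az * dt
    return (x + vx * dt, y + vy * dt, z + vz * dt, vx, vy, vz)

n = 0.0011  # rad/s, roughly a 95-minute low Earth orbit
state = (100.0, 0.0, 0.0, 0.0, 0.0, 0.0)  # 100 m radial offset, at rest
for _ in range(600):                       # simulate 10 minutes at 1 s steps
    state = cw_step(state, n, 1.0)
```

A vehicle released at rest with a radial offset drifts outward and backward along-track, which is why docking simulations must couple these equations with the mechanism contact and attitude control forces.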

  20. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    Science.gov (United States)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (˜ 100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage
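
Elementary Effects screening of the kind referred to above perturbs one parameter at a time along random trajectories and averages the absolute effects. A minimal sketch of the mu* screening measure; the toy response function is invented, not NOAH-MP:

```python
import random

def morris_mu_star(model, n_params, n_traj=10, delta=0.25, seed=3):
    """mu* (mean absolute elementary effect) per parameter, the screening
    measure behind Elementary Effects methods."""
    rng = random.Random(seed)
    totals = [0.0] * n_params
    for _ in range(n_traj):
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        y = model(x)
        order = list(range(n_params))
        rng.shuffle(order)
        for j in order:  # move one coordinate at a time along the trajectory
            x[j] += delta
            y_new = model(x)
            totals[j] += abs(y_new - y) / delta
            y = y_new
    return [t / n_traj for t in totals]

# Toy response: a strong parameter, a weak one, and one acting via interaction.
mu = morris_mu_star(lambda p: 5 * p[0] + 0.5 * p[1] + 4 * p[0] * p[2], 3)
```

Parameters with small mu* (here the second) can be fixed before the expensive Sobol-style analysis or calibration, which is the cost saving the abstract argues for.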

  1. Testing models of tree canopy structure

    Energy Technology Data Exchange (ETDEWEB)

    Martens, S.N. (Los Alamos National Laboratory, NM (United States))

    1994-06-01

    Models of tree canopy structure are difficult to test because of a lack of data which are suitably detailed. Previously, I have made three-dimensional reconstructions of individual trees from measured data. These reconstructions have been used to test assumptions about the dispersion of canopy elements in two- and three-dimensional space. Lacunarity analysis has also been used to describe the texture of the reconstructed canopies. Further tests have been made regarding models of the nature of tree branching structures. Results using probability distribution functions for branching measured from real trees show that branching in Juglans is not Markovian. Specific constraints or rules are necessary to achieve simulations of branching structure which are faithful to the originally measured trees.

  2. Sensitivity Test of 1-D Analysis for MSLB in 3{sup rd} ATLAS Domestic Standard Problem (DSP-03)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J.; Huh, J. S.; Park, Y. S.; Bae, B. U.; Kang, K. H.; Choi, K. Y. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, J. J. [KEPCO E and C, Daejeon (Korea, Republic of); Lee, J. S. [KINS, Daejeon (Korea, Republic of); Son, H. H. [Hanyang University, Seoul (Korea, Republic of); Ha, T. W. [Pusan National University, Busan (Korea, Republic of); Kim, J. I. [DOOSAN Heavy Industry, Hwasung (Korea, Republic of)

    2015-10-15

    This exercise aims at effective utilization of the integral effect database obtained from ATLAS, establishment of a cooperation framework among the domestic nuclear industry, better understanding of thermal-hydraulic phenomena, and investigation of the possible limitations of the existing best-estimate safety analysis codes. For the DSP exercise, a 100% guillotine break of the steam line without LOOP at zero power condition (- 8%) was selected. In this paper, the sensitivity testing of the 1-D analysis for the SLB transient experiment is described. Six domestic organizations (KEPCO E and C, KINS, Hanyang University, Pusan National University, DOOSAN Heavy Industry, and KAERI) participated and performed the 1-D analysis using MARS-KS in an open calculation environment. This group modified the input decks (node modification, combination of models, etc.) to predict the thermal-hydraulic phenomena in the ATLAS system. The group also analyzed the sensitivity to these modifications in order to suggest guidelines for users who prepare input decks. These sensitivity tests of the 1-D analysis for the SLB transient experiment were carried out as an activity of DSP-03.

  3. Non-animal assessment of skin sensitization hazard: Is an integrated testing strategy needed, and if so what should be integrated?

    Science.gov (United States)

    Roberts, David W; Patlewicz, Grace

    2017-05-24

    There is an expectation that to meet regulatory requirements, and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted for regulatory use: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment, with the "two out of three" approach attracting much attention. Here, a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and h-CLAT as follows: (1) perform DPRA - if positive, classify as sensitizing; and (2) if negative, perform h-CLAT - a positive outcome denotes a sensitizer, a negative outcome a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
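    The two decision strategies compared in the record can be written as simple rules. This is an illustrative sketch of the logic only (True means a positive assay result), not the authors' statistical evaluation:

```python
def two_out_of_three(dpra, keratinosens, hclat):
    """Majority vote across the three assays: sensitizer if at least two are positive."""
    return sum([dpra, keratinosens, hclat]) >= 2

def sequential_dpra_hclat(dpra, hclat):
    """Tiered strategy: DPRA first; h-CLAT is consulted only if DPRA is negative."""
    return True if dpra else hclat

# A chemical negative in DPRA and KeratinoSens but positive in h-CLAT:
# the majority vote calls it a non-sensitizer, the tiered approach a sensitizer.
vote = two_out_of_three(False, False, True)        # False (non-sensitizer)
tiered = sequential_dpra_hclat(False, True)        # True (sensitizer)
```

    The example shows why the two approaches can disagree on negatives, which is where the reported non-sensitizer accuracies (93% vs. 79% against human data) differ.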

  4. A novel hot-plate test sensitive to hyperalgesic stimuli and non-opioid analgesics

    Directory of Open Access Journals (Sweden)

    T.R. Lavich

    2005-03-01

    It is widely accepted that the classical constant-temperature hot-plate test is insensitive to cyclooxygenase inhibitors. In the current study, we developed a variant of the hot-plate test procedure (modified hot-plate (MHP) test) to measure inflammatory nociception in freely moving rats and mice. Following left and right hind paw stimulation with a phlogogen and vehicle, respectively, the animals were placed individually on a hot-plate surface at 51°C and the withdrawal latency for each paw was determined simultaneously in measurements performed at 15, 60, 180, and 360 min post-challenge. Plantar stimulation of rats (250 and 500 µg/paw) and mice (125-500 µg/paw) with carrageenan led to a rapid hyperalgesic response of the ipsilateral paw that reached a plateau from 15 to 360 min after challenge. Pretreatment with indomethacin (4 mg/kg, ip) inhibited the phenomenon at all the times analyzed. Similarly, plantar stimulation of rats and mice with prostaglandin E2 (0.5 and 1 µg/paw) also resulted in rapid hyperalgesia which was first detected 15 min post-challenge. Finally, we observed that the MHP test was more sensitive than the classical Hargreaves' test, being able to detect about 4- and 10-fold lower doses of prostaglandin E2 and carrageenan, respectively. In conclusion, the MHP test is a simple and sensitive method for detecting peripheral hyperalgesia and analgesia in rats and mice. This test represents a low-cost alternative for the study of inflammatory pain in freely moving animals.

  5. A novel hot-plate test sensitive to hyperalgesic stimuli and non-opioid analgesics.

    Science.gov (United States)

    Lavich, T R; Cordeiro, R S B; Silva, P M R; Martins, M A

    2005-03-01

    It is widely accepted that the classical constant-temperature hot-plate test is insensitive to cyclooxygenase inhibitors. In the current study, we developed a variant of the hot-plate test procedure (modified hot-plate (MHP) test) to measure inflammatory nociception in freely moving rats and mice. Following left and right hind paw stimulation with a phlogogen and vehicle, respectively, the animals were placed individually on a hot-plate surface at 51 degrees C and the withdrawal latency for each paw was determined simultaneously in measurements performed at 15, 60, 180, and 360 min post-challenge. Plantar stimulation of rats (250 and 500 microg/paw) and mice (125-500 microg/paw) with carrageenan led to a rapid hyperalgesic response of the ipsilateral paw that reached a plateau from 15 to 360 min after challenge. Pretreatment with indomethacin (4 mg/kg, i.p.) inhibited the phenomenon at all the times analyzed. Similarly, plantar stimulation of rats and mice with prostaglandin E2 (0.5 and 1 microg/paw) also resulted in rapid hyperalgesia which was first detected 15 min post-challenge. Finally, we observed that the MHP test was more sensitive than the classical Hargreaves' test, being able to detect about 4- and 10-fold lower doses of prostaglandin E2 and carrageenan, respectively. In conclusion, the MHP test is a simple and sensitive method for detecting peripheral hyperalgesia and analgesia in rats and mice. This test represents a low-cost alternative for the study of inflammatory pain in freely moving animals.

  6. Open-circuit sensitivity model based on empirical parameters for a capacitive-type MEMS acoustic sensor

    Science.gov (United States)

    Lee, Jaewoo; Jeon, J. H.; Je, C. H.; Lee, S. Q.; Yang, W. S.; Lee, S.-G.

    2016-03-01

    An empirical-based open-circuit sensitivity model for a capacitive-type MEMS acoustic sensor is presented. To intuitively evaluate the characteristic of the open-circuit sensitivity, the empirical-based model is proposed and analysed by using a lumped spring-mass model and a pad test sample without a parallel plate capacitor for the parasitic capacitance. The model is composed of three different parameter groups: empirical, theoretical, and mixed data. The empirical residual stress from the measured pull-in voltage of 16.7 V and the measured surface topology of the diaphragm were extracted as +13 MPa, resulting in the effective spring constant of 110.9 N/m. The parasitic capacitance for two probing pads including the substrate part was 0.25 pF. Furthermore, to verify the proposed model, the modelled open-circuit sensitivity was compared with the measured value. The MEMS acoustic sensor had an open-circuit sensitivity of -43.0 dBV/Pa at 1 kHz with a bias of 10 V, while the modelled open-circuit sensitivity was -42.9 dBV/Pa, which showed good agreement in the range from 100 Hz to 18 kHz. This validates the empirical-based open-circuit sensitivity model for designing capacitive-type MEMS acoustic sensors.
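    The dBV/Pa figures quoted above are the standard logarithmic sensitivity relative to 1 V/Pa; the conversion is a one-line formula, sketched here to show how close -43.0 and -42.9 dBV/Pa are in linear units:

```python
import math

def sensitivity_dbv_per_pa(volts_per_pascal):
    """Open-circuit sensitivity in dB re 1 V/Pa."""
    return 20.0 * math.log10(volts_per_pascal)

def volts_per_pa(db):
    """Inverse conversion back to linear V/Pa."""
    return 10.0 ** (db / 20.0)

# The reported -43.0 dBV/Pa corresponds to roughly 7.1 mV per pascal;
# the 0.1 dB model/measurement gap is under 2% in linear units.
measured = volts_per_pa(-43.0)   # ~7.08e-3 V/Pa
modelled = volts_per_pa(-42.9)   # ~7.16e-3 V/Pa
```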

  7. Testing mechanistic models of growth in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2015-11-22

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a trajectory qualitatively distinct from that of many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature, which confirm that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg(-1)) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes.
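    The core idea, that a specific assimilation rate rising with age produces faster-than-exponential growth, can be illustrated with a toy simulation. This is not the authors' full model; the equation, parameter values and names below are hypothetical and chosen only to show the qualitative effect:

```python
def simulate_growth(m0=0.1, days=30, dt=0.01, a0=0.2, a_rise=0.05, k_maint=0.05):
    """Toy reserve-driven growth: dm/dt = (a(t) - k_maint) * m,
    with specific assimilation a(t) increasing linearly with age.
    Simple forward-Euler integration of the mass trajectory."""
    m, t = m0, 0.0
    masses = [m0]
    for _ in range(int(days / dt)):
        a = a0 + a_rise * t          # assimilation per unit mass rises with age
        m += (a - k_maint) * m * dt
        t += dt
        masses.append(m)
    return masses

masses = simulate_growth()
# With constant a(t) the growth ratio over equal time windows would be constant
# (pure exponential); here the ratio grows with age, i.e. super-exponential growth.
```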

  8. Using a variance-based sensitivity analysis for analyzing the relation between measurements and unknown parameters of a physical model

    Science.gov (United States)

    Zhao, J.; Tiede, C.

    2011-05-01

    An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. For a fixed number of executions, thirty different seeds were used to assess the stability of this method.
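    A total sensitivity index (TSI) captures a parameter's direct effect plus all of its interactions. As a sketch of what E-FAST estimates (using a plain Monte Carlo pick-and-freeze estimator in the style of Jansen rather than the Fourier method itself; the toy model is hypothetical):

```python
import random

def total_sensitivity_indices(f, n_params, n=20000, seed=1):
    """Monte Carlo estimate of total sensitivity indices:
    TSI_i = E[(f(A) - f(A with column i from B))^2] / (2 Var f)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    fA = [f(row) for row in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    tsi = []
    for i in range(n_params):
        num = 0.0
        for a, b, ya in zip(A, B, fA):
            ab = list(a)
            ab[i] = b[i]           # resample only parameter i
            num += (ya - f(ab)) ** 2
        tsi.append(num / (2 * n * var))
    return tsi

# Toy model: x1 matters mostly through its interaction with x0; x2 is inert.
def model(x):
    return x[0] + 5.0 * x[0] * x[1] + 0.0 * x[2]

tsi = total_sensitivity_indices(model, 3)
# tsi[0] > tsi[1] > tsi[2] = 0: interactions are captured, inert inputs flagged.
```

    Because the TSI includes interaction terms, a parameter that looks unimportant in a one-at-a-time scan can still receive a large total index, which is exactly why TSIs are preferred for non-linear inversions like the one above.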

  9. Cyclosporine pharmacological efficacy estimated by lymphocyte immunosuppressant sensitivity test before and after renal transplantation.

    Science.gov (United States)

    Sugiyama, K; Isogai, K; Toyama, A; Satoh, H; Saito, K; Nakagawa, Y; Tasaki, M; Takahashi, K; Saito, N; Hirano, T

    2009-10-01

    Lymphocyte immunosuppressant sensitivity test (LIST) is useful for predicting the pharmacological efficacy of immunosuppressive agents. In this study, the pharmacological efficacy of cyclosporine was estimated by LIST before and after renal transplantation. The lymphocyte immunosuppressant sensitivity test was performed by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay before and at 1, 3, and 12 months after transplantation in 19 consecutive renal transplant recipients. There was wide intersubject variability in cyclosporine IC50 before transplantation [mean (SD) of 593.9 (1067.6) ng/mL]. This variability worsened 1 month after transplantation [525.7 (1532.7) ng/mL] but decreased at 3 months [193.5 (347.9) ng/mL] and 12 months [75.4 (95.4) ng/mL]. In this small study, the observed differences in IC50 values for the individual subjects at various time intervals were not associated with the occurrence of rejection, graft loss, and infection episodes. Lymphocyte sensitivity to cyclosporine assessed by the LIST assay showed a high level of intersubject variability, particularly before and 1 month after transplantation. The observed difference in IC50 values was not associated with clinical outcome in this small study.

  10. Strain rate sensitivity of the tensile strength of two silicon carbides: experimental evidence and micromechanical modelling

    Science.gov (United States)

    Zinszner, Jean-Luc; Erzar, Benjamin; Forquin, Pascal

    2017-01-01

    Ceramic materials are commonly used to design multi-layer armour systems thanks to their favourable physical and mechanical properties. However, during an impact event, fragmentation of the ceramic plate inevitably occurs due to its inherent brittleness under tensile loading. Consequently, an accurate model of the fragmentation process is necessary in order to achieve an optimum design for a desired armour configuration. In this work, shockless spalling tests have been performed on two silicon carbide grades at strain rates ranging from 10³ to 10⁴ s⁻¹ using a high-pulsed power generator. These spalling tests characterize the tensile strength strain rate sensitivity of each ceramic grade. The microstructural properties of the ceramics appear to play an important role on the strain rate sensitivity and on the dynamic tensile strength. Moreover, this experimental configuration allows for recovering damaged, but unbroken specimens, giving unique insight into the fragmentation process initiated in the ceramics. All the collected data have been compared with corresponding results of numerical simulations performed using the Denoual-Forquin-Hild anisotropic damage model. Good agreement is observed between numerical simulations and experimental data in terms of free surface velocity, size and location of the damaged zones along with crack density in these damaged zones. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  11. Interpretation of test data with dynamic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Biba, P. [Southern California Edison, San Clemente, CA (United States). San Onofre Nuclear Generating Station

    1999-11-01

    The in-service testing of many important-to-safety components, such as valves, pumps, etc., is often performed while the plant is either shut down or the particular system is in a test mode. Thus the test conditions may be different from the actual operating conditions under which the components would be required to operate. In addition, the components must function under various postulated accident scenarios, which cannot be duplicated during plant normal operation. This paper deals with the method of interpretation of the test data by a dynamic model, which allows the evaluation of the many factors affecting the system performance, in order to assure component and system operability.

  12. Sensitivity of field tests, serological and molecular techniques for Plum Pox Virus detection in various tissues

    Directory of Open Access Journals (Sweden)

    Mojca VIRŠČEK MARN

    2015-12-01

    The sensitivity of field tests (AgriStrip and Immunochromato), DAS-ELISA, two-step RT-PCR and real-time RT-PCR for Plum pox virus (PPV) detection was tested in various tissues of apricot, peach, plum and damson plum trees infected with isolates belonging to PPV-D, PPV-M or PPV-Rec, the three strains present in Slovenia. Flowers of apricot and plum in full bloom proved to be a very good source for detection of PPV. PPV could be detected with all tested techniques in symptomatic parts of leaves in May and, with one exception, even in the beginning of August, but it was not detected in asymptomatic leaves using field tests, DAS-ELISA and partly also molecular techniques. PPV was detected only in some of the samples of asymptomatic parts of symptomatic leaves and of stalks by field tests and DAS-ELISA. Infections were not detected in buds in August using field tests or DAS-ELISA. Field tests are useful for confirmation of PPV infection in symptomatic leaves, but in tissues without symptoms DAS-ELISA should be combined with or replaced by molecular techniques.

  13. The wrist hyperflexion and abduction of the thumb (WHAT) test: a more specific and sensitive test to diagnose de Quervain tenosynovitis than the Eichhoff's Test.

    Science.gov (United States)

    Goubau, J F; Goubau, L; Van Tongel, A; Van Hoonacker, P; Kerckhove, D; Berghs, B

    2014-03-01

    De Quervain's disease has different clinical features. Different tests have been described in the past, the most popular being the Eichhoff's test, often wrongly referred to as the Finkelstein's test. Over the years, a misinterpretation has occurred between these two tests, the latter being confused with the first. To compare the Eichhoff's test with a new test, the wrist hyperflexion and abduction of the thumb test, we set up a prospective study over a period of three years for a cohort of 100 patients (88 women, 12 men) presenting with spontaneous pain over the radial side of the styloid of the radius (de Quervain tendinopathy). The purpose of the study was to compare the accuracy of the Eichhoff's test and the wrist hyperflexion and abduction of the thumb test in correctly diagnosing de Quervain's disease, by comparing clinical findings using those tests with the results on ultrasound. The wrist hyperflexion and abduction of the thumb test revealed greater sensitivity (0.99) and an improved specificity (0.29) together with a slightly better positive predictive value (0.95) and an improved negative predictive value (0.67). Moreover, the study showed us that the wrist hyperflexion and abduction of the thumb test is very valuable in diagnosing dynamic instability after successful decompression of the first extensor compartment. Our results support that the wrist hyperflexion and abduction of the thumb test is a more precise tool for the diagnosis of de Quervain's disease than the Eichhoff's test and thus could be adopted to guide clinical diagnosis in the early stages of de Quervain's tendinopathy.
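    The four metrics quoted above all come from a standard 2x2 table against the reference standard (here, ultrasound). As a sketch of the computation (the counts below are hypothetical, chosen only to reproduce the paper's pattern of high sensitivity with low specificity, and are not the study's actual data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic test metrics against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # positives correctly detected
        "specificity": tn / (tn + fp),   # negatives correctly ruled out
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts: 93 disease-positive, 7 disease-negative patients.
m = diagnostic_metrics(tp=92, fp=5, fn=1, tn=2)
# A test can miss almost no true cases (sensitivity ~0.99) while still
# labelling most healthy wrists positive (specificity ~0.29).
```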

  14. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m⁻²) were found when excluding snow insulation or phase change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  15. Laboratory measurements and model sensitivity studies of dust deposition ice nucleation

    Directory of Open Access Journals (Sweden)

    G. Kulkarni

    2012-08-01

    We investigated the ice nucleating properties of mineral dust particles to understand the sensitivity of simulated cloud properties to two different representations of the contact angle in Classical Nucleation Theory (CNT). These contact angle representations are based on two sets of laboratory deposition ice nucleation measurements: Arizona Test Dust (ATD) particles of 100, 300 and 500 nm sizes were tested at three different temperatures (−25, −30 and −35 °C), and 400 nm ATD and kaolinite dust species were tested at two different temperatures (−30 and −35 °C). These measurements were used to derive the onset relative humidity with respect to ice (RHice) required to activate 1% of dust particles as ice nuclei, from which the onset single contact angles were then calculated based on CNT. For the probability density function (PDF) representation, parameters of the log-normal contact angle distribution were determined by fitting the CNT-predicted activated fraction to the measurements at different RHice. Results show that onset single contact angles vary from ~18 to 24 degrees, while the PDF parameters are sensitive to the measurement conditions (i.e. temperature and dust size). Cloud modeling simulations were performed to understand the sensitivity of cloud properties (i.e. ice number concentration, ice water content, and cloud initiation times) to the representation of the contact angle and the PDF distribution parameters. The model simulations show that cloud properties are sensitive to onset single contact angles and PDF distribution parameters. The comparison of our experimental results with other studies shows that under similar measurement conditions the onset single contact angles are consistent within ±2.0 degrees, while our derived PDF parameters have larger discrepancies.
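    In CNT, the contact angle enters through the geometric factor that scales the homogeneous nucleation barrier for heterogeneous nucleation. A minimal sketch of that standard formula, evaluated at the ~18-24 degree onset angles reported above (the function name is ours):

```python
import math

def cnt_shape_factor(theta_deg):
    """Geometric factor f(theta) = (2 + cos t)(1 - cos t)^2 / 4 that multiplies
    the homogeneous nucleation barrier in classical nucleation theory."""
    c = math.cos(math.radians(theta_deg))
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

# f(0) = 0 (perfect nucleus, no barrier); f(180) = 1 (no help from the surface).
# At the onset angles derived in the study the barrier is reduced to well
# under 1% of the homogeneous value, i.e. ATD is an efficient deposition nucleus.
f18, f24 = cnt_shape_factor(18.0), cnt_shape_factor(24.0)
```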

  16. GA(2)LEN skin test study II: clinical relevance of inhalant allergen sensitizations in Europe

    DEFF Research Database (Denmark)

    Burbach, G J; Heinzerling, L M; Edenharter, G

    2009-01-01

    BACKGROUND: Skin prick testing is the standard for diagnosing IgE-mediated allergies. A positive skin prick reaction, however, does not always correlate with clinical symptoms. A large database from a Global Asthma and Allergy European Network (GA(2)LEN) study with data on clinical relevance...... was used to determine the clinical relevance of sensitizations against the 18 most frequent inhalant allergens in Europe. The study population consisted of patients referred to one of the 17 allergy centres in 14 European countries (n = 3034, median age = 33 years). The aim of the study was to assess...... the clinical relevance of positive skin prick test reactions against inhalant allergens considering the predominating type of symptoms in a pan-European population of patients presenting with suspected allergic disease. METHODS: Clinical relevance of skin prick tests was recorded with regard to patient history...

  17. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e. m(

  18. Feedbacks, climate sensitivity, and the limits of linear models

    Science.gov (United States)

    Rugenstein, M.; Knutti, R.

    2015-12-01

    The term "feedback" is used ubiquitously in climate research, but implies varied meanings in different contexts. From a specific process that locally affects a quantity, to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120 member) ensemble GCM and EMIC step forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method overcoming internal variability and initial condition problems we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing feedback framework and a better quantification of feedbacks on various timescales and spatial scales remains a high priority in order to better understand past and predict future changes in the climate system.

  19. Clinical exchange: one model to achieve culturally sensitive care.

    Science.gov (United States)

    Scholes, J; Moore, D

    2000-03-01

    This paper reports on a clinical exchange programme that formed part of a pre-registration European nursing degree run by three collaborating institutions in England, Holland and Spain. The course included: common and shared learning including two summer schools; and the development of a second language before the students went on a three-month clinical placement in one of the other base institutions' clinical environments. The aim of the course was to enable students to become culturally sensitive carers. This was achieved by developing a programme based on transcultural nursing principles in theory and practice. Data were gathered by interview, focus groups, and questionnaires from 79 exchange students, fostering the strategies of illuminative evaluation. The paper examines: how the aims of the course were met; the factors that inhibited the attainment of certain goals; and how the acquisition of a second language influenced the students' learning about nursing. A model is presented to illustrate the process of transformative learning from the exchange experience.

  20. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.

  1. Parametric Testing of Launch Vehicle FDDR Models

    Science.gov (United States)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe, how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
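    The combination of n-factor combinatorial exploration with Monte Carlo sampling can be sketched in a few lines. This is a simplified illustration of the idea (here with n=2, i.e. all pairwise level combinations), not the authors' tool; the parameter names and levels are hypothetical:

```python
import itertools
import random

def combinatorial_cases(levels, n_random=4, seed=0):
    """Test-case set: every level combination for every pair of parameters
    (2-factor coverage), plus a few Monte Carlo draws over the full space."""
    names = list(levels)
    cases = []
    # Exhaustive 2-factor coverage: all level pairs for each parameter pair.
    for p1, p2 in itertools.combinations(names, 2):
        for v1, v2 in itertools.product(levels[p1], levels[p2]):
            cases.append({p1: v1, p2: v2})
    # Monte Carlo fill-in: random assignments over all parameters at once.
    rng = random.Random(seed)
    for _ in range(n_random):
        cases.append({p: rng.choice(vals) for p, vals in levels.items()})
    return cases

# Hypothetical failure-simulation parameters and their levels.
levels = {"thrust": [0.9, 1.0, 1.1], "valve": ["open", "stuck"], "sensor": ["ok", "bias"]}
cases = combinatorial_cases(levels)
# 6 + 6 + 4 = 16 pairwise cases plus 4 random full-vector cases: a small set
# that still exercises every two-parameter interaction at least once.
```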

  2. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  3. Sex and smoking sensitive model of radon induced lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhukovsky, M.; Yarmoshenko, I. [Institute of Industrial Ecology of Ural Branch of Russian Academy of Sciences, Yekaterinburg (Russian Federation)

    2006-07-01

    Inhalation exposure to radon and radon progeny is recognized to cause lung cancer. The only strong evidence of health effects from radon exposure has come from epidemiological studies among underground miners; no single epidemiological study in the general population has found a reliable lung cancer risk from indoor radon exposure. Models of indoor-radon-induced lung cancer risk were therefore developed exclusively by extrapolating the miner data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects can be obtained from systematic analysis of pooled data from individual residential radon studies; two such analyses were recently published. Together, new and previous data from epidemiological studies of workers and of the general population exposed to radon and other sources of ionizing radiation allow gaps to be filled in our knowledge of the association between lung cancer and indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on external radiation exposure in Hiroshima and Nagasaki, and on external exposure at the Mayak nuclear facility. For the non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source agrees very well with the LSS study and the Mayak plutonium worker data. Analysis of the corrected data of the pooled studies showed little influence of sex on the ERR value; the most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  4. The Standard-Model Extension and Gravitational Tests

    CERN Document Server

    Tasson, Jay D

    2016-01-01

    The Standard-Model Extension (SME) provides a comprehensive effective field-theory framework for the study of CPT and Lorentz symmetry. This work reviews the structure and philosophy of the SME and provides some intuitive examples of symmetry violation. The results of recent gravitational tests performed within the SME are summarized including analysis of results from the Laser Interferometer Gravitational-Wave Observatory (LIGO), sensitivities achieved in short-range gravity experiments, constraints from cosmic-ray data, and results achieved by studying planetary ephemerides. Some proposals and ongoing efforts will also be considered including gravimeter tests, tests of the Weak Equivalence Principle, and antimatter experiments. Our review of the above topics is augmented by several original extensions of the relevant work. We present new examples of symmetry violation in the SME and use the cosmic-ray analysis to place first-ever constraints on 81 additional operators.

  5. The Standard-Model Extension and Gravitational Tests

    Directory of Open Access Journals (Sweden)

    Jay D. Tasson

    2016-10-01

    The Standard-Model Extension (SME) provides a comprehensive effective field-theory framework for the study of CPT and Lorentz symmetry. This work reviews the structure and philosophy of the SME and provides some intuitive examples of symmetry violation. The results of recent gravitational tests performed within the SME are summarized including analysis of results from the Laser Interferometer Gravitational-Wave Observatory (LIGO), sensitivities achieved in short-range gravity experiments, constraints from cosmic-ray data, and results achieved by studying planetary ephemerides. Some proposals and ongoing efforts will also be considered including gravimeter tests, tests of the Weak Equivalence Principle, and antimatter experiments. Our review of the above topics is augmented by several original extensions of the relevant work. We present new examples of symmetry violation in the SME and use the cosmic-ray analysis to place first-ever constraints on 81 additional operators.

  6. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    Science.gov (United States)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its ability to provide accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric, where each layer can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, with the different uncertainty components represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. The variance-based sensitivity analysis is thus improved to investigate the importance of an extended range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a real-world groundwater reactive transport model with various uncertainty sources. The results demonstrate that the new sensitivity analysis method estimates accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can
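The variance-based measure the framework builds on is the Sobol' index, which can be estimated with a pick-and-freeze Monte Carlo scheme. The sketch below estimates first-order indices for a toy two-input model, not the groundwater model from the abstract:

```python
import random

def first_order_sobol(f, k, n=20000, seed=1):
    """Estimate first-order Sobol' indices S_i of a model f with k
    independent U(0,1) inputs (Saltelli-style pick-and-freeze estimator)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(k):
        # A_B^i: rows of A with column i replaced by the value from B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # V_i estimated as E[f(B) * (f(A_B^i) - f(A))]
        Vi = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        S.append(Vi / var)
    return S

# Toy model y = x0 + 0.1*x1: x0 should carry almost all of the variance.
S = first_order_sobol(lambda x: x[0] + 0.1 * x[1], k=2)
```

For this additive toy model the indices sum to roughly one; the hierarchical framework in the abstract generalizes the same variance decomposition to grouped scenario and model uncertainties.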

  7. A Coupled THMC model of FEBEX mock-up test

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Liange; Samper, Javier

    2008-09-15

    FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project for the engineered barrier system (EBS) of a radioactive waste repository in granite. It includes two full-scale heating and hydration tests: the in situ test performed at Grimsel (Switzerland) and a mock-up test operating at the CIEMAT facilities in Madrid (Spain). The mock-up test provides valuable insight into the thermal, hydrodynamic, mechanical and chemical (THMC) behavior of the EBS because its hydration is controlled better than that of the in situ test, in which the buffer is saturated with water from the surrounding granitic rock. Here we present a coupled THMC model of the mock-up test which accounts for thermal and chemical osmosis and for bentonite swelling with a state-surface approach. The THMC model reproduces measured temperature and cumulative water inflow data. It also fits relative humidity data in the outer part of the buffer, but underestimates relative humidities near the heater. Dilution due to hydration and evaporation near the heater are the main processes controlling the concentrations of conservative species, while surface complexation, mineral dissolution/precipitation and cation exchange also significantly affect reactive species. Sensitivity analyses of the chemical processes show that pH is mostly controlled by surface complexation while dissolved cation concentrations are controlled by cation exchange reactions.

  8. Evaluating a Skin Sensitization Model and Examining Common Assumptions of Skin Sensitizers (QSAR conference)

    Science.gov (United States)

    Skin sensitization is an adverse outcome that has been well studied over many decades. Knowledge of the mechanism of action was recently summarized using the Adverse Outcome Pathway (AOP) framework as part of the OECD work programme (OECD, 2012). Currently there is a strong focus...

  9. Evaluating a Skin Sensitization Model and Examining Common Assumptions of Skin Sensitizers (ASCCT meeting)

    Science.gov (United States)

    Skin sensitization is an adverse outcome that has been well studied over many decades. It was summarized using the adverse outcome pathway (AOP) framework as part of the OECD work programme (OECD, 2012). Currently there is a strong focus on how AOPs can be applied for different r...

  10. Numerical daemons in hydrological modeling: Effects on uncertainty assessment, sensitivity analysis and model predictions

    Science.gov (United States)

    Kavetski, D.; Clark, M. P.; Fenicia, F.

    2011-12-01

    Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
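The "seemingly trivial" start-of-step versus end-of-step flux choice is easy to demonstrate on a linear reservoir dS/dt = -kS. With a coarse time step, explicit (start-of-step) and implicit (end-of-step) Euler give answers differing by orders of magnitude. This is a hypothetical illustration, not one of the models analyzed in the abstract:

```python
import math

def drain(S0, k, dt, n, end_of_step=False):
    """Linear reservoir dS/dt = -k*S integrated with explicit (start-of-step
    flux) or implicit (end-of-step flux) Euler."""
    S = S0
    for _ in range(n):
        if end_of_step:
            S = S / (1 + k * dt)   # implicit: flux from end-of-step storage
        else:
            S = S * (1 - k * dt)   # explicit: flux from start-of-step storage
    return S

# Coarse step relative to the recession constant: k*dt = 0.8.
exact = 100 * math.exp(-0.8 * 10)          # analytic solution at t = 10
expl = drain(100, 0.8, 1.0, 10)            # start-of-step fluxes
impl = drain(100, 0.8, 1.0, 10, end_of_step=True)  # end-of-step fluxes
```

Both schemes bracket the analytic solution but disagree by several orders of magnitude, so a calibration or sensitivity analysis run against either one is partly fitting the numerics rather than the model equations.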

  11. Resazurin metabolism assay is a new sensitive alternative test in isolated pig cornea.

    Science.gov (United States)

    Perrot, Sébastien; Dutertre-Catella, Hélène; Martin, Chantal; Rat, Patrice; Warnet, Jean-Michel

    2003-03-01

    The main objective of our study was to investigate whether the resazurin metabolism assay is a sensitive test of surfactant and alcohol toxicity in isolated pig cornea, and to compare this recently developed fluorometric assay with the data collected in the eye irritation reference chemical data bank. Resazurin is a substrate that changes color in response to metabolic activity. Isolated pig corneas were immersed for 10 min in irritant solutions of surfactants and alcohols. After incubation, resorufin fluorescence was read and corneal viability was assessed. This corneal viability was compared with the maximal modified average score published in the report of ECETOC. The assay highlighted different concentration-dependent irritation potentials of the three surfactants tested, and the same results were obtained with corneas treated with the alcohols. We observed that the degree of surfactant- and alcohol-induced decrease in corneal viability, using the resazurin reduction test, was correlated with the in vivo irritancy measurements as determined by the Draize test and scored with the Modified Maximum Average Score (MMAS). The assay allowed us to classify the ocular irritancy of the tested surfactants and alcohols in the same ranking order as the Draize classification. Corneal viability measurement can be used as a potential alternative for the toxicological assessment of surfactants and alcohols. The nontoxic, nonradioactive resazurin metabolism assay allows rapid assessment of many samples with simple equipment and at reduced cost for continuous monitoring of corneal viability. This assay seems to be suitable as a toxicological screening test for eye irritation determination.

  12. Sensitivity of the speed evaluation tests of carrying the ball in youth soccer players

    Directory of Open Access Journals (Sweden)

    Rakojević Bojan

    2016-01-01

    This research aimed to examine the sensitivity of tests evaluating speed while carrying the ball. The study included 76 male participants, aged 17 years (+/- 6 months), who were divided into two qualitatively different subgroups. To assess speed while carrying the ball, two tests were administered: carrying the ball between cones in an M pattern, and slalom ball carrying followed by a pass. The results showed that the more successful soccer players (Group 1) scored better on both tests than the players from lower-ranked clubs who are not national team members (Group 2). The high homogeneity of results in the two groups (75% for Group 1 and 68.75% for Group 2) leads to the conclusion that this ability is an essential characteristic of young soccer players. The coefficients of discrimination, 0.269 for the M-pattern test and 0.197 for the carrying-with-a-pass test, indicate that the two tests can qualitatively distinguish youth players. It can therefore be concluded that carrying the ball, as a technical element, significantly affects the quality of players' performance.

  13. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  14. Hindcasting to measure ice sheet model sensitivity to initial states

    Directory of Open Access Journals (Sweden)

    A. Aschwanden

    2012-12-01

    Recent observations of the Greenland ice sheet indicate rapid mass loss at an accelerating rate, with an increasing contribution to global mean sea level. Ice sheet models are used for projections of such future contributions of ice sheets to sea level, but the quality of projections is difficult to measure directly. Realistic initial states are crucial for accurate simulations. To test initial states we use hindcasting, i.e. forcing a model with known or closely estimated inputs for past events to see how well the output matches observations. By simulating the recent past of Greenland, and comparing to observations of ice thickness, ice discharge, surface speeds, mass loss and surface elevation changes for validation, we find that the short-term model response is strongly influenced by the initial state. We show that the dynamical state can be misrepresented despite good agreement with some observations, stressing the importance of using multiple observations. Some initial states generate good agreement with measured mass time series in the hindcast period and good agreement with present-day kinematic fields. We suggest hindcasting as a methodology for careful validation of initial states that can be done before making projections on decadal to century time scales.

  15. Wind climate estimation using WRF model output: method and model sensitivities over the sea

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Vincent, Claire Louise; Peña, Alfredo

    2015-01-01

    High-quality tall mast and wind lidar measurements over the North and Baltic Seas are used to validate the wind climatology produced from winds simulated by the Weather Research and Forecasting (WRF) model in analysis mode. Biases in annual mean wind speed between model and observations at heights around 100 m are smaller than 3.2% at offshore sites, except for those that are affected by the wake of a wind farm or the coastline. These biases are smaller than those obtained by using winds directly from the reanalysis. We study the sensitivity of the WRF-simulated wind climatology to various model setup parameters. The results of the year-long sensitivity simulations show that the long-term mean wind speed simulated by the WRF model offshore in the region studied is quite insensitive to the global reanalysis, the number of vertical levels, and the horizontal resolution of the sea surface ...

  16. A Blind Test of Hapke's Photometric Model

    Science.gov (United States)

    Helfenstein, P.; Shepard, M. K.

    2003-01-01

    Hapke's bidirectional reflectance equation is a versatile analytical tool for predicting (i.e. forward modeling) the photometric behavior of a particulate surface from the observed optical and structural properties of its constituents. Remote sensing applications of Hapke's model, however, generally seek to predict the optical and structural properties of particulate soil constituents from the observed photometric behavior of a planetary surface (i.e. inverse modeling). Our confidence in the latter approach can be established only if we ruthlessly test and optimize it. Here, we summarize preliminary results from a blind test of the Hapke model using laboratory measurements obtained with the Bloomsburg University Goniometer (B.U.G.). The first author selected eleven well-characterized powder samples and measured the spectrophotometric behavior of each. A subset of twenty undisclosed examples of the photometric measurement sets was sent to the second author, who fit the data using the Hapke model and attempted to interpret their optical and mechanical properties from photometry alone.

  17. Testing the Correlated Random Coefficient Model*

    Science.gov (United States)

    Heckman, James J.; Schmierer, Daniel; Urzua, Sergio

    2010-01-01

    The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline-pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains. PMID:21057649

  18. Kinetic modeling and sensitivity analysis of plasma-assisted combustion

    Science.gov (United States)

    Togai, Kuninori

    Plasma-assisted combustion (PAC) is a promising combustion enhancement technique that shows great potential for applications to a number of different practical combustion systems. In this dissertation, the chemical kinetics associated with PAC are investigated numerically with a newly developed model that describes the chemical processes induced by plasma. To support the model development, experiments were performed using a plasma flow reactor in which fuel oxidation proceeds with the aid of plasma discharges below and above the self-ignition thermal limit of the reactive mixtures. The mixtures used were heavily diluted with Ar in order to study the reactions in temperature-controlled environments by suppressing the temperature changes due to chemical reactions. The temperature of the reactor was varied from 420 K to 1250 K and the pressure was fixed at 1 atm. Simulations were performed for the conditions corresponding to the experiments and the results are compared against each other. Important reaction paths were identified through path flux and sensitivity analyses. The reaction systems studied in this work are the oxidation of hydrogen, ethylene, and methane, as well as the kinetics of NOx in plasma. In the fuel oxidation studies, the reaction schemes that control fuel oxidation are analyzed and discussed. With all the fuels studied, the oxidation reactions were extended to lower temperatures with plasma discharges compared to the cases without plasma. The analyses showed that radicals produced by dissociation of the reactants in plasma play an important role in initiating the reaction sequence. At low temperatures, where the system exhibits a chain-terminating nature, reactions of HO2 were found to play important roles in overall fuel oxidation. The effectiveness of HO2 as a chain terminator was weakened in the ethylene oxidation system, because the reactions of C2H4 + O, which have low activation energies, deflect the flux of O atoms away from HO2. For the
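Sensitivity analyses of the kind described typically rank rate constants by their normalized sensitivity coefficients S_i = (k_i / y) dy/dk_i. A minimal finite-difference sketch on a toy first-order chain A -> B -> C (not the PAC mechanism itself; rate constants are invented):

```python
import math

def species_c(k1, k2, t=1.0):
    """Analytic concentration of C in the toy chain A -> B -> C
    (first-order steps, A(0) = 1), used as a stand-in 'kinetic model'."""
    return 1 - (k2 * math.exp(-k1 * t) - k1 * math.exp(-k2 * t)) / (k2 - k1)

def normalized_sensitivity(f, k, i, rel=1e-6):
    """S_i = (k_i / f) * df/dk_i by central finite differences."""
    k_up, k_dn = list(k), list(k)
    h = k[i] * rel
    k_up[i] += h
    k_dn[i] -= h
    dfdk = (f(*k_up) - f(*k_dn)) / (2 * h)
    return k[i] * dfdk / f(*k)

k = [2.0, 0.5]  # k1 fast, k2 slow: the second step limits C formation
S1 = normalized_sensitivity(species_c, k, 0)
S2 = normalized_sensitivity(species_c, k, 1)
```

The rate-limiting step (k2) shows the larger coefficient, which is the kind of ranking a path flux and sensitivity analysis uses to identify controlling reactions.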

  19. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  20. Sandia National Laboratories Small-Scale Sensitivity Testing (SSST) Report: Calcium Nitrate Mixtures with Various Fuels.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Jason Joe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2014-07-01

    Based upon the presented sensitivity data for the examined calcium nitrate mixtures using sugar and sawdust, contact handling/mixing of these materials does not present hazards greater than those occurring during handling of dry PETN powder. The aluminized calcium nitrate mixtures present a known ESD fire hazard due to the fine aluminum powder fuel. These mixtures may yet present an ESD explosion hazard, though this has not been investigated at this time. The detonability of these mixtures will be investigated during Phase III testing.

  1. Cosmic rays, aerosol formation and cloud-condensation nuclei: sensitivities to model uncertainties

    Directory of Open Access Journals (Sweden)

    E. J. Snow-Kropla

    2011-01-01

    The flux of cosmic rays to the atmosphere has been observed to correlate with cloud and aerosol properties. One proposed mechanism for these correlations is the "ion-aerosol clear-air" mechanism, where the cosmic rays modulate atmospheric ion concentrations, ion-induced nucleation of aerosols and cloud condensation nuclei (CCN) concentrations. We use a global chemical transport model with online aerosol microphysics to explore the dependence of CCN concentrations on the cosmic-ray flux. Expanding upon previous work, we test the sensitivity of the cosmic-ray/CCN connection to several uncertain parameters in the model including primary emissions, Secondary Organic Aerosol (SOA) condensation and charge-enhanced condensational growth. The sensitivity of CCN to cosmic rays increases when simulations are run with decreased primary emissions, but shows location-dependent behavior from increased amounts of secondary organic aerosol and charge-enhanced growth. For all test cases, the change in the concentration of particles larger than 80 nm between solar minimum (high cosmic-ray flux) and solar maximum (low cosmic-ray flux) simulations is less than 0.2%. The change in the total number of particles larger than 10 nm was larger, but always less than 1%. The simulated change in the column-integrated Ångström exponent was negligible for all test cases. Additionally, we test the predicted aerosol sensitivity to week-long Forbush decreases of cosmic rays and find that the maximum change in aerosol properties for these cases is similar to steady-state aerosol differences between the solar maximum and solar minimum. These results provide evidence that the effect of cosmic rays on CCN and clouds through the ion-aerosol clear-sky mechanism is limited by dampening from aerosol processes.

  2. Cosmic rays, aerosol formation and cloud-condensation nuclei: sensitivities to model uncertainties

    Directory of Open Access Journals (Sweden)

    E. J. Snow-Kropla

    2011-04-01

    The flux of cosmic rays to the atmosphere has been reported to correlate with cloud and aerosol properties. One proposed mechanism for these correlations is the "ion-aerosol clear-air" mechanism, where the cosmic rays modulate atmospheric ion concentrations, ion-induced nucleation of aerosols and cloud condensation nuclei (CCN) concentrations. We use a global chemical transport model with online aerosol microphysics to explore the dependence of CCN concentrations on the cosmic-ray flux. Expanding upon previous work, we test the sensitivity of the cosmic-ray/CCN connection to several uncertain parameters in the model including primary emissions, Secondary Organic Aerosol (SOA) condensation and charge-enhanced condensational growth. The sensitivity of CCN to cosmic rays increases when simulations are run with decreased primary emissions, but shows location-dependent behavior from increased amounts of secondary organic aerosol and charge-enhanced growth. For all test cases, the change in the concentration of particles larger than 80 nm between solar minimum (high cosmic-ray flux) and solar maximum (low cosmic-ray flux) simulations is less than 0.2 %. The change in the total number of particles larger than 10 nm was larger, but always less than 1 %. The simulated change in the column-integrated Ångström exponent was negligible for all test cases. Additionally, we test the predicted aerosol sensitivity to week-long Forbush decreases of cosmic rays and find that the maximum change in aerosol properties for these cases is similar to steady-state aerosol differences between the solar maximum and solar minimum. These results provide evidence that the effect of cosmic rays on CCN and clouds through the ion-aerosol clear-sky mechanism is limited by dampening from aerosol processes.

  3. Physical model tests for floating wind turbines

    DEFF Research Database (Denmark)

    Bredmose, Henrik; Mikkelsen, Robert Flemming; Borg, Michael

    Floating offshore wind turbines are relevant at sites where the depth is too large for the installation of a bottom-fixed substructure. While 3200 bottom-fixed offshore turbines have been installed in Europe (EWEA 2016), only a handful of floating wind turbines exist worldwide, and it is still an open question which floater concept is the most economically feasible. The design of the floaters for the floating turbines relies heavily on numerical modelling. While several coupled models exist, data sets for their validation are scarce. Validation, however, is important since the turbine behaviour is complex due to the combined actions of aero- and hydrodynamic loads, mooring loads and blade pitch control. The present talk outlines two recent test campaigns with a floating wind turbine in waves and wind. Two floaters were tested, a compact TLP floater designed at DTU (Bredmose et al 2015, Pegalajar

  4. Sensitivity Analysis for the Decomposition of Mixed Partitioned Multivariate Models into Two Seemingly Unrelated Submodels

    Directory of Open Access Journals (Sweden)

    Eva Fišerová

    2014-06-01

    Full Text Available The paper is focused on the decomposition of mixed partitioned multivariate models into two seemingly unrelated submodels in order to obtain more efficient estimators. The multiresponses are independently normally distributed with the same covariance matrix. The partitioned multivariate model is considered either with, or without, an intercept. The elimination transformation of the intercept that preserves the BLUEs of parameter matrices and the MINQUE of the variance components in multivariate models with and without an intercept is stated. Procedures for testing the decomposition of the partitioned model are presented. The properties of plug-in test statistics as functions of variance components are investigated by sensitivity analysis, and insensitivity regions for the significance level are proposed. The insensitivity region is a safe region in the parameter space of the variance components where the approximation of the variance components can be used without any essential deterioration of the significance level of the plug-in test statistic. The behavior of plug-in test statistics and insensitivity regions is studied by simulations.

  5. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive...
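The variance-inflation criterion used in the stepwise approach is easy to illustrate. A sketch with synthetic collinear regressors (made up for illustration, not the Danish housing data): the variance inflation factor is 1/(1 − R²) of each variable regressed on the others, and values far above ~10 flag the multicollinearity the abstract warns about.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)                    # independent regressor
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    # Variance inflation factor: 1/(1 - R^2) of x_j regressed on the others
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

Here the near-duplicate pair (x1, x2) gets a large VIF while the independent x3 stays near 1, which is exactly the signal a stepwise reduction would act on.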

  6. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.

  7. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made available such a design diagram for the trunk of a Dolos breakwater without superstructures (Burcharth et al. 1992......). To extend the design diagram to cover Dolos breakwaters with a superstructure, 2-D model tests of a Dolos breakwater with a wave wall are included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92...... was on the Dolos breakwater with a high superstructure, where there was almost no overtopping. This case is believed to be the most dangerous one. The test of the Dolos breakwater with a low superstructure was also performed. The objective of the last part of the experiment is to investigate the influence...

  8. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    Ductile damage modeling within the Small Punch Test (SPT) is extensively investigated. The capabilities of the SPT to reliably estimate fracture and damage properties are thoroughly discussed and emphasis is placed on the use of notched specimens. First, different notch profiles are analyzed...... and constraint conditions quantified. The role of the notch shape is comprehensively examined from both triaxiality and notch fabrication perspectives. Afterwards, a methodology is presented to extract the micromechanical-based ductile damage parameters from the load-displacement curve of notched SPT samples...

  9. Skin sensitization risk assessment model using artificial neural network analysis of data from multiple in vitro assays.

    Science.gov (United States)

    Tsujita-Inoue, Kyoko; Hirota, Morihiko; Ashikaga, Takao; Atobe, Tomomi; Kouzuki, Hirokazu; Aiba, Setsuya

    2014-06-01

    The sensitizing potential of chemicals is usually identified and characterized using in vivo methods such as the murine local lymph node assay (LLNA). Due to regulatory constraints and ethical concerns, alternatives to animal testing are needed to predict skin sensitization potential of chemicals. For this purpose, combined evaluation using multiple in vitro and in silico parameters that reflect different aspects of the sensitization process seems promising. We previously reported that LLNA thresholds could be well predicted by using an artificial neural network (ANN) model, designated iSENS ver.1 (integrating in vitro sensitization tests version 1), to analyze data obtained from two in vitro tests: the human Cell Line Activation Test (h-CLAT) and the SH test. Here, we present a more advanced ANN model, iSENS ver.2, which additionally utilizes the results of antioxidant response element (ARE) assay and the octanol-water partition coefficient (LogP, reflecting lipid solubility and skin absorption). We found a good correlation between predicted LLNA thresholds calculated by iSENS ver.2 and reported values. The predictive performance of iSENS ver.2 was superior to that of iSENS ver.1. We conclude that ANN analysis of data from multiple in vitro assays is a useful approach for risk assessment of chemicals for skin sensitization.
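The iSENS models themselves are proprietary trained networks, but the generic shape of such a model, a small feedforward net combining the four inputs (h-CLAT, SH test, ARE assay, LogP) into a predicted LLNA threshold, can be sketched. All weights and input values below are entirely made up for illustration:

```python
import math

# Hypothetical, illustrative inputs (not published iSENS data):
# normalized h-CLAT score, SH test score, ARE assay response, LogP
x = [0.7, 0.4, 0.9, 1.8]

# One hidden layer of tanh units with a linear output unit: the generic
# architecture of an ANN like iSENS ver.2, with made-up weights
W1 = [[0.5, -0.3, 0.8, 0.1],
      [-0.2, 0.6, 0.4, -0.5]]
b1 = [0.1, -0.1]
W2, b2 = [1.2, -0.7], 0.3

# Forward pass: hidden activations, then the scalar prediction
h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
     for row, b in zip(W1, b1)]
llna_threshold = sum(w * hi for w, hi in zip(W2, h)) + b2
```

Training such a network against reported LLNA thresholds is what turns this fixed-weight sketch into the risk assessment model the abstract describes.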

  10. Computational model of an infant brain subjected to periodic motion: simplified modelling and Bayesian sensitivity analysis.

    Science.gov (United States)

    Batterbee, D C; Sims, N D; Becker, W; Worden, K; Rowson, J

    2011-11-01

    Non-accidental head injury in infants, or shaken baby syndrome, is a highly controversial and disputed topic. Biomechanical studies often suggest that shaking alone cannot cause the classical symptoms, yet many medical experts believe the contrary. Researchers have turned to finite element modelling for a more detailed understanding of the interactions between the brain, skull, cerebrospinal fluid (CSF), and surrounding tissues. However, the uncertainties in such models are significant; these can arise from theoretical approximations, lack of information, and inherent variability. Consequently, this study presents an uncertainty analysis of a finite element model of a human head subject to shaking. Although the model geometry was greatly simplified, fluid-structure-interaction techniques were used to model the brain, skull, and CSF using a Eulerian mesh formulation with penalty-based coupling. Uncertainty and sensitivity measurements were obtained using Bayesian sensitivity analysis, which is a technique that is relatively new to the engineering community. Uncertainty in nine different model parameters was investigated for two different shaking excitations: sinusoidal translation only, and sinusoidal translation plus rotation about the base of the head. The level and type of sensitivity in the results was found to be highly dependent on the excitation type.

  11. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes produce large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.

  12. Modeling flow in a pressure-sensitive, heterogeneous medium

    Energy Technology Data Exchange (ETDEWEB)

    Vasco, Donald W.; Minkoff, Susan E.

    2009-06-01

    Using an asymptotic methodology, including an expansion in inverse powers of √ω, where ω is the frequency, we derive a solution for flow in a medium with pressure-dependent properties. The solution is valid for a heterogeneous medium with smoothly varying properties. That is, the scale length of the heterogeneity must be significantly larger than the scale length over which the pressure increases from its initial value to its peak value. The resulting asymptotic expression is similar in form to the solution for pressure in a medium in which the flow properties are not functions of pressure. Both the expression for the pseudo-phase, which is related to the 'travel time' of the transient pressure disturbance, and the expression for the pressure amplitude contain modifications due to the pressure dependence of the medium. We apply the method to synthetic and observed pressure variations in a deforming medium. In the synthetic test we model one-dimensional propagation in a pressure-dependent medium. Comparisons with both an analytic self-similar solution and the results of a numerical simulation indicate general agreement. Furthermore, we are able to match pressure variations observed during a pulse test at the Coaraze Laboratory site in France.
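The generic shape of such an asymptotic solution can be written out. This is a hedged sketch of the form commonly used in this literature, not a quotation from the paper; the symbols σ (pseudo-phase) and κ (diffusivity) are labels introduced here:

```latex
% Frequency-domain pressure, expanded in inverse powers of \sqrt{\omega}
% (generic form; amplitude coefficients A_n carry the pressure dependence)
P(\mathbf{x},\omega) \approx e^{-\sqrt{-i\omega}\,\sigma(\mathbf{x})}
  \sum_{n=0}^{\infty} \frac{A_n(\mathbf{x})}{\left(\sqrt{-i\omega}\right)^{n}},
\qquad
\lvert \nabla \sigma \rvert^{2} = \frac{1}{\kappa(\mathbf{x})}
```

The eikonal-like equation for σ is what links the pseudo-phase to the 'travel time' of the pressure disturbance, and in a pressure-sensitive medium both κ and the A_n acquire the pressure-dependent modifications mentioned in the abstract.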

  13. Sensitivity and specificity of the tilt table test in young patients with unexplained syncope.

    Science.gov (United States)

    Fouad, F M; Sitthisook, S; Vanerio, G; Maloney, J; Okabe, M; Jaeger, F; Schluchter, M; Maloney, J D

    1993-03-01

    The usefulness of head-up tilt testing (HUT) in diagnosing vasovagal neuroregulatory syncope in the teenage population has been previously addressed. However, data concerning sensitivity and specificity are deficient due to the lack of control groups. We compared the response to HUT in young patients referred because of syncope or near syncope (n = 44, mean age 16 ± 3 years SD) with that of healthy young volunteers with a normal physical examination and no previous history of syncope (n = 18, mean age 16 ± 2 years), to determine the sensitivity and specificity of HUT. The graded tilt protocol was performed at 15°, 30°, and 45° (each for 2 min), and then 60° for 20 minutes. Cuff blood pressure was measured every minute and lead II ECG was continuously monitored. Twenty-five of the 44 patients (57%) developed a vasovagal response or became symptomatic after 13.8 ± 5.7 minutes of HUT. Three of the 18 volunteers (17%) had a vasovagal response and became symptomatic after 9 ± 3 minutes of HUT. There was no statistical difference among the four groups (with and without tilt-induced vasovagal response) in terms of age and baseline hemodynamic data. The sensitivity of 20 minutes of HUT was 57% and its specificity was 83%. The presyncopal hemodynamic response in patients with a history of syncope was characterized by a significant decrease in systolic blood pressure and a lack of increase in diastolic blood pressure compared with baseline and with the other groups. (ABSTRACT TRUNCATED AT 250 WORDS)
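The reported sensitivity and specificity follow directly from the counts in the abstract. A quick check of the arithmetic:

```python
# Counts taken from the abstract: 25 of 44 patients positive on HUT,
# 3 of 18 healthy volunteers positive on HUT
tp, fn = 25, 44 - 25     # patients with / without a positive test
fp, tn = 3, 18 - 3       # volunteers with / without a positive test

sensitivity = tp / (tp + fn)   # fraction of true syncope patients detected
specificity = tn / (tn + fp)   # fraction of healthy volunteers correctly negative
```

This reproduces the quoted 57% sensitivity (25/44) and 83% specificity (15/18).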

  14. The Laue diffraction method to search for a neutron EDM. Experimental test of the sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, V.V. E-mail: vfedorov@mail.pnpi.spb.ru; Lapin, E.G.; Lelievre-Berna, E.; Nesvizhevsky, V.V.; Petoukhov, A.K.; Semenikhin, S.Yu.; Soldner, T.; Tasset, F.; Voronin, V.V

    2005-01-01

    The feasibility of an experiment to search for the neutron electric dipole moment (EDM) by Laue diffraction in crystals without a center of symmetry was tested. At the PF1A beam of the ILL reactor a record time delay of τ ≈ 2 ms for the passage of neutrons through a quartz crystal was reached for the (1 1 0) plane and a diffraction angle of 88.5°. That corresponds to an effective neutron velocity in the crystal of 20 m/s, while the velocity of the incident neutrons was 800 m/s. It was shown experimentally that the value τN^(1/2), which determines the method's sensitivity, has a maximum for a Bragg angle of 86°. The results allow us to estimate the statistical sensitivity of the method for the neutron EDM. For the PF1B beam of the ILL reactor the sensitivity can reach ≈6×10⁻²⁵ e·cm per day for the available quartz crystal.

  15. A sensitive bacterial-growth-based test reveals how intestinal Bacteroides meet their porphyrin requirement.

    Science.gov (United States)

    Halpern, David; Gruss, Alexandra

    2015-12-29

    Bacteroides sp., dominant constituents of the human and animal intestinal microbiota, require porphyrins (i.e., protoporphyrin IX or iron-charged heme) for normal growth. The highly stimulatory effect of porphyrins on Bacteroides growth led us to propose their use as a potential determinant of bacterial colonization. However, showing a role for porphyrins would require sensitive detection methods that work in complex samples such as feces. We devised a highly sensitive semi-quantitative porphyrin detection method (detection limit 1-4 ng heme or PPIX) that can be used to assay pure or complex biological samples, based on Bacteroides growth stimulation. The test revealed that healthy colonized or non-colonized murine and human hosts provide porphyrins in feces, which stimulate Bacteroides growth. In addition, a common microbiota constituent, Escherichia coli, is shown to be a porphyrin donor, suggesting a novel basis for intestinal bacterial interactions. A highly sensitive method to detect porphyrins based on bacterial growth is devised and is functional in complex biological samples. Host feces, independently of their microbiota, and E. coli, which is present in the intestine, are shown to be porphyrin donors. The role of porphyrins as key bioactive molecules can now be assessed for their impact on Bacteroides and other bacterial populations in the gut.

  16. Self-healing mortar with pH-sensitive superabsorbent polymers: testing of the sealing efficiency by water flow tests

    Science.gov (United States)

    Gruyaert, Elke; Debbaut, Brenda; Snoeck, Didier; Díaz, Pilar; Arizo, Alejandro; Tziviloglou, Eirini; Schlangen, Erik; De Belie, Nele

    2016-08-01

    Superabsorbent polymers (SAPs) have potential to be used as a healing agent in self-healing concrete due to their ability to attract moisture from the environment and their capacity to promote autogenous healing. A possible drawback, however, is their uptake of mixing water during concrete manufacturing, resulting in an increased volume of macro-pores in the hardened concrete. To limit this drawback, new SAPs with a high swelling capacity and pH-sensitivity were developed and tested within the FP7 project HEALCON. Their self-sealing performance was evaluated through a water permeability test via water flow, a test method also developed within HEALCON. Three different sizes of the newly developed SAP were compared with a commercial SAP. Swelling tests in cement filtrate solution indicated that the commercial and in-house synthesized SAPs performed quite similarly, but the difference between the swelling capacity at pH 9 and pH 13 is more pronounced for the self-synthesized SAPs. Moreover, in comparison to the commercial SAPs, fewer macro-pores are formed in the cement matrix of mixes with self-synthesized SAPs and the effect on the mechanical properties is lower, but not negligible, when using high amounts of SAPs. Although the immediate sealing effect of cracks in mortar was the highest for the commercial SAPs, the in-house made SAPs with a particle size between 400 and 600 μm performed best with regard to crack closure (mainly CaCO3 precipitation) and self-sealing efficiency after exposing the specimens to 28 wet-dry cycles. Some specimens could even withstand a water pressure of 2 bar.

  17. Flow analysis with WaSiM-ETH – model parameter sensitivity at different scales

    Directory of Open Access Journals (Sweden)

    J. Cullmann

    2006-01-01

    Full Text Available WaSiM-ETH (Gurtz et al., 2001), a widely used water balance simulation model, is tested for its suitability for flow analysis in the context of rainfall-runoff modelling and flood forecasting. In this paper, special focus is on the resolution of the process domain in space as well as in time. We try to couple model runs with different calculation time steps in order to reduce the effort arising from calculating the whole flow hydrograph at the hourly time step. We aim at modelling at the daily time step for water balance purposes, switching to the hourly time step whenever high-resolution information is necessary (flood forecasting). WaSiM-ETH is used at different grid resolutions to assess whether the model can be transferred between spatial resolutions. We further use two different approaches for the overland flow time calculation within the sub-basins of the test watershed to gain insight into the process dynamics portrayed by the model. Our findings indicate that the model is very sensitive to time and space resolution and cannot be transferred across scales without recalibration.

  18. Parameter sensitivity analysis of stochastic models provides insights into cardiac calcium sparks.

    Science.gov (United States)

    Lee, Young-Seon; Liu, Ona Z; Hwang, Hyun Seok; Knollmann, Bjorn C; Sobie, Eric A

    2013-03-05

    We present a parameter sensitivity analysis method that is appropriate for stochastic models, and we demonstrate how this analysis generates experimentally testable predictions about the factors that influence local Ca²⁺ release in heart cells. The method involves randomly varying all parameters, running a single simulation with each set of parameters, running simulations with hundreds of model variants, then statistically relating the parameters to the simulation results using regression methods. We tested this method on a stochastic model, containing 18 parameters, of the cardiac Ca²⁺ spark. Results show that multivariable linear regression can successfully relate parameters to continuous model outputs such as Ca²⁺ spark amplitude and duration, and multivariable logistic regression can provide insight into how parameters affect Ca²⁺ spark triggering (a probabilistic process that is all-or-none in a single simulation). Benchmark studies demonstrate that this method is less computationally intensive than standard methods by a factor of 16. Importantly, predictions were tested experimentally by measuring Ca²⁺ sparks in mice with knockout of the sarcoplasmic reticulum protein triadin. These mice exhibit multiple changes in Ca²⁺ release unit structures, and the regression model both accurately predicts changes in Ca²⁺ spark amplitude (30% decrease in model, 29% decrease in experiments) and provides an intuitive and quantitative understanding of how much each alteration contributes to the result. This approach is therefore an effective, efficient, and predictive method for analyzing stochastic mathematical models to gain biological insight.
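The regression step can be sketched as follows. A made-up five-parameter stochastic "model" stands in for the 18-parameter spark model; after standardizing inputs and output, the fitted coefficients act as the sensitivity measures described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 5                    # simulations x parameters (18 in the paper)

# Randomly vary all parameters, one stochastic "simulation" per set;
# the true influences below are hypothetical, chosen for illustration
theta = rng.lognormal(0.0, 0.3, size=(n, p))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
amplitude = theta @ true_w + rng.normal(0, 0.2, n)   # noisy scalar output

# Standardize parameters and output, then multivariable linear regression;
# |b_j| ranks each parameter's influence on the output
Z = (theta - theta.mean(0)) / theta.std(0)
zy = (amplitude - amplitude.mean()) / amplitude.std()
b, *_ = np.linalg.lstsq(Z, zy, rcond=None)
```

For the all-or-none triggering output, the abstract's multivariable logistic regression plays the same role, with coefficients on the log-odds scale.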

  19. Free-fall tests for the qualification of ultra-sensitive accelerometers for space missions

    Science.gov (United States)

    Françoise, Liorzou; Pierre, Marque Jean; Santos Rodrigues, Manuel

    ONERA has long been developing accelerometers for space applications in the fields of Earth observation and fundamental physics. The most recent examples are the accelerometers embarked on the ESA GOCE mission launched in March 2009, dedicated to precise mapping of the Earth's gravity field, and the accelerometers of the CNES MICROSCOPE mission dedicated to the in-orbit test of the Equivalence Principle. These ultra-sensitive accelerometers are optimised for the space environment and operate over an acceleration range of less than 10⁻⁶ m/s² with an outstanding accuracy of around 10⁻¹² m/s²/√Hz. Testing them on the ground requires creating a low-gravity environment in order to verify their functionality, and partially their performance, before delivery for launch. Free-fall tests are the only way to obtain such a microgravity environment representative of space conditions. The presentation will show in a first part the results of the free-fall test campaigns performed in the 120-meter-high ZARM drop tower that led to the qualification of the GOCE accelerometers. In a second part, it will describe the test plan being conducted to assess the best free-fall environment for the MICROSCOPE accelerometers. In particular, some effort has been made by ZARM and ONERA to develop a dedicated "free-flyer" capsule in order to reduce the residual drag acceleration during the fall. Some results from the preliminary tests performed in preparation for the MICROSCOPE qualification campaign will also be presented.

  20. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with the movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions were used: 4 mm and 40 mm. Tests with current, with waves, and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to develop immediately after the structure has been placed. It is therefore difficult to establish a scour protection on the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure were used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with a flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on a sand bed was 1.6 times the cylinder diameter; 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees; 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present test virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  1. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA

    2017-05-01

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty sources for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally, driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional, spatially distributed parameters.
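The variance decomposition underlying such sensitivity indices can be illustrated on a toy model. A sketch of first-order Sobol' index estimation via the pick-and-freeze trick, using a made-up two-input function (not the Hanford model), where the first input is deliberately dominant:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Toy model: "boundary condition" x1 dominates, "permeability" x2 is minor
def model(x1, x2):
    return 4 * x1 + x2

# First-order Sobol index S_i = Var(E[y | x_i]) / Var(y), estimated by
# pick-and-freeze: two independent input matrices sharing only column i
def first_order(which):
    a, b = rng.random((2, N, 2))
    ab = b.copy()
    ab[:, which] = a[:, which]         # freeze input `which` across a and ab
    ya, yb, yab = model(*a.T), model(*b.T), model(*ab.T)
    return np.mean(ya * (yab - yb)) / np.var(ya)

S1, S2 = first_order(0), first_order(1)
```

For this linear model the exact values are S1 = 16/17 and S2 = 1/17, so the Monte Carlo estimates can be checked analytically; the hierarchical method in the abstract organizes such indices by uncertainty source.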

  2. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  3. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed...... as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
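The probit-link contrast for the A-Not A method can be sketched without sensR (the paper's own implementation is that R package). A minimal Python analogue with hypothetical response counts: d' is the difference of the inverse-normal-transformed hit and false-alarm rates, which is exactly a linear contrast on the probit scale.

```python
from statistics import NormalDist

# Hypothetical A-Not A counts, made up for illustration
hits, n_a = 60, 80        # "A" responses to true A samples
fas, n_not = 24, 80       # "A" responses to Not-A samples

z = NormalDist().inv_cdf  # probit transform (inverse standard normal CDF)

# d' as the probit-scale contrast between hit rate and false-alarm rate
d_prime = z(hits / n_a) - z(fas / n_not)
```

Fitting a probit GLM with a group indicator (A vs Not-A) as the predictor recovers the same quantity as the slope coefficient, which is the paper's point.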

  4. The sensitivity and specificity of a diagnostic test of sequence-space synesthesia.

    Science.gov (United States)

    Rothen, Nicolas; Jünemann, Kristin; Mealor, Andy D; Burckhardt, Vera; Ward, Jamie

    2016-12-01

    People with sequence-space synesthesia (SSS) report stable visuo-spatial forms corresponding to numbers, days, and months (amongst others). This type of synesthesia has intrigued scientists for over 130 years, but the lack of an agreed-upon tool for assessing it has held back research on this phenomenon. The present study builds on previous tests by measuring the consistency of spatial locations, which is known to discriminate controls from synesthetes. We document, for the first time, the sensitivity and specificity of such a test and suggest a diagnostic cut-off point for discriminating between the groups, based on the area bounded by different placement attempts with the same item.
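The "area bounded by different placement attempts" can be computed with the shoelace formula: a small area means consistent placements. A sketch with hypothetical normalized screen coordinates (the study's actual coordinates and cut-off value are not reproduced here):

```python
# Hypothetical (x, y) placements of one item across three attempts,
# in normalized screen coordinates
attempts = [(0.10, 0.20), (0.12, 0.22), (0.11, 0.18)]

def shoelace_area(pts):
    # Area of the polygon bounded by the placement attempts
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

area = shoelace_area(attempts)
```

A diagnostic rule of the kind described would then compare the mean per-item area against a cut-off, classifying small areas (consistent placements) as synesthetic.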

  5. Reaction norm model to describe environmental sensitivity across first lactation in dairy cattle under tropical conditions.

    Science.gov (United States)

    Bignardi, Annaiza Braga; El Faro, Lenira; Pereira, Rodrigo Junqueira; Ayres, Denise Rocha; Machado, Paulo Fernando; de Albuquerque, Lucia Galvão; Santana, Mário Luiz

    2015-10-01

    Reaction norm models have been widely used to study genotype by environment interaction (G × E) in animal breeding. The objective of this study was to describe environmental sensitivity across first lactation in Brazilian Holstein cows using a reaction norm approach. A total of 50,168 individual monthly test day (TD) milk yields (10 test days) from 7476 complete first lactations of Holstein cattle were analyzed. The statistical models for all traits (10 TDs and 305-day milk yield) included the fixed effects of contemporary group, age of cow (linear and quadratic effects), and days in milk (linear effect), except for 305-day milk yield. A hierarchical reaction norm model (HRNM) based on an unknown covariate was used. The present study showed the presence of G × E in milk yield across first lactation of Holstein cows. The variation in the heritability estimates implies differences in the response to selection depending on the environment where the animals of this population are evaluated. In the average environment, the heritability estimates for all traits ranged from 0.02 to 0.63. The scaling effect of G × E predominated throughout most of lactation. Particularly during the first 2 months of lactation, G × E caused reranking of breeding values. It is therefore important to include the environmental sensitivity of animals according to the phase of lactation in the genetic evaluations of Holstein cattle in tropical environments.
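
    The reranking effect the abstract describes follows directly from the linear reaction norm idea: each animal's genetic merit is an intercept plus a slope times an environmental covariate, so animals with different slopes can swap rank order along the environmental gradient. A minimal sketch with two hypothetical sires (the intercepts, slopes, and covariate values are invented for illustration):

    ```python
    import numpy as np

    def breeding_value(intercept, slope, env):
        """Linear reaction norm: genetic merit as a function of the environmental covariate."""
        return intercept + slope * np.asarray(env)

    # Standardized environmental gradient from poor (-1) to good (+1) environments.
    env = np.linspace(-1.0, 1.0, 5)

    # Sire A: high merit but environmentally insensitive; sire B: lower intercept, steep slope.
    bv_a = breeding_value(100.0, -5.0, env)
    bv_b = breeding_value(95.0, 10.0, env)

    print(bv_a > bv_b)   # ranking flips along the gradient: the reranking form of G × E
    ```

    The two norms cross where 100 − 5x = 95 + 10x, i.e. at x = 1/3, so sire A ranks first in the poorer environments and sire B in the better ones; a pure scaling effect, by contrast, would change the spread of breeding values without changing their order.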

  6. Analyzing the sensitivity of a flood risk assessment model towards its input data

    Science.gov (United States)

    Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene

    2016-11-01

    The Small Island Developing States are characterized by an unstable economy and low-lying, densely populated cities, resulting in a high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable. Satisfactory input data for these assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case