WorldWideScience

Sample records for model sensitivity studies

  1. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Aug 7, 2009 ... Sensitivity study of reduced models of the activated sludge process, for the purposes of parameter estimation and process optimisation: Benchmark process with ASM1 and UCT reduced biological models. S du Plessis and R Tzoneva*. Department of Electrical Engineering, Cape Peninsula University of ...

  2. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    The problem of derivation and calculation of sensitivity functions for all parameters of the mass balance reduced model of the COST benchmark activated sludge plant is formulated and solved. The sensitivity functions, equations and augmented sensitivity state space models are derived for the cases of ASM1 and UCT ...

  3. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contribution of anthropogenic Hg emissions, their speciation and vertical distribution are crucial to the simulated concentration and deposition fields, as is the choice of Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and the Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence is also evident over the Mediterranean, the North Sea and the Baltic Sea, and some influence is seen over continental Europe, while the difference is least over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3 / OH oxidation mechanism.
The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  4. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Yu sun, Park

    2007-01-01

Buoyancy-driven convective flow fields are steady circulatory flows established between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications. Application of natural convection can remarkably reduce costs and effort. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. Various turbulence models were applied to the turbulent flow using the commercial CFD code FLUENT. Results from each CFD model are compared with each other with respect to grid resolution and flow characteristics. It has been shown that: -) obtaining the general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results from grid resolutions finer than a threshold in y+, where y+ is defined as y+ = ρ*u*y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ being respectively the fluid density and the fluid viscosity; -) the K-ε models show a flow characteristic different from the K-ω models or the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for selecting the appropriate turbulence model to apply in the simulation
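The y+ criterion in this record can be sketched numerically. The fluid properties and cell spacing below are illustrative assumptions, not values from the study:

```python
# y+ = rho * u * y / mu, with u the wall friction velocity, y the normal
# distance from the cell center to the wall, rho the density and mu the
# dynamic viscosity. All numbers below are hypothetical illustrations.

def y_plus(rho, u_tau, y, mu):
    """Dimensionless wall distance of the first near-wall cell."""
    return rho * u_tau * y / mu

# Air-like properties and an assumed near-wall cell spacing:
yp = y_plus(rho=1.2, u_tau=0.05, y=1e-4, mu=1.8e-5)
print(round(yp, 3))  # prints 0.333
```

A y+ of this order would place the first cell center inside the viscous sublayer, which matters when choosing between low-Reynolds-number model formulations and wall-function treatments.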

  5. Orientation sensitive deformation in Zr alloys: experimental and modeling studies

    International Nuclear Information System (INIS)

    Srivastava, D.; Keskar, N.; Manikrishna, K.V.; Dey, G.K.; Jha, S.K.; Saibaba, N.

    2016-01-01

Zirconium alloys are used for fuel cladding and other structural components in pressurised heavy water reactors (PHWRs). Currently there is much interest in developing alloys for structural components for higher-temperature reactor operation. There is also a need to develop cladding material with better corrosion and mechanical properties for higher and extended burn-up applications. The performance of the cladding material is primarily influenced by microstructural features such as the constituent phases and their morphology, precipitate characteristics, the nature of defects, etc. Therefore, the microstructure is tailored to the performance requirement through controlled additions of alloying elements and thermo-mechanical treatments. In order to obtain the desired microstructure, it is important to know the deformation behaviour of the material. Orientation-dependent deformation behaviour was studied in Zr using a combination of experimental and modeling (both discrete dislocation dynamics and atomistic) methods. Under conditions of plane strain deformation, it was observed that single-phase Zr showed a significant extent of deformation heterogeneity based on local orientations. Discrete dislocation dynamics simulations incorporating multiple slip systems captured the orientation-sensitive deformation. MD simulations, on the other hand, brought out the fundamental difference between the various crystallographic orientations in determining the nucleation stress for dislocations. The deformed structure has been characterized using X-ray, electron and neutron diffraction techniques. The various operating deformation mechanisms are discussed in this presentation. (author)

  6. Sensitivity of tropospheric heating rates to aerosols: A modeling study

    International Nuclear Information System (INIS)

    Hanna, A.F.; Shankar, U.; Mathur, R.

    1994-01-01

The effect of aerosols on the radiation balance is critical to the energetics of the atmosphere. Because of the relatively long residence time of specific types of aerosols in the atmosphere and their complex thermal and chemical interactions, understanding their behavior is crucial for understanding global climate change. The authors used the Regional Particulate Model (RPM) to simulate aerosols in the eastern United States in order to identify the aerosol characteristics of specific rural and urban areas; these characteristics include size, concentration, and vertical profile. A radiative transfer model based on an improved δ-Eddington approximation with 26 spectral intervals spanning the solar spectrum was then used to analyze the tropospheric heating rates associated with these different aerosol distributions. The authors compared heating rates forced by differences in surface albedo associated with different land-use characteristics, and found that tropospheric heating and surface cooling are sensitive to surface properties such as albedo

  7. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
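The re-initialization rule described in this record can be sketched as follows; the scalar states stand in for the shallow-water model fields, and the numbers are purely illustrative:

```python
# At the end of each subinterval: the reference state is reset to the
# analysis, and the perturbed state to analysis + (perturbed - reference),
# so the perturbation survives while accumulated model drift is discarded.

def reinitialize(analysis, reference, perturbed):
    new_reference = analysis
    new_perturbed = analysis + (perturbed - reference)
    return new_reference, new_perturbed

ref, pert = 10.0, 12.5   # drifted model states after one subinterval
ana = 9.7                # analysis (observation-based) state
new_ref, new_pert = reinitialize(ana, ref, pert)
print(new_ref, new_pert)  # prints 9.7 12.2 -- the 2.5 perturbation is kept
```

The design choice is that only the difference between the two runs (the climate sensitivity signal) is carried forward, while both absolute states are re-anchored to the analysis at every subinterval boundary.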

  8. Model sensitivity studies of the decrease in atmospheric carbon tetrachloride

    Directory of Open Access Journals (Sweden)

    M. P. Chipperfield

    2016-12-01

Full Text Available Carbon tetrachloride (CCl4) is an ozone-depleting substance, which is controlled by the Montreal Protocol and whose atmospheric abundance is decreasing. However, the currently observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74 % of the total), but a reported 10 % uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9 % of the total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17 % of the total) has the largest impact on modelled CCl4 decay due to its sizeable contribution to CCl4 loss and its large lifetime uncertainty range (147 to 241 years). With an assumed CCl4 emission rate of 39 Gg year⁻¹, the reference simulation with the best estimate of the loss processes still underestimates the observed CCl4 (overestimates the decay) over the past 2 decades, but to a smaller extent than previous studies. Changes to the rates of the CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year⁻¹. Further progress in constraining the CCl4 budget is partly limited by
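The overall lifetime discussed in this record follows from the partial lifetimes of the individual sinks through the reciprocal sum 1/τ_total = Σ 1/τ_i. The sketch below uses partial lifetimes chosen to be roughly consistent with the sink fractions quoted above, but they are illustrative placeholders, not the paper's values:

```python
def total_lifetime(partial_lifetimes):
    """Combine per-sink partial lifetimes (years) into an overall lifetime."""
    return 1.0 / sum(1.0 / tau for tau in partial_lifetimes)

# Hypothetical partial lifetimes (years): photolysis, ocean uptake, soils.
taus = [44.0, 183.0, 375.0]
tau_total = total_lifetime(taus)
print(round(tau_total, 1))  # prints 32.4

# Each sink's share of the total loss is tau_total / tau_i:
shares = [round(tau_total / t, 2) for t in taus]
print(shares)  # prints [0.74, 0.18, 0.09]
```

This also shows why the ocean term dominates the uncertainty budget: varying its partial lifetime over a wide range shifts 1/τ_total by more than the quoted 10 % photolysis uncertainty does.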

  9. Eocene climate and Arctic paleobathymetry: A tectonic sensitivity study using GISS ModelE-R

    Science.gov (United States)

    Roberts, C. D.; Legrande, A. N.; Tripati, A. K.

    2009-12-01

The early Paleogene (65-45 million years ago, Ma) was a ‘greenhouse’ interval with global temperatures warmer than any other time in the last 65 Ma. This period was characterized by high levels of CO2, warm high-latitudes, warm surface and deep oceans, and an intensified hydrological cycle. Sediments from the Arctic suggest that the Eocene surface Arctic Ocean was warm, brackish, and episodically enabled the freshwater fern Azolla to bloom. The precise mechanisms responsible for the development of these conditions remain uncertain. We present equilibrium climate conditions derived from a fully-coupled, water-isotope enabled, general circulation model (GISS ModelE-R) configured for the early Eocene. We also present model-data comparison plots for key climatic variables (SST and δ18O) and analyses of the leading modes of variability in the tropical Pacific and North Atlantic regions. Our tectonic sensitivity study indicates that Northern Hemisphere climate would have been very sensitive to the degree of oceanic exchange through the seaways connecting the Arctic to the Atlantic and Tethys. By restricting these seaways, we simulate freshening of the surface Arctic Ocean to ~6 psu and warming of sea-surface temperatures by 2°C in the North Atlantic and 5-10°C in the Labrador Sea. Our results may help explain the occurrence of low-salinity tolerant taxa in the Arctic Ocean during the Eocene and provide a mechanism for enhanced warmth in the north-western Atlantic. We also suggest that the formation of a volcanic land-bridge between Greenland and Europe could have caused increased ocean convection and warming of intermediate waters in the Atlantic. If true, this result is consistent with the theory that bathymetry changes may have caused thermal destabilisation of methane clathrates in the Atlantic.

  10. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been carried out for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more precise distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
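The first-order Sobol indices mentioned in this record can be estimated by plain Monte Carlo with a pick-freeze design. The sketch below applies it to a toy linear model (not the Unified Danish Eulerian Model), for which the exact index is known:

```python
import random

# Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i:
# S_i = Cov(f(A), f(AB_i)) / Var(f(A)), where AB_i is sample B with
# coordinate i replaced ("frozen") from sample A, so f(A) and f(AB_i)
# share only input i.

def sobol_first_order(f, dim, index, n=100_000, seed=42):
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(dim)] for _ in range(n)]
    b = [[rng.random() for _ in range(dim)] for _ in range(n)]
    ab = [rb[:index] + [ra[index]] + rb[index + 1:] for ra, rb in zip(a, b)]
    fa = [f(x) for x in a]
    fab = [f(x) for x in ab]
    mean_a = sum(fa) / n
    mean_ab = sum(fab) / n
    var = sum((y - mean_a) ** 2 for y in fa) / n
    cov = sum(ya * yb for ya, yb in zip(fa, fab)) / n - mean_a * mean_ab
    return cov / var

# Toy model f(x1, x2) = 4*x1 + x2 with independent U(0, 1) inputs:
# analytically S_1 = 16/17 ≈ 0.94 and S_2 = 1/17 ≈ 0.06.
s1 = sobol_first_order(lambda x: 4 * x[0] + x[1], dim=2, index=0)
print(round(s1, 2))
```

In practice the quasi-Monte Carlo variants compared in the record replace `rng.random()` with Sobol-sequence or lattice-rule points to reduce the integration error, which matters most for the small-valued indices the abstract highlights.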

  11. High-Level Waste Glass Formulation Model Sensitivity Study 2009 Glass Formulation Model Versus 1996 Glass Formulation Model

    International Nuclear Information System (INIS)

    Belsher, J.D.; Meinert, F.L.

    2009-01-01

    This document presents the differences between two HLW glass formulation models (GFM): The 1996 GFM and 2009 GFM. A glass formulation model is a collection of glass property correlations and associated limits, as well as model validity and solubility constraints; it uses the pretreated HLW feed composition to predict the amount and composition of glass forming additives necessary to produce acceptable HLW glass. The 2009 GFM presented in this report was constructed as a nonlinear optimization calculation based on updated glass property data and solubility limits described in PNNL-18501 (2009). Key mission drivers such as the total mass of HLW glass and waste oxide loading are compared between the two glass formulation models. In addition, a sensitivity study was performed within the 2009 GFM to determine the effect of relaxing various constraints on the predicted mass of the HLW glass.

  12. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and aims to determine how sensitive the model output is to variations in the processes.
Once the processes that exert the major influence in

  13. Sensitivity studies using the TRNSM 2 computerized model for the NRC physical protection project. Final report

    International Nuclear Information System (INIS)

    Anderson, G.M.

    1979-08-01

A computerized model of the transportation system for shipment of nuclear fuel cycle materials is required to investigate the effects on fleet size, fleet composition and efficiency of fleet utilization resulting from changes in a variety of physical and regulatory factors, including shipping requirements, security regulations, work rules, maintenance requirements, and vehicle capacities. Such a model has been developed which provides a capability for complete sizing-requirement studies of a combined aircraft and truck fleet. This report presents the results of a series of sensitivity studies performed using this model. These studies include the effects of the itinerary optimization criteria, work rules, and maintenance policies. The results demonstrate the effectiveness and versatility of the model for investigating the effects of a wide variety of physical and regulatory factors on the transportation fleet

  14. Parameter sensitivity study of a Field II multilayer transducer model on a convex transducer

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2009-01-01

A multilayer transducer model for predicting a transducer impulse response has in earlier works been developed and combined with the Field II software. This development was tested on current, voltage, and intensity measurements on piezoceramic discs (Bæk et al. IUS 2008) and a convex 128 element...... ultrasound imaging transducer (Bæk et al. ICU 2009). The model benefits from its 1D simplicity and has been shown to give an amplitude error around 1.7‐2 dB. However, any prediction of amplitude, phase, and attenuation of pulses relies on the accuracy of manufacturer-supplied material characteristics, which may...... is a quantitatively calibrated model for a complete ultrasound system. This includes a sensitivity study as presented here. Statement of Contribution/Methods: The study alters 35 different model parameters which describe a 128 element convex transducer from BK Medical Aps. The changes are within ±20 % of the values...

  15. Sensitivity of inferred climate model skill to evaluation decisions: a case study using CMIP5 evapotranspiration

    International Nuclear Information System (INIS)

    Schwalm, Christopher R; Huntinzger, Deborah N; Michalak, Anna M; Fisher, Joshua B; Kimball, John S; Mueller, Brigitte; Zhang, Ke; Zhang Yongqiang

    2013-01-01

    Confrontation of climate models with observationally-based reference datasets is widespread and integral to model development. These comparisons yield skill metrics quantifying the mismatch between simulated and reference values and also involve analyst choices, or meta-parameters, in structuring the analysis. Here, we systematically vary five such meta-parameters (reference dataset, spatial resolution, regridding approach, land mask, and time period) in evaluating evapotranspiration (ET) from eight CMIP5 models in a factorial design that yields 68 700 intercomparisons. The results show that while model–data comparisons can provide some feedback on overall model performance, model ranks are ambiguous and inferred model skill and rank are highly sensitive to the choice of meta-parameters for all models. This suggests that model skill and rank are best represented probabilistically rather than as scalar values. For this case study, the choice of reference dataset is found to have a dominant influence on inferred model skill, even larger than the choice of model itself. This is primarily due to large differences between reference datasets, indicating that further work in developing a community-accepted standard ET reference dataset is crucial in order to decrease ambiguity in model skill. (letter)

  16. Validity of Quinpirole Sensitization Rat Model of OCD: Linking Evidence from Animal and Clinical Studies.

    Science.gov (United States)

    Stuchlik, Ales; Radostová, Dominika; Hatalova, Hana; Vales, Karel; Nekovarova, Tereza; Koprivova, Jana; Svoboda, Jan; Horacek, Jiri

    2016-01-01

    Obsessive-compulsive disorder (OCD) is a neuropsychiatric disorder with 1-3% prevalence. OCD is characterized by recurrent thoughts (obsessions) and repetitive behaviors (compulsions). The pathophysiology of OCD remains unclear, stressing the importance of pre-clinical studies. The aim of this article is to critically review a proposed animal model of OCD that is characterized by the induction of compulsive checking and behavioral sensitization to the D2/D3 dopamine agonist quinpirole. Changes in this model have been reported at the level of brain structures, neurotransmitter systems and other neurophysiological aspects. In this review, we consider these alterations in relation to the clinical manifestations in OCD, with the aim to discuss and evaluate axes of validity of this model. Our analysis shows that some axes of validity of quinpirole sensitization model (QSM) are strongly supported by clinical findings, such as behavioral phenomenology or roles of brain structures. Evidence on predictive validity is contradictory and ambiguous. It is concluded that this model is useful in the context of searching for the underlying pathophysiological basis of the disorder because of the relatively strong biological similarities with OCD.

  17. Sensitivity of hydrological modeling to meteorological data and implications for climate change studies

    International Nuclear Information System (INIS)

    Roy, L.G.; Roy, R.; Desrochers, G.E.; Vaillancourt, C.; Chartier, I.

    2008-01-01

There are uncertainties associated with the use of hydrological models. This study aims to analyse one source of uncertainty associated with hydrological modeling, particularly in the context of climate change studies on water resources. An additional intent of this study is to compare the ability of several meteorological data sources, used in conjunction with a hydrological model, to reproduce the hydrologic regime of a watershed. A case study on a watershed of south-western Quebec, Canada, using five different sources of meteorological data as input to an offline hydrological model is presented in this paper. Data came from weather stations, NCEP reanalysis, ERA40 reanalysis and two Canadian Regional Climate Model (CRCM) runs driven by the NCEP and ERA40 reanalyses, which provide atmospheric boundary conditions to this limited-area climate model. To investigate the sensitivity of simulated streamflow to different sources of meteorological data, we first calibrated the hydrological model with each of the meteorological data sets over the 1961-1980 period. The five different sets of hydrological model parameters were then used to simulate streamflow for the 1981-2000 validation period with the five meteorological data sets as inputs. The 25 simulated streamflow series were compared to the observed streamflow of the watershed. The five meteorological data sets do not have the same ability, when used with the hydrological model, to reproduce streamflow. Our results also show that the hydrological model parameters used may have an important influence on results such as the water balance, but this is linked to differences in the characteristics of the meteorological data used. For climate change impact assessments on water resources, we found that there is an uncertainty associated with the meteorological data used to calibrate the model. For expected changes in mean annual flows of the Chateauguay River, our results vary from a small

  18. Sensitivity Studies on Revised PSA Model of KHNP Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Hyun-Gyo; Hwang, Seok-Won; Shin, Tae-Young

    2016-01-01

Korea, too, performed a safety re-evaluation of all its nuclear power plants, led by the Korean regulator, which elicited 49 improvement factors for the plants. One of those factors is Severe Accident Management Guideline (SAMG) development; KHNP decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full-power PSA models of all operating plants to enhance guideline quality. In this paper we discuss the effectiveness of post-Fukushima equipment and improvements at each plant based on the results of the revised full-power PSA and the newly developed LPSD PSA. Through sensitivity analysis based on the revised PSA models, we confirmed that the facilities installed or planned for installation as follow-up measures to the Fukushima accident helped to enhance the safety of nuclear power plants. These results will provide various technical insights to scheduled studies that evaluate the effectiveness of Fukushima post-accident action items and develop accident management guidelines. They will also contribute to improving nuclear power plant safety

  19. Dynamic plantwide modeling, uncertainty and sensitivity analysis of a pharmaceutical upstream synthesis: Ibuprofen case study

    DEFF Research Database (Denmark)

    Montes, Frederico C. C.; Gernaey, Krist; Sin, Gürkan

    2018-01-01

A dynamic plantwide model was developed for the synthesis of the Active Pharmaceutical Ingredient (API) ibuprofen, following the Hoechst synthesis process. The kinetic parameters, reagents, products and by-products of the different reactions were adapted from the literature, and the different process...... operations integrated until the end process, crystallization and isolation of the ibuprofen crystals. The dynamic model simulations were validated against available measurements from the literature and then used as an enabling tool to analyze the robustness of the design space. To this end, sensitivity of the design...... space towards input disturbances and process uncertainties (from physical and model parameters) is studied using Monte Carlo simulations. The results quantify the uncertainty of the product quality attributes, with particular focus on the crystal size distribution and the amount of ibuprofen crystallized. The ranking...

  20. Sensitivity Studies on Revised PSA Model of KHNP Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

Lee, Hyun-Gyo; Hwang, Seok-Won; Shin, Tae-Young [KHNP, Daejeon (Korea, Republic of)]

    2016-10-15

Korea, too, performed a safety re-evaluation of all its nuclear power plants, led by the Korean regulator, which elicited 49 improvement factors for the plants. One of those factors is Severe Accident Management Guideline (SAMG) development; KHNP decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full-power PSA models of all operating plants to enhance guideline quality. In this paper we discuss the effectiveness of post-Fukushima equipment and improvements at each plant based on the results of the revised full-power PSA and the newly developed LPSD PSA. Through sensitivity analysis based on the revised PSA models, we confirmed that the facilities installed or planned for installation as follow-up measures to the Fukushima accident helped to enhance the safety of nuclear power plants. These results will provide various technical insights to scheduled studies that evaluate the effectiveness of Fukushima post-accident action items and develop accident management guidelines. They will also contribute to improving nuclear power plant safety.

  1. Laboratory measurements and model sensitivity studies of dust deposition ice nucleation

    Directory of Open Access Journals (Sweden)

    G. Kulkarni

    2012-08-01

Full Text Available We investigated the ice nucleating properties of mineral dust particles to understand the sensitivity of simulated cloud properties to two different representations of the contact angle in Classical Nucleation Theory (CNT). These contact angle representations are based on two sets of laboratory deposition ice nucleation measurements: Arizona Test Dust (ATD) particles of 100, 300 and 500 nm sizes were tested at three different temperatures (−25, −30 and −35 °C), and 400 nm ATD and kaolinite dust species were tested at two different temperatures (−30 and −35 °C). These measurements were used to derive the onset relative humidity with respect to ice (RHice) required to activate 1% of the dust particles as ice nuclei, from which the onset single contact angles were then calculated based on CNT. For the probability density function (PDF) representation, the parameters of the log-normal contact angle distribution were determined by fitting the CNT-predicted activated fraction to the measurements at different RHice. Results show that the onset single contact angles vary from ~18 to 24 degrees, while the PDF parameters are sensitive to the measurement conditions (i.e. temperature and dust size). Cloud modeling simulations were performed to understand the sensitivity of cloud properties (i.e. ice number concentration, ice water content, and cloud initiation times) to the representation of the contact angle and the PDF distribution parameters. The model simulations show that cloud properties are sensitive to the onset single contact angles and the PDF distribution parameters. The comparison of our experimental results with other studies shows that under similar measurement conditions the onset single contact angles are consistent within ±2.0 degrees, while our derived PDF parameters show larger discrepancies.

  2. Sensitivity analysis of the boundary layer height on idealised cities (model study)

    Energy Technology Data Exchange (ETDEWEB)

    Schayes, G. [Univ. of Louvain, Louvain-la-Neuve (Belgium); Grossi, P. [Joint Research Center, Ispra (Italy)

    1997-10-01

    The behaviour of the typical diurnal variation of the atmospheric boundary layer (ABL) over cities is a complex function of numerous environmental parameters. Two types of geographical situation have been retained: (i) an inland city surrounded only by uniform fields, and (ii) a coastal city, thus influenced by the sea/land breeze effect. We have used the three-dimensional Thermal Vorticity-mode Mesoscale (TVM) model developed jointly by the UCL (Belgium) and JRC (Italy). In this study it has been used in 2-D mode, allowing us to perform many sensitivity runs. This implies that a kind of infinitely wide city has effectively been simulated, but this does not affect the conclusions for the ABL height. The sensitivity study has been performed for two turbulence closure schemes, for various assumptions about the ABL height definition in the model, and for a selected parameter, the soil water content. (LN)

  3. Sensitivity and uncertainty studies of the CRAC2 code for selected meteorological models and parameters

    International Nuclear Information System (INIS)

    Ward, R.C.; Kocher, D.C.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.

    1985-01-01

    We have studied the sensitivity of results from the CRAC2 computer code, which predicts health impacts from a reactor-accident scenario, to uncertainties in selected meteorological models and parameters. The sources of uncertainty examined include the models for plume rise and wet deposition and the meteorological bin-sampling procedure. An alternative plume-rise model usually had little effect on predicted health impacts. In an alternative wet-deposition model, the scavenging rate depends only on storm type, rather than on rainfall rate and atmospheric stability class as in the CRAC2 model. Use of the alternative wet-deposition model in meteorological bin-sampling runs decreased predicted mean early injuries by as much as a factor of 2-3 and, for large release heights and sensible heat rates, decreased mean early fatalities by nearly an order of magnitude. The bin-sampling procedure in CRAC2 was expanded by dividing each rain bin into four bins that depend on rainfall rate. Use of the modified bin structure in conjunction with the CRAC2 wet-deposition model changed all predicted health impacts by less than a factor of 2. 9 references

  4. Computational Fluid Dynamics Modeling Of Scaled Hanford Double Shell Tank Mixing - CFD Modeling Sensitivity Study Results

    International Nuclear Information System (INIS)

    Jackson, V.L.

    2011-01-01

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  5. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    Science.gov (United States)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

    Numerical weather modelling has gained considerable attention in the field of hydrology, especially for un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the York Flood of 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.

  6. Modelling ESCOMPTE episodes with the CTM MOCAGE. Part 2 : sensitivity studies.

    Science.gov (United States)

    Dufour, A.; Amodei, M.; Brocheton, F.; Michou, M.; Peuch, V.-H.

    2003-04-01

    The multi-scale CTM MOCAGE has been applied to study pollution episodes documented during the ESCOMPTE field campaign in June-July 2001 in south-eastern France (http://medias.obs-mip.fr/escompte). Several sensitivity studies have been performed on the basis of the 2nd IOP, covering 6 continuous days. The main objective of the present work is to investigate the question of chemical boundary conditions, on the vertical as well as the horizontal boundaries, for regional air quality simulations of several days. This issue, which has often tended to be oversimplified (use of a fixed continental climatology), raises increasing interest, particularly with the perspective of assimilating space-borne tropospheric chemistry data in global models. In addition, we have examined how resolution refinements impact the quality of the model outputs, at the surface and at altitude, against the observational database of dynamics and chemistry: the resolution of the model, by way of the four nested domains (from 2° to 0.01°), but also the resolution of the emission inventories (from 1° to 0.01°). Lastly, the impact of refining the representation of chemistry has been assessed by using either detailed chemical schemes, such as RAM or SAPRC, or schemes used in global modelling, which account for only a limited number of volatile hydrocarbons.

  7. Context Sensitive Modeling of Cancer Drug Sensitivity.

    Directory of Open Access Journals (Sweden)

    Bo-Juen Chen

    Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource towards developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer type is hampered by small sample size. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should, and should not, be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.

  8. Sensitivity study of the Storegga Slide tsunami using retrogressive and visco-plastic rheology models

    Science.gov (United States)

    Kim, Jihwan; Løvholt, Finn

    2016-04-01

    Enormous submarine landslides having volumes up to thousands of km3 and long run-out may cause tsunamis with widespread effects. Clay-rich landslides, such as Trænadjupet and Storegga offshore Norway commonly involve retrogressive mass and momentum release mechanisms that affect the tsunami generation. As a consequence, the failure mechanisms, soil parameters, and release rate of the retrogression are of importance for the tsunami generation. Previous attempts to model the tsunami generation due to retrogressive landslides are few, and limited to idealized conditions. Here, a visco-plastic model including additional effects such as remolding, time dependent mass release, and hydrodynamic resistance, is employed for simulating the Storegga Slide. As landslide strength parameters and their evolution in time are uncertain, it is necessary to conduct a sensitivity study to shed light on the tsunamigenic processes. The induced tsunami is simulated using Geoclaw. We also compare our tsunami simulations with recent analysis conducted using a pure retrogressive model for the landslide, as well as previously published results using a block model. The availability of paleotsunami run-up data and detailed slide deposits provides a suitable background for improved understanding of the slide mechanics and tsunami generation. The research leading to these results has received funding from the Research Council of Norway under grant number 231252 (Project TsunamiLand) and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE).

  9. Modeling and Sensitivity Study of Consensus Algorithm-Based Distributed Hierarchical Control for DC Microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Roldan Perez, Javier

    2016-01-01

    Distributed control methods based on consensus algorithms have become popular in recent years for microgrid (MG) systems. These kinds of algorithms can be applied to share information in order to coordinate multiple distributed generators within a MG. However, stability analysis becomes challenging: because of the delays in the communication network, continuous-time methods can be inaccurate for this kind of dynamic study. Therefore, this paper aims at modeling a complete DC MG using a discrete-time approach in order to perform a sensitivity analysis taking into account the effects of the consensus algorithm. To this end, a generalized modeling method is proposed and the influence of key control parameters, the communication topology and the communication speed are studied in detail. The theoretical results obtained with the proposed model are verified by comparing them with the results obtained with a detailed switching model.

  10. Sensitivity study of the Continuous Release Dispersion Model (CRDM) for radioactive pollutants

    International Nuclear Information System (INIS)

    Camacho, F.

    1987-08-01

    The Continuous Release Dispersion Model (CRDM) is used to calculate the spatial distribution of pollutants and their radiation doses in the event of accidental releases of radioactive material from nuclear generating stations. A sensitivity analysis of the CRDM was carried out to develop a method for quantifying the expected output uncertainty due to inaccuracies and uncertainties in the input values. A simulation approach was used to explore the behaviour of the sensitivity functions. It was found that the most sensitive variable is wind speed, the least sensitive is the ambient temperature, and that the largest values of normalized concentrations are likely to occur for small values of wind speed and highly stable atmospheric conditions. It was also shown that an error of between 10% and 25% should be expected in the output values for a 1% overall error in the input values, and this factor could be much larger in certain situations
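    The strong sensitivity to wind speed reported above follows directly from the inverse dependence on wind speed in a standard Gaussian plume formulation. The CRDM's exact equations are not reproduced in the abstract, so the sketch below is a generic textbook plume model with illustrative dispersion parameters, not the CRDM itself:

```python
import math

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Gaussian plume concentration (per unit release rate q) for wind
    speed u (m/s), dispersion parameters sigma_y, sigma_z (m), crosswind
    offset y (m), receptor height z (m) and effective release height h (m).
    Includes the standard ground-reflection term."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def wind_speed_sensitivity(u, du=0.01):
    """Normalized sensitivity (dC/C)/(du/u) of ground-level centerline
    concentration to wind speed, by forward finite differences.
    Dispersion parameters and release height are illustrative."""
    c0 = plume_concentration(1.0, u, 100.0, 50.0, 0.0, 0.0, 30.0)
    c1 = plume_concentration(1.0, u * (1 + du), 100.0, 50.0, 0.0, 0.0, 30.0)
    return ((c1 - c0) / c0) / du
```

    With the dispersion parameters held fixed, concentration scales as 1/u, so the normalized sensitivity is close to −1 everywhere, and concentrations grow without bound as the wind speed tends to zero, consistent with the abstract's finding that the largest normalized concentrations occur at low wind speeds.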

  11. Sensitivity study of cloud/radiation interaction using a second order turbulence radiative-convective model

    International Nuclear Information System (INIS)

    Kao, C.Y.J.; Smith, W.S.

    1993-01-01

    A high-resolution one-dimensional version of a second-order turbulence convective/radiative model, developed at the Los Alamos National Laboratory, was used to conduct a sensitivity study of a stratocumulus cloud deck, based on data taken at San Nicolas Island during the intensive field observation marine stratocumulus phase of the First International Satellite Cloud Climatology Program (ISCCP) Regional Experiment (FIRE IFO), conducted during July 1987. Initial profiles for liquid water potential temperature and total water mixing ratio were abstracted from the FIRE data. The dependence of the diurnal behaviour of liquid water content, cloud top height, and cloud base height on variations in subsidence rate, sea surface temperature, and initial inversion strength was examined. The modelled diurnal variation in the column-integrated liquid water agrees quite well with the observed data for the case of low subsidence. The modelled diurnal behaviour of the cloud top and base heights shows qualitative agreement with the FIRE data, although the overall height of the cloud layer is about 200 meters too high

  12. A sensitivity study of the thermomechanical far-field model of Yucca Mountain

    International Nuclear Information System (INIS)

    Brandshaug, T.

    1991-04-01

    A sensitivity study has been conducted investigating the predicted thermal and mechanical behavior of the far-field model of a proposed nuclear waste repository at Yucca Mountain. The model input parameters and phenomena investigated include areal power density, thermal conductivity, specific heat capacity, material density, pore water boiling, stratigraphic and topographic simplifications, Young's modulus, Poisson's ratio, coefficient of thermal expansion, in situ stress, rock matrix cohesion, rock matrix angle of internal friction, rock joint cohesion, and rock joint angle of internal friction. Using the range of values currently associated with these parameters, predictions were obtained for rock temperatures, stresses, matrix failure, and joint activity throughout the far-field model. Results show that the range considered for the areal power density has the most significant effect on the predicted rock temperatures. The range considered for the in situ stress has the most significant effect on the prediction of rock stresses and factors of safety for the matrix and joints. Predictions of matrix and joint factors of safety are also influenced significantly by the use of stratigraphic and topographic simplifications. 16 refs., 75 figs., 13 tabs

  13. Sensitivity study of optimal CO2 emission paths using a simplified structural integrated assessment model (SIAM)

    International Nuclear Information System (INIS)

    Hasselmann, K.; Hasselmann, S.; Giering, R.; Ocana, V.; Storch, H. von

    1997-01-01

    A structurally highly simplified, globally integrated coupled climate-economic cost model, SIAM (Structural Integrated Assessment Model), is used to compute optimal paths of global CO2 emissions that minimize the net sum of climate damage and mitigation costs, and to study the sensitivity of the computed optimal emission paths. The climate module is represented by a linearized impulse-response model calibrated against a coupled ocean-atmosphere general circulation climate model and a three-dimensional global carbon-cycle model. The cost terms are represented by expressions that make the underlying input assumptions explicit. These include the discount rates for mitigation and damage costs, the inertia of the socio-economic system, and the dependence of climate damages on the change in temperature and the rate of change of temperature. Different assumptions regarding these parameters are believed to cause the marked divergences of existing cost-benefit analyses. The long memory of the climate system implies that very long time horizons of several hundred years need to be considered to optimize CO2 emissions on time scales relevant for a policy of sustainable development. Cost-benefit analyses over shorter time scales of a century or two can lead to dangerous underestimates of the long-term climate impact of increasing greenhouse-gas emissions. To avert a major long-term global warming, CO2 emissions need to be reduced ultimately to very low levels. This may be done slowly, but it should not be interpreted as providing a time cushion for inaction: the transition becomes more costly the longer the necessary mitigation policies are delayed. However, the long time horizon provides adequate flexibility for later adjustments. Short-term energy conservation alone is insufficient and can be viewed only as a useful measure in support of the necessary long-term transition to carbon-free energy technologies. 46 refs., 9 figs., 2 tabs
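    A linearized impulse-response climate module of the kind described above amounts to convolving the emission path with a Green's function that is a sum of decaying exponentials; the slowest mode carries the "long memory" the abstract emphasizes. A minimal sketch of that structure, with invented placeholder amplitudes and time scales rather than the calibrated SIAM values:

```python
import math

# Illustrative response amplitudes (K per GtC) and time scales (years);
# the real values are calibrated against a coupled GCM and a carbon-cycle model.
AMPLITUDES = [0.002, 0.001]
TIMESCALES = [10.0, 400.0]

def green_function(t):
    """Temperature response t years after a unit (1 GtC) emission pulse."""
    return sum(a * math.exp(-t / tau) for a, tau in zip(AMPLITUDES, TIMESCALES))

def temperature_path(emissions):
    """Discrete convolution of an annual emission path (GtC/yr) with the
    impulse response: dT(t) = sum_{s<=t} G(t-s) * E(s)."""
    n = len(emissions)
    return [sum(green_function(t - s) * emissions[s] for s in range(t + 1))
            for t in range(n)]
```

    With constant emissions, the warming keeps rising for centuries because of the slow (here 400-year) mode, which is why cost-benefit analyses over only a century or two understate the long-term impact.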

  14. Investigation in clinical potential of polarization sensitive optical coherence tomography in laryngeal tumor model study

    Science.gov (United States)

    Zhou, Xin; Oak, Chulho; Ahn, Yeh-Chan; Kim, Sung Won; Tang, Shuo

    2018-02-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is capable of measuring tissue birefringence. It has been widely applied to assess the birefringence of tissues such as skin and cartilage. The vocal cord consists of three anatomical layers, from the surface inward: the epithelium, which contains almost no collagen; the lamina propria, which is composed of abundant collagen; and the vocalis muscle layer. Due to the variation in the organization of collagen fibers, the different tissue layers show different birefringence, which can be evaluated by PS-OCT phase retardation measurements. Furthermore, collagen fibers in healthy connective tissues are usually well organized, which produces relatively high birefringence. When the collagen organization is destroyed by diseases such as tumors, the birefringence of the tissue decreases. In this study, a rabbit laryngeal tumor model with different stages of tumor progression is investigated ex vivo by PS-OCT. The PS-OCT images show a gradual decrease in birefringence from normal tissue to severe tumor tissue. A phase retardation slope-based analysis is conducted to distinguish the epithelium, lamina propria, and muscle layers, and the phase retardation slope quantifies the birefringence of the different layers. The quantitative study provides a more detailed comparison among different stages of the rabbit laryngeal tumor model. The PS-OCT results are validated by the corresponding histology images of the same samples.

  15. Sensitivity study of the wet deposition schemes in the modelling of the Fukushima accident.

    Science.gov (United States)

    Quérel, Arnaud; Quélo, Denis; Roustan, Yelva; Mathieu, Anne; Kajino, Mizuo; Sekiyama, Thomas; Adachi, Kouji; Didier, Damien; Igarashi, Yasuhito

    2016-04-01

    The Fukushima Daiichi release of radioactivity is a relevant event for studying the atmospheric dispersion modelling of radionuclides. In particular, atmospheric deposition onto the ground may be studied through the map of measured Cs-137 established after the accident. The limits of detection were low enough to make measurements possible as far as 250 km from the nuclear power plant. This large-scale deposition has been modelled with the Eulerian model ldX. However, several weeks of emissions under multiple weather conditions make it a real challenge. Besides, these measurements represent the accumulated deposition of Cs-137 over the whole period and do not indicate which deposition mechanisms were involved: in-cloud, below-cloud or dry deposition. A comprehensive sensitivity analysis is performed in order to understand the wet deposition mechanisms. It has been shown in a previous study (Quérel et al., 2016) that the choice of the wet deposition scheme has a strong impact on the assessment of the deposition patterns. Nevertheless, a "best" scheme could not be identified, as the ranking differs according to the statistical indicators considered (correlation, figure of merit in space and factor of 2). One possible explanation for the difficulty of discriminating between schemes is the uncertainty in the modelling, resulting for instance from the meteorological data: if the movement of the plume is not properly modelled, the deposition processes are applied with an inaccurate activity in the air. In the framework of the SAKURA project, an MRI-IRSN collaboration, new meteorological fields at higher resolution (Sekiyama et al., 2013) were provided, allowing the previous study to be reconsidered. An updated study including these new meteorological data is presented. In addition, a focus was placed on several releases causing deposition in localized areas during known periods. This helps to better understand the deposition mechanisms involved following the accident.

  16. Environmental Impacts of a Multi-Borehole Geothermal System: Model Sensitivity Study

    Science.gov (United States)

    Krol, M.; Daemi, N.

    2017-12-01

    Problems associated with fossil fuel consumption have increased worldwide interest in discovering and developing sustainable energy systems. One such system is geothermal heating, which uses the constant temperature of the ground to heat or cool buildings. Since geothermal heating offers low maintenance, high heating/cooling comfort, and a low carbon footprint compared to conventional systems, there has been an increasing trend towards equipping large buildings with geothermal heating. However, little is known about the potential environmental impact geothermal heating can have on the subsurface, such as the creation of subsurface thermal plumes or changes in groundwater flow dynamics. In the present study, the environmental impacts of a closed-loop ground source heat pump (GSHP) system were examined with respect to different system parameters. To do this, a three-dimensional model developed using FEFLOW was used to examine the thermal plumes resulting from ten years of operation of a vertical closed-loop GSHP system with multiple boreholes. A thermal load typical of an office building located in Canada was calculated, and groundwater flow and heat transport in the geological formation were simulated. The resulting thermal plumes were studied and a sensitivity analysis was conducted to determine the effect of parameters such as groundwater flow and soil type on the development and movement of the thermal plumes. Since thermal plumes can affect the efficiency of a GSHP system, this study provides insight into important system parameters.

  17. SR-Site Pre-modelling: Sensitivity studies of hydrogeological model variants for the Laxemar site using CONNECTFLOW

    Energy Technology Data Exchange (ETDEWEB)

    Joyce, Steven; Hoek, Jaap; Hartley, Lee (Serco (United Kingdom)); Marsic, Niko (Kemakta Konsult AB, Stockholm (Sweden))

    2010-12-15

    This study investigated a number of potential model variants of the SR-Can hydrogeological models of the temperate period and the sensitivity of the performance measures to the chosen parameters. This will help to guide the choice of potential variants for the SR-Site project and provide an input to design premises for the underground construction of the repository. It was found that variation of tunnel backfill properties in the tunnels had a significant effect on performance measures, but in the central area, ramps and shafts it had a lesser effect for those property values chosen. Variation of tunnel EDZ properties only had minor effects on performance measures. The presence of a crown space in the deposition tunnels had a significant effect on the tunnel performance measures and a lesser effect on the rock and EDZ performance measures. The presence of a deposition hole EDZ and spalling also had an effect on the performance measures.

  18. An Equation-of-State Compositional In-Situ Combustion Model: A Study of Phase Behavior Sensitivity

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Gerritsen, M. G.; Thomsen, Per Grove

    2009-01-01

    We study phase behavior sensitivity for in situ combustion, a thermal oil recovery process. For the one-dimensional model we first study the sensitivity to numerical discretization errors and provide grid density guidelines for proper resolution of in situ combustion behavior. A critical condition for the success of the process is ignition. For a particular oil we show that the simplified approach overestimates the required air injection rate for sustained front propagation by 17% compared to the equation-of-state-based approach.

  19. Some Sensitivity Studies of Chemical Transport Simulated in Models of the Soil-Plant-Litter System

    Energy Technology Data Exchange (ETDEWEB)

    Begovich, C.L.

    2002-10-28

    Fifteen parameters in a set of five coupled models describing carbon, water, and chemical dynamics in the soil-plant-litter system were varied in a sensitivity analysis of model response. Results are presented for chemical distribution in the components of soil, plants, and litter, along with selected responses of biomass, internal chemical transport (xylem and phloem pathways), and chemical uptake. Response and sensitivity coefficients are presented for up to 102 model outputs in an appendix. Two soil properties (chemical distribution coefficient and chemical solubility) and three plant properties (leaf chemical permeability, cuticle thickness, and root chemical conductivity) had the greatest influence on chemical transport in the soil-plant-litter system under the conditions examined. Pollutant gas uptake (SO2) increased with changes in plant properties that increased plant growth. Heavy metal dynamics in litter responded to plant properties (phloem resistance, respiration characteristics) that induced changes in chemical cycling to the litter system. Some of the SO2 and heavy metal responses were not expected but became apparent through the modeling analysis.
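    The sensitivity coefficients tabulated in studies of this kind are typically normalized: the fractional change in a model output per fractional change in a parameter. A minimal finite-difference sketch of that definition, applied to a toy stand-in rather than the actual soil-plant-litter model:

```python
def sensitivity_coefficient(model, params, name, delta=0.01):
    """Normalized sensitivity (dY/Y)/(dX/X) of a model output to one
    parameter, estimated by central finite differences."""
    x0 = params[name]
    y0 = model(params)
    hi = dict(params, **{name: x0 * (1 + delta)})
    lo = dict(params, **{name: x0 * (1 - delta)})
    dy = model(hi) - model(lo)
    return (dy / y0) / (2 * delta)

# Toy stand-in for a chemical uptake response (purely illustrative):
# uptake proportional to solubility and root conductivity, inverse in Kd.
def toy_uptake(p):
    return p["solubility"] * p["root_conductivity"] / p["Kd"]

params = {"solubility": 2.0, "root_conductivity": 0.5, "Kd": 10.0}
print(sensitivity_coefficient(toy_uptake, params, "Kd"))  # ≈ -1
```

    A coefficient near +1 or −1 means the output tracks the parameter proportionally (or inversely), which is how "greatest influence" parameters such as the distribution coefficient and solubility would stand out in a table of coefficients.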

  20. The Coda of the Transient Response in a Sensitive Cochlea: A Computational Modeling Study.

    Directory of Open Access Journals (Sweden)

    Yizeng Li

    2016-07-01

    In a sensitive cochlea, the basilar membrane response to transient excitation of any kind (normal acoustic or artificial intracochlear excitation) consists of not only a primary impulse but also a coda of delayed secondary responses with varying amplitudes but similar spectral content around the characteristic frequency of the measurement location. The coda, sometimes referred to as echoes or ringing, has been described as a form of local, short-term memory which may influence the ability of the auditory system to detect gaps in an acoustic stimulus such as speech. Depending on the individual cochlea, the temporal gap between the primary impulse and the following coda ranges from once to thrice the group delay of the primary impulse (the group delay of the primary impulse is on the order of a few hundred microseconds). The coda is physiologically vulnerable, disappearing when the cochlea is compromised even slightly. The multicomponent sensitive response is not yet completely understood. We use a physiologically based mathematical model to investigate (i) the generation of the primary impulse response and the dependence of the group delay on the various stimulation methods, and (ii) the effect of spatial perturbations in the properties of mechanically sensitive ion channels on the generation and separation of delayed secondary responses. The model suggests that the presence of the secondary responses depends on the wavenumber content of a perturbation and the activity level of the cochlea. In addition, the model shows that the varying temporal gaps between adjacent coda seen in experiments depend on the individual profiles of perturbations. Implications for non-invasive cochlear diagnosis are also discussed.

  1. Sensitivity studies of unsaturated groundwater flow modeling for groundwater travel time calculations at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Altman, S.J.; Ho, C.K.; Arnold, B.W.; McKenna, S.A.

    1995-01-01

    Unsaturated flow has been modeled through four cross-sections at Yucca Mountain, Nevada, for the purpose of determining groundwater particle travel times from the potential repository to the water table. This work will be combined with the results of flow modeling in the saturated zone for the purpose of evaluating the suitability of the potential repository under the criteria of 10CFR960. One criterion states, in part, that the groundwater travel time (GWTT) from the repository to the accessible environment must exceed 1,000 years along the fastest path of likely and significant radionuclide travel. Sensitivity analyses have been conducted for one geostatistical realization of one cross-section for the purpose of (1) evaluating the importance of hydrological parameters having some uncertainty and (2) examining conceptual models of flow by altering the numerical implementation of the conceptual model (dual permeability (DK) and the equivalent continuum model (ECM)). Results of comparisons of the ECM and DK model are also presented in Ho et al.

  2. Land Sensitivity Analysis of Degradation using MEDALUS model: Case Study of Deliblato Sands, Serbia

    Directory of Open Access Journals (Sweden)

    Kadović Ratko

    2016-12-01

    This paper studies the assessment of sensitivity to land degradation of the Deliblato sands (the northern part of Serbia), a special nature reserve. The sandy soils of the Deliblato sands are highly sensitive to degradation (given their fragility), while the system of land use is regulated according to law, consisting of three zones under protection. Based on the MEDALUS approach and the characteristics of the study area, four main factors were considered for evaluation: soil, climate, vegetation and management. Several indicators affecting the quality of each factor were identified. Each indicator was quantified according to its quality and given a weighting of between 1.0 and 2.0. ArcGIS 9 was utilized to analyze and prepare the layers of the quality maps, using the geometric mean to integrate the individual indicator maps. In turn, the geometric mean of all four quality indices was used to generate the map of sensitivity to land degradation. Results showed that 56.26% of the area is classified as critical; 43.18% as fragile; 0.55% as potentially affected and 0.01% as not affected by degradation. The values of the vegetation quality index, expressed through vegetation cover, diversity of vegetation functions and management policy during the protection regime, are clearly represented through correlation coefficients (0.87 and 0.47).
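    The MEDALUS aggregation described above (a geometric mean of 1.0-2.0 indicator scores per factor, then a geometric mean of the four quality indices) can be sketched as follows; the class thresholds used here are illustrative approximations of the published MEDALUS breaks, not values taken from this study:

```python
from math import prod

def quality_index(indicator_scores):
    """MEDALUS-style quality index: geometric mean of indicator scores,
    each pre-scaled to the 1.0 (best) .. 2.0 (worst) range."""
    n = len(indicator_scores)
    return prod(indicator_scores) ** (1.0 / n)

def sensitivity_index(soil, climate, vegetation, management):
    """Sensitivity index: geometric mean of the four quality indices."""
    return (soil * climate * vegetation * management) ** 0.25

def classify(index):
    """Illustrative class breaks (the published MEDALUS scheme further
    subdivides these classes)."""
    if index > 1.375:
        return "critical"
    if index > 1.225:
        return "fragile"
    if index > 1.17:
        return "potentially affected"
    return "not affected"
```

    In a GIS workflow this is applied cell by cell: each raster layer holds one indicator score, the geometric means are computed per cell, and the classified result yields the area percentages reported in the abstract.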

  3. SOX sensitivity study

    Energy Technology Data Exchange (ETDEWEB)

    Martyn, Johann [Johannes Gutenberg-Universitaet, Mainz (Germany); Collaboration: BOREXINO-Collaboration

    2016-07-01

    To this day most experimental results on neutrino oscillations can be explained in the standard three-neutrino model. There are, however, a few experiments that show anomalous behaviour at very short baselines. These anomalies can hypothetically be explained by the existence of one or more additional light neutrino states that do not take part in weak interactions and are thus called sterile. Although the anomalies only give a hint that such sterile neutrinos could exist, the prospect of physics beyond the standard model is a major motivation to investigate neutrino oscillations in new very-short-baseline experiments. The SOX (Short distance Oscillations in BoreXino) experiment will use the Borexino detector and a ¹⁴⁴Ce source to search for sterile neutrinos via the occurrence of an oscillation pattern at a baseline of several meters. This talk examines the impact of the Borexino detector systematics on the experimental sensitivity of SOX.

  4. Investigation of Wave Energy Converter Effects on Wave Fields: A Modeling Sensitivity Study in Monterey Bay CA.

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Jesse D.; Grace Chang; Jason Magalen; Craig Jones

    2014-08-01

    An industry standard wave modeling tool was utilized to investigate model sensitivity to input parameters and wave energy converter (WEC) array deployment scenarios. Wave propagation was investigated downstream of the WECs to evaluate overall near- and far-field effects of WEC arrays. The sensitivity study illustrated that both wave height and near-bottom orbital velocity were subject to the largest potential variations, each decreased in sensitivity as transmission coefficient increased, as number and spacing of WEC devices decreased, and as the deployment location moved offshore. Wave direction was affected consistently for all parameters, and wave period was not affected (or negligibly affected) by varying model parameters or WEC configuration.

  5. Sensitivity Analysis of b-factor in Microwave Emission Model for Soil Moisture Retrieval: A Case Study for SMAP Mission

    Directory of Open Access Journals (Sweden)

    Dugwon Seo

    2010-05-01

    Full Text Available Sensitivity analysis is critically needed to better understand the microwave emission model for soil moisture retrieval using passive microwave remote sensing data. The vegetation b-factor, along with vegetation water content and surface characteristics, has significant impact on model prediction. This study evaluates the sensitivity of the b-factor, which is a function of vegetation type. The analysis is carried out using the Passive and Active L- and S-band airborne sensor (PALS) and field-measured soil moisture from the Southern Great Plains experiment (SGP99). The results show that the relative sensitivity of the b-factor is 86% in wet soil conditions and 88% in highly vegetated conditions compared to the sensitivity of the soil moisture. The b-factor is thus found to be more sensitive than the vegetation water content, surface roughness and surface temperature; therefore, the effect of the b-factor on the microwave emission is fairly large under certain conditions. Understanding the dependence of the b-factor on soil and vegetation is important in studying the soil moisture retrieval algorithm, and can lead to potential improvements in model development for the Soil Moisture Active-Passive (SMAP) mission.
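A relative (normalized) sensitivity of the kind reported above can be illustrated with a toy tau-omega-style emission model; the model form, all parameter values, and the finite-difference helper below are assumptions for illustration, not the PALS retrieval code:

```python
import math

def brightness_temp(b, vwc=2.0, soil_refl=0.3, t_eff=290.0, theta=0.0):
    """Toy tau-omega-style model (illustrative only): vegetation opacity
    tau = b * VWC attenuates soil emission; scattering albedo is ignored."""
    tau = b * vwc
    gamma = math.exp(-tau / math.cos(math.radians(theta)))  # canopy transmissivity
    e_soil = 1.0 - soil_refl
    # attenuated soil emission + canopy emission + canopy emission reflected by soil
    return t_eff * (e_soil * gamma + (1 - gamma) + soil_refl * gamma * (1 - gamma))

def relative_sensitivity(f, p, dp=1e-4, **kw):
    """Normalized sensitivity: (dTB/dp) * (p / TB), by central differences."""
    tb = f(p, **kw)
    dtb = (f(p + dp, **kw) - f(p - dp, **kw)) / (2 * dp)
    return dtb * p / tb

print(relative_sensitivity(brightness_temp, 0.12))
```

Comparing such normalized coefficients across parameters (b, VWC, roughness, temperature) is what allows the ranking quoted in the abstract.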

  6. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    OpenAIRE

    Angelen, J. H.; Lenaerts, J. T. M.; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.; Broeke, M. R.; Meijgaard, E.; Smeets, C. J. P. P.

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6%) at the K-transect (west Greenland) for the period 2004–2009 is...

  7. A two dimensional modeling study of the sensitivity of ozone to radiative flux uncertainties

    International Nuclear Information System (INIS)

    Grant, K.E.; Wuebbles, D.J.

    1988-08-01

    Radiative processes strongly affect equilibrium trace gas concentrations both directly, through photolysis reactions, and indirectly, through temperature and transport processes. We have used the LLNL 2-D chemical-radiative-transport model to investigate the net sensitivity of equilibrium ozone concentrations to several changes in radiative forcing. Doubling CO2 from 300 ppmv to 600 ppmv resulted in a temperature decrease of 5 K to 8 K in the middle stratosphere along with an 8% to 16% increase in ozone in the same region. Replacing our usual shortwave scattering algorithms with a simplified Rayleigh algorithm led to a 1% to 2% increase in ozone in the lower stratosphere. Finally, modifying our normal CO2 cooling rates by corrections derived from line-by-line calculations resulted in several regions of heating and cooling. We observed temperature changes on the order of 1 K to 1.5 K with corresponding changes of 0.5% to 1.5% in O3. Our results for doubled CO2 compare favorably with those of other authors. Results for our two perturbation scenarios stress the need to model radiative processes accurately while confirming the general validity of current models. 15 refs., 5 figs

  8. Application of the CIPP model in the study of factors that promote intercultural sensitivity

    Directory of Open Access Journals (Sweden)

    Ruiz-Bernardo, Paola

    2012-10-01

    Full Text Available The present study proposes a group of factors (related to self, context and process) favouring the development of intercultural sensitivity. A social diagnosis was performed in the Spanish province of Castellón in order to identify these factors by means of a correlational study. A non-probabilistic but representative sample consisting of 995 people from 37 different countries living in this province was used. Data were collected by means of an adaptation of the scale proposed by Chen and Starosta (2000) for the assessment of intercultural sensitivity. Results showed four profiles, whose main characteristics were studied. Variables such as country of origin, gender, academic background, number of languages spoken, and the experience of living in a foreign country were found to have a positive influence on the development of this attitude.

  9. Variability of indicator values for ozone production sensitivity: a model study in Switzerland and San Joaquin Valley (California)

    International Nuclear Information System (INIS)

    Andreani-Aksoyoglu, S.; Keller, J.; Prevot, A.S.H.; Chenghsuan Lu; Chang, J.S.

    2001-01-01

    The threshold values of indicator species and ratios delineating the transition between NOx and VOC sensitivity of ozone formation are assumed to be universal by various investigators. However, our previous studies suggested that threshold values might vary with location and conditions. In this study, threshold values derived from various model simulations at two different locations (the area of Switzerland with the UAM model and the San Joaquin Valley of Central California with the SAQM model) are examined using a new approach for defining NOx- and VOC-sensitive regimes. Possible definitions for the distinction of NOx- and VOC-sensitive ozone production regimes are given. The dependence of the threshold values for indicators and indicator ratios such as NOy, O3/NOz, HCHO/NOy, and H2O2/HNO3 on the definition of NOx and VOC sensitivity is discussed. Then the variations of threshold values under low-emission conditions and on two different days are examined in both areas to check whether the models respond consistently to changes in environmental conditions. In both cases, threshold values are shifted similarly when emissions are reduced. Changes in the wind fields and aging of the photochemical oxidants seem to cause the day-to-day variation of the threshold values. The O3/NOz and HCHO/NOy indicators are predicted to be unsatisfactory for separating the NOx- and VOC-sensitive regimes. Although NOy and H2O2/HNO3 provide a good separation of the two regimes, their threshold values are affected by changes in the environmental conditions studied in this work. (author)
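As a minimal sketch of how such an indicator ratio is applied in practice (the threshold value below is purely illustrative, since the study's point is precisely that these thresholds vary with conditions rather than being universal):

```python
def ozone_regime(h2o2_hno3_ratio, threshold=0.4):
    """Classify the ozone-production regime from the H2O2/HNO3 indicator
    ratio. High ratios indicate NOx-limited chemistry, low ratios
    VOC-limited chemistry. The threshold is a hypothetical placeholder."""
    return "NOx-sensitive" if h2o2_hno3_ratio > threshold else "VOC-sensitive"

print(ozone_regime(0.8))  # → NOx-sensitive
print(ozone_regime(0.1))  # → VOC-sensitive
```

In the study, the interesting question is how `threshold` itself shifts between locations, emission scenarios, and days.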

  10. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  11. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    Directory of Open Access Journals (Sweden)

    C. Marrero

    2009-04-01

    Full Text Available In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained winds and intense gusts experienced caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, definition or not of nested grids, horizontal resolution and number of vertical levels). The Factor Separation Method was applied in order to identify the major model sensitivity parameters under this unusual meteorological situation. Results, expressed as percentage changes relative to a control run, demonstrated that the boundary layer and surface layer schemes, horizontal resolution, hydrostaticity option and nesting grid activation were the model configuration parameters with the greatest impact on the 48 h maximum 10 m horizontal wind speed solution.
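The Factor Separation Method used above (Stein-Alpert) isolates pure and synergistic contributions from simulations run with factors switched on and off. A two-factor sketch, with hypothetical maximum wind-speed values standing in for real model output:

```python
def factor_separation_2(f0, f1, f2, f12):
    """Stein-Alpert factor separation for two factors:
    f0 = both factors off, f1/f2 = only one factor on, f12 = both on."""
    c1 = f1 - f0              # pure contribution of factor 1
    c2 = f2 - f0              # pure contribution of factor 2
    c12 = f12 - f1 - f2 + f0  # synergistic (interaction) term
    return c1, c2, c12

# Hypothetical 48-h maximum 10 m wind speeds (m/s) from four runs:
print(factor_separation_2(f0=18.0, f1=22.0, f2=20.5, f12=26.0))  # → (4.0, 2.5, 1.5)
```

With n factors the same idea requires 2^n runs, which is why the study's 27 simulations cover only selected factor combinations.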

  12. Remote sensing of mineral dust aerosol using AERI during the UAE2: A modeling and sensitivity study

    Science.gov (United States)

    Hansell, R. A.; Liou, K. N.; Ou, S. C.; Tsay, S. C.; Ji, Q.; Reid, J. S.

    2008-09-01

    Numerical simulations and sensitivity studies have been performed to assess the potential for using brightness temperature spectra from a ground-based Atmospheric Emitted Radiance Interferometer (AERI) during the United Arab Emirates Unified Aerosol Experiment (UAE2) for detecting/retrieving mineral dust aerosol. A methodology for separating dust from clouds and retrieving the dust IR optical depths was developed by exploiting differences between their spectral absorptive powers in prescribed thermal IR window subbands. Dust microphysical models were constructed using in situ data from the UAE2 and prior field studies, while composition was modeled using refractive index data sets for minerals commonly observed around the UAE region, including quartz, kaolinite, and calcium carbonate. The T-matrix, finite difference time domain (FDTD), and Lorenz-Mie light scattering programs were employed to calculate the single scattering properties for three dust shapes: oblate spheroids, hexagonal plates, and spheres. We used the Code for High-resolution Accelerated Radiative Transfer with Scattering (CHARTS) radiative transfer program to investigate the sensitivity of the modeled AERI spectra to key dust and atmospheric parameters. Sensitivity studies show that characterization of the thermodynamic boundary layer is crucial for accurate AERI dust detection/retrieval. Furthermore, AERI sensitivity to dust optical depth is manifested in the strong subband slope dependence of the window region. Two daytime UAE2 cases were examined to demonstrate the present detection/retrieval technique, and we show that the results compare reasonably well to collocated AERONET Sun photometer/MPLNET micropulse lidar measurements. Finally, the sensitivity of the developed methodology to the AERI's estimated HgCdTe detector nonlinearity was evaluated.

  13. Development of a model system to study leukotriene-induced modification of radiation sensitivity in mammalian cells

    Energy Technology Data Exchange (ETDEWEB)

    Walden, Jr, T L; Holahan, Jr, E V; Catravas, G N

    1986-01-01

    Leukotrienes (LT) are an important class of biological mediators for which no information exists concerning their synthesis following a radiation insult or on their ability to modify cellular response to a subsequent radiation exposure. Results are presented which illustrate that the Chinese hamster lung fibroblast cell line, V79A03, is useful as a model system to study the metabolic fate of leukotrienes and the effect of LT on radiation sensitivity of mammalian cells in vitro. (U.K.).

  14. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    OpenAIRE

    Marrero, C.; Jorba, O.; Cuevas, E.; Baldasano, J. M.

    2009-01-01

    In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained wind and intense gusts experienced caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting Model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, d...

  15. Sensitivity of Sahelian Precipitation to Desert Dust under ENSO variability: a regional modeling study

    Science.gov (United States)

    Jordan, A.; Zaitchik, B. F.; Gnanadesikan, A.

    2016-12-01

    Mineral dust is estimated to comprise over half the total global aerosol burden, with a majority coming from the Sahara and Sahel region. Bounded by the Sahara Desert to the north and the Sahelian Savannah to the south, the Sahel experiences high interannual rainfall variability and a short rainy season during the boreal summer months. Observation-based data for the past three decades indicates a reduced dust emission trend, together with an increase in greening and surface roughness within the Sahel. Climate models used to study regional precipitation changes due to Saharan dust yield varied results, both in sign convention and magnitude. Inconsistency of model estimates drives future climate projections for the region that are highly varied and uncertain. We use the NASA-Unified Weather Research and Forecasting (NU-WRF) model to quantify the interaction and feedback between desert dust aerosol and Sahelian precipitation. Using nested domains at fine spatial resolution we resolve changes to mesoscale atmospheric circulation patterns due to dust, for representative phases of El Niño-Southern Oscillation (ENSO). The NU-WRF regional earth system model offers both advanced land surface data and resolvable detail of the mechanisms of the impact of Saharan dust. Results are compared to our previous work assessed over the Western Sahel using the Geophysical Fluid Dynamics Laboratory (GFDL) CM2Mc global climate model, and to other previous regional climate model studies. This prompts further research to help explain the dust-precipitation relationship and recent North African dust emission trends. This presentation will offer a quantitative analysis of differences in radiation budget, energy and moisture fluxes, and atmospheric dynamics due to desert dust aerosol over the Sahel.

  16. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

    The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After the expansion functions are calculated, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
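The hierarchical correlated-function expansion behind a FEOM can be sketched at first order: the model is decomposed into a constant term plus one univariate component function per input, each estimated by averaging the full model over the other inputs. The toy model and Monte Carlo estimation below are illustrative assumptions, not MRC's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x, y):
    """Toy 'original model' standing in for an expensive chemistry code."""
    return np.sin(x) + 0.5 * y**2 + 0.2 * np.sin(x) * y

# First-order expansion: f(x, y) ≈ f0 + f1(x) + f2(y), with component
# functions estimated by Monte Carlo averaging over the other input.
xs = rng.uniform(0.0, np.pi, 200_000)
ys = rng.uniform(-1.0, 1.0, 200_000)
f0 = model(xs, ys).mean()

def f1(x):  # mean over y of model(x, y), minus the constant term
    return model(x, ys).mean() - f0

def f2(y):  # mean over x of model(x, y), minus the constant term
    return model(xs, y).mean() - f0

approx = f0 + f1(1.0) + f2(0.5)
print(approx, model(1.0, 0.5))
```

The residual between the two printed values is the second-order (interaction) term that higher orders of the expansion would capture.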

  17. Sensitivity analysis of the meteorological model applied in the German risk study (DRS)

    International Nuclear Information System (INIS)

    Vogt, S.

    1982-01-01

    In the first part of this paper it is shown how the influence of estimation uncertainties on risk statements is determined using methods of probability theory. In particular, the parameters contained in the dispersion model are studied more thoroughly. In the second part, based on the knowledge gathered in the previous investigations, new and more realistic best-estimate values are proposed for four selected parameters to be used in future work. The changes in the risk statements caused by these new parameter values are commented upon.

  18. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    Science.gov (United States)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.

  19. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications

    International Nuclear Information System (INIS)

    Ericsson, Lars O.; Holmen, Johan

    2010-12-01

    The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. Implementation was based on geoscientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.

  20. Sensitivity studies and a simple ozone perturbation experiment with a truncated two-dimensional model of the stratosphere

    Science.gov (United States)

    Stordal, Frode; Garcia, Rolando R.

    1987-01-01

    The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
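The projection onto low-order Legendre polynomials that this truncation relies on can be sketched numerically: expand a latitudinal profile in P_n(mu) with mu = sin(latitude) and recover the expansion coefficients by quadrature. The profile below is a made-up ozone column, not Holton's model fields:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

mu = np.linspace(-1.0, 1.0, 2001)              # mu = sin(latitude)
field = 300.0 + 40.0 * 0.5 * (3 * mu**2 - 1)   # hypothetical column: mean + P2 part

def trapezoid(y, x):
    """Composite trapezoid rule on a 1-D grid."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def legendre_coeff(f, mu, n):
    """a_n = (2n + 1)/2 * integral of f(mu) * P_n(mu) d(mu) over [-1, 1]."""
    pn = Legendre.basis(n)(mu)
    return (2 * n + 1) / 2.0 * trapezoid(f * pn, mu)

a0 = legendre_coeff(field, mu, 0)  # recovers the 300 global-mean amplitude
a2 = legendre_coeff(field, mu, 2)  # recovers the 40 pole-to-equator amplitude
print(round(a0, 1), round(a2, 1))
```

Keeping only a0 and a2 per level is exactly the "1-1/2-D" compression: latitudinal structure survives only through its second-order Legendre projection.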

  1. Drug-sensitive reward in crayfish: an invertebrate model system for the study of SEEKING, reward, addiction, and withdrawal.

    Science.gov (United States)

    Huber, Robert; Panksepp, Jules B; Nathaniel, Thomas; Alcaro, Antonio; Panksepp, Jaak

    2011-10-01

    In mammals, the rewarding properties of drugs depend on their capacity to activate appetitive motivational states. With the underlying mechanisms strongly conserved in evolution, invertebrates have recently emerged as a powerful new model in addiction research. In crayfish, natural reward has proven surprisingly sensitive to human drugs of abuse, opening an unlikely avenue of research into the basic biological mechanisms of drug addiction. In a series of studies we first examined the presence of natural reward systems in crayfish, then characterized their sensitivity to a wide range of human drugs of abuse. A conditioned place preference (CPP) paradigm was used to demonstrate that crayfish seek out those environments that had previously been paired with the psychostimulants cocaine and amphetamine, and the opioid morphine. The administration of amphetamine exerted its effects at a number of sites, including the stimulation of circuits for active exploratory behaviors (i.e., SEEKING). A further study examined morphine-induced reward, extinction and reinstatement in crayfish. Repeated intra-circulatory infusions of morphine served as a reward when paired with distinct visual or tactile cues. Morphine-induced CPP was extinguished after repeated saline injections. Following this extinction phase, morphine-experienced crayfish were once again challenged with the drug. The priming injections of morphine reinstated CPP at all tested doses, suggesting that morphine-induced CPP is unrelenting. In an exploration of drug-associated behavioral sensitization in crayfish we concurrently mapped measures of locomotion and the rewarding properties of morphine. Single and repeated intra-circulatory infusions of morphine resulted in persistent locomotory sensitization, even 5 days following the infusion. Moreover, a single dose of morphine was sufficient to induce long-term behavioral sensitization.
CPP for morphine and context-dependent cues could not be disrupted over a drug free period of 5

  2. A Sensitivity Study on Modeling Black Carbon in Snow and its Radiative Forcing over the Arctic and Northern China

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Yun; Wang, Hailong; Zhang, Rudong; Flanner, M. G.; Rasch, Philip J.

    2014-06-02

    Black carbon in snow (BCS) simulated in the Community Atmosphere Model (CAM5) is evaluated against measurements over Northern China and the Arctic, and its sensitivity to atmospheric deposition and to two parameters that affect post-depositional enrichment is explored. The BCS concentration is overestimated (underestimated) by a factor of two in Northern China (the Arctic) in the default model, but agreement with observations is good over both regions in the simulation with improvements in BC transport and deposition. Sensitivity studies indicate that uncertainty in the melt-water scavenging efficiency (MSE) parameter substantially affects BCS and its radiative forcing (by a factor of 2-7) in the Arctic through post-depositional enrichment. The MSE parameter has a relatively small effect on the magnitude of the BCS seasonal cycle but can alter its phase in Northern China. The impact of the snow aging scaling factor (SAF) on BCS, partly through the post-depositional enrichment effect, shows more complex latitudinal and seasonal dependence. Similar to MSE, SAF more significantly affects the magnitude (phase) of the BCS seasonal cycle over the Arctic (Northern China). While the uncertainty associated with the representation of BC transport and deposition processes in CAM5 is more important than that associated with the two snow model parameters in Northern China, the two uncertainties have comparable effects in the Arctic.
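The post-depositional enrichment controlled by the MSE parameter can be illustrated with a one-line mass balance: meltwater removes snow faster than it removes BC, so the BC left behind becomes more concentrated. This is a deliberate simplification, not the CAM5/SNICAR implementation:

```python
def enrichment(initial_conc, melt_frac, mse):
    """BC concentration in the remaining snowpack after a fraction of it
    melts. mse is the fraction of BC carried away with the meltwater
    (illustrative mass balance; units of concentration are arbitrary)."""
    bc_left = initial_conc * (1.0 - mse * melt_frac)
    snow_left = 1.0 - melt_frac
    return bc_left / snow_left

print(enrichment(100.0, 0.5, 0.2))  # → 180.0  (low MSE: strong enrichment)
print(enrichment(100.0, 0.5, 1.0))  # → 100.0  (MSE = 1: no enrichment)
```

The factor-of-2-7 forcing spread quoted above reflects how strongly the Arctic result depends on where `mse` sits in its plausible range.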

  3. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
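The probabilistic approach described above (sample the uncertain parameters, re-solve the decision problem per draw, and report how often the base-case policy remains optimal) can be sketched for a toy one-parameter, two-action decision. All utilities, the distribution, and the decision rule are hypothetical placeholders, not the paper's MDP:

```python
import random

random.seed(42)

def optimal_action(p_success):
    """Toy two-action decision: treat if its expected utility exceeds
    watchful waiting. Utility values are made up for illustration."""
    eu_treat = p_success * 0.9 + (1.0 - p_success) * 0.4
    eu_wait = 0.6
    return "treat" if eu_treat > eu_wait else "wait"

# Probabilistic sensitivity analysis: sample the uncertain parameter and
# record how often the base-case policy stays optimal across draws.
base_case = optimal_action(0.55)
draws = [optimal_action(random.betavariate(5.5, 4.5)) for _ in range(10_000)]
confidence = draws.count(base_case) / len(draws)
print(base_case, round(confidence, 2))
```

Sweeping a stakeholder's acceptance threshold against `confidence` traces out the kind of policy acceptability curve the abstract proposes.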

  4. Sensitivity Studies on the Influence of Aerosols on Cloud and Precipitation Development Using WRF Mesoscale Model Simulations

    Science.gov (United States)

    Thompson, G.; Eidhammer, T.; Rasmussen, R.

    2011-12-01

    Using the WRF model in simulations of shallow and deep precipitating cloud systems, we investigated the sensitivity to aerosols acting as cloud condensation nuclei and ice nuclei. A global climatological dataset of sulfates, sea salts, and dust was used as input for a control experiment. Sensitivity experiments with significantly more polluted conditions were conducted to analyze the resulting impacts on cloud and precipitation formation. Simulations were performed using the WRF model with explicit treatment of aerosols added to the Thompson et al. (2008) bulk microphysics scheme. The modified scheme achieves droplet formation using pre-tabulated CCN activation tables provided by a parcel model. Ice nucleation is parameterized as a function of dust aerosols as well as homogeneous freezing of deliquesced aerosols. The basic processes of aerosol activation and removal by wet scavenging are considered, but aerosol characteristic size and hygroscopicity do not change due to evaporating droplets; in other words, aerosol processing is ignored. Unique aspects of this study include the use of one- to four-kilometer grid spacings and the direct parameterization of ice nucleation from aerosols rather than typical temperature and/or supersaturation relationships alone. Initial results from simulations of a deep winter cloud system and its interaction with significant orography show contrasting sensitivities in regions of warm rain versus mixed liquid and ice conditions. The classical view of higher precipitation amounts in relatively clean maritime clouds with fewer but larger droplets is confirmed for regions dominated by the warm-rain process. However, due to complex interactions with the ice phase and snow riming, the simulations revealed the reverse situation in high-terrain areas dominated by snow reaching the surface. Results for other cloud systems will be summarized at the conference.

  5. Weather Research and Forecasting Model Wind Sensitivity Study at Edwards Air Force Base, CA

    Science.gov (United States)

    Watson, Leela R.; Bauman, William H., III; Hoeth, Brian

    2009-01-01

    This abstract describes work that will be done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting "wind cycling" cases at Edwards Air Force Base, CA (EAFB), in which the wind speeds and directions oscillate among towers near the EAFB runway. The Weather Research and Forecasting (WRF) model allows users to choose among two dynamical cores - the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model - the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Having a series of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options are best to address specific forecast concerns. The goal of this project is to assess the different configurations available and determine which configuration will best predict surface wind speed and direction at EAFB.

  6. Validity of Quinpirole Sensitization Rat Model of OCD: Linking Evidence from Animal and Clinical Studies

    Czech Academy of Sciences Publication Activity Database

    Stuchlík, Aleš; Radostová, Dominika; Hatalová, Hana; Valeš, Karel; Nekovářová, Tereza; Kopřivová, J.; Svoboda, Jan; Horáček, J.

    2016-01-01

    Vol. 10, Oct 26 (2016), Article No. 209. ISSN 1662-5153. R&D Projects: GA MZd(CZ) NV15-34524A. Institutional support: RVO:67985823. Keywords: OCD * quinpirole * animal model * brain circuits * rat * human. Subject RIV: FH - Neurology. Impact factor: 3.104, year: 2016

  7. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    Science.gov (United States)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    The experimental measures of the multiplicity derivatives (the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent τ, and the Zipf-law parameter ξ) are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All of them predict a critical signature at or near the critical point, both for the primary and the secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as from the primary fragments. The present study provides a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
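The NVZ observable named above is straightforward to compute event by event. The sketch below uses synthetic Gaussian samples of Zmax as stand-ins for SMM events, purely to show that broad fluctuations near criticality inflate the measure:

```python
import random

random.seed(0)

def nvz(zmax_events):
    """Normalized variance of the largest fragment charge:
    NVZ = variance(Zmax) / mean(Zmax), over an event sample."""
    n = len(zmax_events)
    mean = sum(zmax_events) / n
    var = sum((z - mean) ** 2 for z in zmax_events) / n
    return var / mean

# Hypothetical event samples: near the critical point Zmax fluctuates
# strongly, so NVZ peaks relative to a subcritical (liquid-like) sample.
subcritical = [random.gauss(60.0, 2.0) for _ in range(5000)]
critical = [max(1.0, random.gauss(35.0, 10.0)) for _ in range(5000)]
print(round(nvz(subcritical), 3), round(nvz(critical), 3))
```

In an analysis of real or SMM-generated events, the location of the NVZ maximum along the excitation-energy axis is what marks the critical point.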

  8. A Toolkit to Study Sensitivity of the Geant4 Predictions to the Variations of the Physics Model Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fields, Laura [Fermilab; Genser, Krzysztof [Fermilab; Hatcher, Robert [Fermilab; Kelsey, Michael [SLAC; Perdue, Gabriel [Fermilab; Wenzel, Hans [Fermilab; Wright, Dennis H. [SLAC; Yarba, Julia [Fermilab

    2017-08-21

    Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and on phenomenological predictions with physically motivated parameters estimated by theoretical calculation or measurement. Because these models are tuned to cover a very wide range of possible simulation tasks, they may not always be optimized for a given process or a given material. This raises several critical questions, e.g. how sensitive Geant4 predictions are to variations of the model parameters, what uncertainties are associated with a particular tune of a Geant4 physics model or a group of models, or how to consistently derive guidance for Geant4 model development and improvement from the wide range of available experimental data. We have designed and implemented a comprehensive, modular, user-friendly software toolkit to study and address such questions. It allows one to easily modify parameters of one or several Geant4 physics models involved in the simulation, to perform collective analysis of multiple variants of the resulting physics observables of interest, and to compare them against a variety of corresponding experimental data. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easily extensible collection of analytical components. The design, implementation technology, and key functionalities of the toolkit are presented and illustrated with results obtained with key Geant4 hadronic models.

  9. Probabilistic Design of Wind Turbine Structures: Design Studies and Sensitivities to Model Parameters

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    : decrease of conservatism level, improvement of design procedures, and development of innovative structural systems that suit well for large wind turbines. The increasing size of the structure introduces new problems that were not present for small structures. These problems include: (i) the preparation...... substructures. In addition to being aggressive, conditions for offshore environments and the associated models are highly uncertain. Appropriate statistical methodologies should be used in order to design robust structures, which are structures whose engineering performance is not significantly affected....... These research areas are differentially implemented through tasks on various wind turbine structures (shaft, jacket, semi-floater, monopile, and grouted joint). In particular the following research questions are answered: How are extreme and fatigue loads on a given structure influenced by the design of other...

  10. Geochemistry Model Abstraction and Sensitivity Studies for the 21 PWR CSNF Waste Package

    International Nuclear Information System (INIS)

    Bernot, P.; LeStrange, S.; Thomas, E.; Zarrabi, K.; Arthur, S.

    2002-01-01

    The CSNF geochemistry model abstraction, as directed by the TWP (BSC 2002b), was developed to provide regression analysis of EQ6 cases to obtain abstracted values of pH (and in some cases HCO₃⁻ concentration) for use in the Configuration Generator Model. The pH of the system is the controlling factor over U mineralization, the CSNF degradation rate, and the HCO₃⁻ concentration in solution. The abstraction encompasses a large variety of combinations for the degradation rates of materials. The "base case" used EQ6 simulations examining differing steel/alloy corrosion rates, drip rates, and percentages of fuel exposure. Other values, such as the pH/HCO₃⁻-dependent fuel corrosion rate and the corrosion rate of A516, were kept constant. Relationships were developed for pH as a function of these differing rates, to be used in the calculation of total C and, subsequently, the fuel rate. An additional refinement to the abstraction was the addition of abstracted pH values for cases with limited O₂ for waste package corrosion and a flushing fluid other than J-13, which had been used in all EQ6 calculations up to this point. These abstractions also used EQ6 simulations with varying combinations of material corrosion rates to abstract the pH (and HCO₃⁻, in the limited-O₂ cases) as a function of the corrosion rates of the WP materials. The goodness of fit for most of the abstracted values was above an R² of 0.9; values below this threshold occurred at the very beginning of WP corrosion, when large variations in the system pH are observed. However, the significance of the F-statistic for all the abstractions showed that the variable relationships are significant. For the abstraction, an analysis of the minerals that may form the "sludge" in the waste package was also presented. This analysis indicates that a number of different iron and aluminum minerals may form in the waste package other than those described in the EQ6 output files, which are based on the use

  11. Repository design sensitivity study: Engineering study report

    International Nuclear Information System (INIS)

    1987-01-01

    A preliminary sensitivity study of the salt repository design has been performed to identify critical site and design parameters, to help guide future site characterization and design optimization activities. The study considered the SCP-conceptual design at the Deaf Smith County site in Texas, with the horizontal waste package emplacement mode as the base case. Relative to this base case, parameter variations were compared. Limited studies considering the vertical emplacement mode geometry were also performed. The report presents the reference data base and design parameters on which the study was based, including the range of parameters that might be expected. Detailed descriptions of the numerical modeling methods and assumptions are included for the thermal, thermomechanical and hydrogeological analyses. The impacts of parameter variations on the sensitivity of the rock mass response are discussed. Recommendations are provided to help guide site characterization activities and advanced conceptual design optimization activities. 47 refs., 119 figs., 22 tabs.

  12. Random vibration sensitivity studies of modeling uncertainties in the NIF structures

    International Nuclear Information System (INIS)

    Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.

    1996-01-01

    The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. The facility will achieve fusion ignition utilizing solid-state lasers as the energy driver, and will cover an estimated 33,400 m² at an average height of 5-6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 μm rms radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2-3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures: ambient vibrations of 10⁻¹⁰ g²/Hz over a frequency range of 1 to 200 Hz are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 μrad rms laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam transport structures, as determined by random vibration analysis, is discussed.

  13. The sensitivity of biological finite element models to the resolution of surface geometry: a case study of crocodilian crania

    Directory of Open Access Journals (Sweden)

    Matthew R. McCurry

    2015-06-01

    The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and in turn influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity tests of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surfaces in a comparative FEA study. As surface mesh resolution links the input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context.

  14. Sensitivity of the WRF model to the lower boundary in an extreme precipitation event - Madeira island case study

    Science.gov (United States)

    Teixeira, J. C.; Carvalho, A. C.; Carvalho, M. J.; Luna, T.; Rocha, A.

    2014-08-01

    The advances in satellite technology in recent years have made feasible the acquisition of high-resolution information on the Earth's surface, including increasingly detailed elevation and land-use data. Including this information in numerical atmospheric models can improve their results in simulating events forced by the lower boundary, by providing detailed information on its characteristics. Consequently, this work studies the sensitivity of the Weather Research and Forecasting (WRF) model to different topography and land-use data sets in an extreme precipitation event. The test case focused on a topographically driven precipitation event over the island of Madeira, which triggered flash floods and mudslides in the southern parts of the island. Difference fields between simulations were computed, showing that the change in the data sets produced statistically significant changes to the flow, the planetary boundary layer structure and the precipitation patterns. Moreover, model results show an improvement in model skill in the windward region for precipitation and in the leeward region for wind, even though the higher-resolution topography and land-use data sets did not significantly improve the overall results.
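The "difference fields plus statistical significance" analysis mentioned in this abstract can be sketched with synthetic data. The precipitation fields and the imposed +2 mm/day signal below are invented for illustration, and the significance test is a simple per-cell paired t statistic, not necessarily the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for two model runs: daily precipitation on a small grid.
# Run B differs from run A by noise plus an imposed +2 mm/day signal in one
# quadrant (all of this is invented data, purely to illustrate the analysis).
ndays, ny, nx = 60, 20, 20
run_a = rng.gamma(shape=2.0, scale=3.0, size=(ndays, ny, nx))
run_b = run_a + rng.normal(0.0, 1.0, size=(ndays, ny, nx))
run_b[:, :10, :10] += 2.0

# Difference field (time mean) and a paired t statistic per grid cell.
diff = run_b - run_a
mean_diff = diff.mean(axis=0)
t_stat = mean_diff / (diff.std(axis=0, ddof=1) / np.sqrt(ndays))
significant = np.abs(t_stat) > 2.0  # roughly the 95% two-sided level for n=60

print("fraction flagged in signal quadrant:", significant[:10, :10].mean())
print("fraction flagged elsewhere:", significant[10:, 10:].mean())
```

Cells inside the signal quadrant are flagged almost everywhere, while outside it roughly the nominal 5% false-positive rate survives, which is the pattern a genuine lower-boundary signal would leave in a difference field.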

  15. Significance of uncertainties derived from settling tank model structure and parameters on predicting WWTP performance - A global sensitivity analysis study

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2011-01-01

    Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter...... and from the last aerobic bioreactor upstream to the SST (Garrett/hydraulic method). For model structure uncertainty, two one-dimensional secondary settling tank (1-D SST) models are assessed, including a first-order model (the widely used Takács-model), in which the feasibility of using measured...... uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based...

  16. Sensitivity and requirement of improvements of four soybean crop simulation models for climate change studies in Southern Brazil.

    Science.gov (United States)

    Battisti, R; Sentelhas, P C; Boote, K J

    2018-05-01

    Crop growth models have many uncertainties that affect the yield response to climate change. Based on that, the aim of this study was to evaluate the sensitivity of crop models to systematic changes in climate for simulating soybean attainable yield in Southern Brazil. Four crop models were used to simulate yields: AQUACROP, MONICA, DSSAT, and APSIM, as well as their ensemble. The simulations were performed considering changes of air temperature (0, +1.5, +3.0, +4.5, and +6.0 °C), [CO₂] (380, 480, 580, 680, and 780 ppm), rainfall (-30, -15, 0, +15, and +30%), and solar radiation (-15, 0, and +15%), applied to daily values. The baseline climate was from 1961 to 2014, totaling 53 crop seasons. The crop models simulated a reduction of attainable yield with temperature increase, reaching 2000 kg ha⁻¹ for the ensemble at +6 °C, mainly due to a shorter crop cycle. For rainfall, yield declined more steeply when rainfall was reduced than it increased when rainfall was augmented. The crop models showed increased yield variability when solar radiation was changed from -15 to +15%, whereas the rise in [CO₂] resulted in yield gains, following an asymptotic response, with a mean increase of 31% from 380 to 680 ppm. The models used require further attention to improvements in optimal and maximum cardinal temperatures for development rate; runoff, water infiltration, deep drainage, and the dynamics of root growth; photosynthesis parameters related to soil water availability; and the energy balance of the soil-plant system to define leaf temperature under elevated CO₂.
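The one-factor-at-a-time design of this study (perturbing a single climate driver around a baseline) can be mimicked with a toy yield model. The response shapes below (linear cycle shortening with warming, asymptotic CO2 fertilization, asymmetric rainfall response) are invented stand-ins chosen only to echo the qualitative findings, not any of the four crop models:

```python
import numpy as np

# Toy stand-in for a crop model's yield response (invented shapes: linear cycle
# shortening with warming, asymptotic CO2 fertilization, asymmetric rain response).
def yield_response(d_temp, co2, d_rain_pct, base=4000.0):
    temp_factor = max(0.0, 1.0 - 0.06 * d_temp)
    co2_factor = 1.0 + 0.45 * (1.0 - np.exp(-(co2 - 380.0) / 180.0))
    rain_slope = 0.004 if d_rain_pct >= 0 else 0.008  # losses steeper than gains
    return base * temp_factor * co2_factor * (1.0 + rain_slope * d_rain_pct)

# One-factor-at-a-time scan mirroring the study design (kg/ha, illustrative).
temps = [0.0, 1.5, 3.0, 4.5, 6.0]
co2s = [380, 480, 580, 680, 780]
rains = [-30, -15, 0, 15, 30]

temp_yields = [yield_response(dt, 380, 0) for dt in temps]
co2_yields = [yield_response(0.0, c, 0) for c in co2s]
rain_yields = [yield_response(0.0, 380, r) for r in rains]
print("warming scan:", [round(y) for y in temp_yields])
print("CO2 scan:    ", [round(y) for y in co2_yields])
print("rain scan:   ", [round(y) for y in rain_yields])
```

Even this caricature reproduces the structure of the reported results: monotone yield loss with warming, diminishing returns from CO2, and a rainfall response where the downside outweighs the upside.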

  19. The Influence of Climate Change on Atmospheric Deposition of Mercury in the Arctic—A Model Sensitivity Study

    Science.gov (United States)

    Hansen, Kaj M.; Christensen, Jesper H.; Brandt, Jørgen

    2015-01-01

    Mercury (Hg) is a global pollutant with adverse health effects on humans and wildlife. It is of special concern in the Arctic due to accumulation in the food web and exposure of the Arctic population through a rich marine diet. Climate change may alter the exposure of the Arctic population to Hg. We have investigated the effect of climate change on atmospheric Hg transport to and deposition within the Arctic through a sensitivity study of how the Danish Eulerian Hemispheric Model (DEHM), an atmospheric chemistry-transport model, responds to climate change forcing. The total deposition of Hg to the Arctic is 18% lower in the 2090s than in the 1990s under the applied Special Report on Emissions Scenarios (SRES-A1B) climate scenario. Asia is the major anthropogenic source area (25% of the deposition to the Arctic), followed by Europe (6%) and North America (5%), with the rest arising from the background concentration, and this is independent of the climate. DEHM predicts between a 6% increase (Status Quo scenario) and a 37% decrease (zero anthropogenic emissions scenario) in Hg deposition to the Arctic depending on the applied emission scenario, while the combined effect of future climate and emission changes results in up to 47% lower Hg deposition. PMID:26378551

  20. The effect of alternative seismotectonic models on PSHA results - a sensitivity study for two sites in Israel

    Science.gov (United States)

    Avital, Matan; Kamai, Ronnie; Davis, Michael; Dor, Ory

    2018-02-01

    We present a full probabilistic seismic hazard analysis (PSHA) sensitivity study for two sites in southern Israel: one in the near field of a major fault system and one farther away. The PSHA is conducted for alternative source representations, using alternative model parameters for the main seismic sources, such as slip rate and Mmax, among others. The analysis also considers the effect of the ground motion prediction equation (GMPE) on the hazard results. In this way, the two types of epistemic uncertainty, modelling uncertainty and parametric uncertainty, are treated and addressed. We quantify the uncertainty propagation by testing its influence on the final calculated hazard, such that the controlling knowledge gaps are identified and can be treated in future studies. We find that current practice in Israel, as represented by the current version of the building code, grossly underestimates the hazard: by approximately 40% at short return periods (e.g. 10% in 50 years) and by as much as 150% at long return periods (e.g. an annual exceedance probability of 10⁻⁵). The analysis shows that this underestimation is most probably due to a combination of factors, including the source definitions as well as the GMPE used for the analysis.
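Epistemic uncertainty in a PSHA of this kind is commonly handled with a weighted logic tree over alternative source models and GMPEs. A minimal sketch, assuming idealized power-law hazard curves and invented branch weights (none of this reproduces the paper's models):

```python
import numpy as np

# Ground-motion levels (g) at which annual exceedance rates are evaluated.
levels = np.logspace(-2, 0.5, 60)

# Hypothetical logic-tree branches: (weight, a, b) pairs defining idealized
# power-law hazard curves lambda(x) = a * x**(-b) (annual exceedance rate).
branches = [
    (0.5, 1.0e-3, 1.8),  # e.g. a low slip-rate source representation
    (0.3, 2.0e-3, 1.8),  # a higher activity-rate alternative
    (0.2, 1.0e-3, 1.4),  # an alternative GMPE giving a heavier tail
]

# Weighted mean hazard curve over the epistemic branches.
mean_curve = sum(w * a * levels ** (-b) for w, a, b in branches)

def motion_at_return_period(curve, levels, t_years):
    """Interpolate (log-log) the ground motion whose exceedance rate is 1/T."""
    target = 1.0 / t_years
    # curve decreases with level; reverse both so np.interp sees ascending xp.
    return float(np.exp(np.interp(np.log(target),
                                  np.log(curve[::-1]), np.log(levels[::-1]))))

pga_475 = motion_at_return_period(mean_curve, levels, 475.0)    # ~10% in 50 yr
pga_2475 = motion_at_return_period(mean_curve, levels, 2475.0)  # ~2% in 50 yr
print(f"475-yr motion: {pga_475:.2f} g, 2475-yr motion: {pga_2475:.2f} g")
```

Re-running the last two lines per individual branch, instead of on the weighted mean curve, is one way to see how much each modelling choice moves the design ground motion.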

  1. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

    The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, for dealing with uncertainty in economic studies. The original method was refined by incorporating probability distributions, which allow a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables, and in that the overall sensitivity measure was based on a subjectively chosen range, which excludes the impact of values outside that range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method: the response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes which better reflect the real uncertainty in economic studies.
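The refinement described here (replacing uniform ranges with probability distributions, plus a bootstrap for an input known only from a few data points) can be sketched on a deliberately simplified cost model. All distributions and numbers below are hypothetical, not taken from the study's Markov model of depression:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Illustrative cost model (hypothetical structure and numbers, not the study's
# Markov model): expected cost = (1 - response_rate) * hospital_days * per_diem.
def expected_cost(response_rate, per_diem, hospital_days):
    return (1.0 - response_rate) * hospital_days * per_diem

# Probability distributions replace fixed uniform ranges for two inputs:
response_rate = rng.beta(20, 30, size=n)        # centred near 0.40
per_diem = rng.normal(300.0, 30.0, size=n)      # currency units per day

# Bootstrap distribution for an input known only from a few observations:
observed_los = np.array([22.0, 28.0, 31.0, 35.0, 27.0])  # length of stay, days
los_boot = rng.choice(observed_los, size=n)

costs = expected_cost(response_rate, per_diem, los_boot)

# Rank inputs by how strongly their sampled values co-vary with the outcome.
for name, x in (("response_rate", response_rate),
                ("per_diem", per_diem),
                ("length_of_stay", los_boot)):
    print(f"{name:>14s}: corr with cost = {np.corrcoef(x, costs)[0, 1]:+.2f}")
```

Because each input now carries its own distribution, the resulting sensitivity ranking reflects how much each input plausibly varies, not just the width of a subjectively chosen range.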

  2. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...... symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment....

  3. Electromyographic Study of Differential Sensitivity to Succinylcholine of the Diaphragm, Laryngeal and Somatic Muscles: A Swine Model

    Directory of Open Access Journals (Sweden)

    I-Cheng Lu

    2010-12-01

    Neuromuscular blocking agents (NMBAs) might diminish the electromyography signal of the vocalis muscles during intraoperative neuromonitoring of the recurrent laryngeal nerve. The aim of this study was to compare the differential sensitivity of different muscles to succinylcholine in a swine model, and to clarify the influence of NMBAs on neuromonitoring. Six male Duroc-Landrace piglets were anesthetized with thiamylal and underwent tracheal intubation without the use of an NMBA. The left recurrent laryngeal nerve, the spinal accessory nerve, the right phrenic nerve and the brachial plexus were stimulated. Evoked potentials (electromyography signals) of four muscle groups were elicited from needle electrodes before and after an intravenous succinylcholine bolus (1.0 mg/kg). Recorded muscles included the vocalis muscles, the trapezius muscle, the diaphragm and the triceps brachii muscle. The onset time and the 80% recovery of the control response were recorded and analyzed, and the testing was repeated after 30 minutes. The onset times of neuromuscular blocking for the vocalis muscles, trapezius muscle, diaphragm and triceps brachii muscle were 36.3 ± 6.3, 38.8 ± 14.9, 52.5 ± 9.7 and 45.0 ± 8.2 seconds during the first test, and 49.3 ± 10.8, 40.0 ± 12.2, 47.5 ± 11.9 and 41.3 ± 10.1 seconds during the second test. The 80% recovery of the control response for each muscle was 18.3 ± 2.7, 16.5 ± 6.9, 8.1 ± 2.5 and 14.8 ± 2.9 minutes during the first test, and 21.5 ± 3.8, 12.5 ± 4.3, 10.5 ± 3.1 and 16.4 ± 4.2 minutes during the second test. The sensitivity of the muscles to succinylcholine, ranked in order, was: the vocalis muscles, the triceps brachii muscle, the trapezius muscle and the diaphragm. We demonstrated a useful and reliable animal model for investigating the effects of NMBAs on intraoperative neuromonitoring. Extrapolation of these data to humans should be done with caution.

  4. Skin care products can aggravate epidermal function: studies in a murine model suggest a pathogenic role in sensitive skin.

    Science.gov (United States)

    Li, Zhengxiao; Hu, Lizhi; Elias, Peter M; Man, Mao-Qiang

    2018-02-01

    Sensitive skin is defined as a spectrum of unpleasant sensations in response to a variety of stimuli. However, only some skin care products provoke cutaneous symptoms in individuals with sensitive skin; hence, it would be useful to identify products that could provoke such symptoms. The aim of this study was to assess whether vehicles, as well as certain branded skin care products, can alter epidermal function following topical application to normal mouse skin. Following topical applications of individual vehicles or skin care products to C57BL/6J mice twice daily for 4 days, transepidermal water loss (TEWL) rates, stratum corneum (SC) hydration and skin surface pH were measured on treated versus untreated mouse skin with an MPA5 device and a pH 900 pH meter. Our results show that all tested products induced abnormalities in epidermal function of varying severity, including elevations in TEWL and skin surface pH, and reduced SC hydration. These results suggest that mice can serve as a predictive model for evaluating the potential safety of skin care products in humans with sensitive skin. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Sensitivity study of a method for updating a finite element model of a nuclear power station cooling tower

    International Nuclear Information System (INIS)

    Billet, L.

    1994-01-01

    The Research and Development Division of Electricité de France is developing a surveillance method for cooling towers based on on-site measurements of wind-induced vibration. The method is intended to detect structural damage in the tower. The damage is identified by tuning a finite element model of the tower on experimental mode shapes and eigenfrequencies. The sensitivity of the method was evaluated through numerical tests. First, the dynamic response of a damaged tower was simulated by varying the stiffness of some area of the model shell (from 1% to 24% of the total shell area). Second, the structural parameters of the undamaged cooling tower model were updated in order to make the output of the undamaged model as close as possible to the synthetic experimental data. The updating method, based on the minimization of the differences between experimental modal energies and modal energies calculated by the model, did not detect a stiffness change affecting less than 3% of the shell area. Such a sensitivity is thought to be insufficient to detect tower cracks, which behave like highly localized defects. (author). 8 refs., 9 figs., 6 tabs.
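Model updating of this kind reduces, in its simplest form, to tuning structural parameters until computed modal quantities match measured ones. Below is a minimal sketch on an invented 2-degree-of-freedom shear model, using bisection on a single global stiffness factor; the actual method minimizes modal-energy differences over many parameters:

```python
import numpy as np

# Invented 2-DOF shear model: equal masses m, story stiffnesses kk.
def first_frequency(stiff_scale, k=1.0e6, m=100.0):
    kk = stiff_scale * k
    K = np.array([[2.0 * kk, -kk], [-kk, kk]])
    evals = np.linalg.eigvalsh(K / m)  # M = m*I, so M^-1 K = K/m (symmetric)
    return float(np.sqrt(evals.min()) / (2.0 * np.pi))  # lowest frequency, Hz

measured = first_frequency(0.85)  # synthetic "measurement": 15% stiffness loss

# Bisection on the unknown stiffness scale; frequency grows monotonically
# with stiffness, so the bracket always contains the matching scale.
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if first_frequency(mid) < measured:
        lo = mid
    else:
        hi = mid
print(f"identified stiffness scale: {0.5 * (lo + hi):.4f}")
```

The recovered scale factor of about 0.85 is the "damage" that was built into the synthetic measurement; the abstract's point is that when the damage affects only a small fraction of the shell, the measurable frequency shift becomes too small for such an identification to succeed.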

  6. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    storage systems, where the actual values of the data are not relevant for the behavior of the system. For many systems the values are important. For instance, the control flow of the system can depend on the input values. We call this type of system data-sensitive, as the execution is sensitive...... to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first, we combine state-based modeling and abstract interpretation in order to ease modeling of data-sensitive systems, while allowing...... efficient model-checking and model-based testing. In the second, we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of larger data-sensitive systems. In the third, we develop an approach for modeling and model-based...

  7. Sensitivity study on the parameters of the regional hydrology model for the Nevada nuclear waste storage investigations

    International Nuclear Information System (INIS)

    Iman, R.L.; Davenport, J.M.; Waddell, R.K.; Stephens, H.P.; Leap, D.I.

    1979-01-01

    Statistical methodology has been applied to the investigation of the regional hydrologic systems of a large area encompassing the Nevada Test Site (NTS), as part of the overall evaluation of the NTS for deep geologic disposal of nuclear waste. Statistical techniques, including Latin hypercube sampling, were used to perform a sensitivity analysis on a two-dimensional finite-element model, comprising 16 geohydrologic zones, of the regional ground-water flow system. The Latin hypercube sample was modified to include correlations between corresponding variables from zone to zone. From the results of the sensitivity analysis it was found that: (1) the rankings of the relative importance of input variables at different locations within the same geohydrologic zone were similar, but not identical; and (2) the inclusion of a correlation structure for the input variables had a significant effect on the ranking of their relative importance. The significance of these results is discussed with respect to the hydrology of the region.
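Latin hypercube sampling with a correlation structure imposed between variables is commonly implemented via an Iman-Conover-style rank reordering. A minimal numpy sketch (the dimensions, sample size and target correlation below are arbitrary illustrations, not the 16-zone model's inputs):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """LHS: one uniform draw per equal-probability stratum, per dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])  # decouple the strata orderings across dimensions
    return u

def induce_rank_correlation(u, target_corr, rng):
    """Iman-Conover-style reordering: rearrange each column so its ranks match
    those of correlated Gaussian scores; marginal stratification is unchanged."""
    n, d = u.shape
    z = rng.multivariate_normal(np.zeros(d), target_corr, size=n)
    out = np.empty_like(u)
    for j in range(d):
        ranks = np.argsort(np.argsort(z[:, j]))  # rank of each Gaussian score
        out[:, j] = np.sort(u[:, j])[ranks]      # place sorted values by rank
    return out

rng = np.random.default_rng(1)
u = latin_hypercube(1000, 2, rng)
v = induce_rank_correlation(u, np.array([[1.0, 0.8], [0.8, 1.0]]), rng)
print("achieved sample correlation:", round(float(np.corrcoef(v.T)[0, 1]), 3))
```

Because only the ordering of each column changes, every variable keeps exactly one sample per stratum; the achieved Pearson correlation tracks the rank correlation of the Gaussian scores, so it lands near, not exactly at, 0.8.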

  8. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust biological responses are with respect to changes in biological parameters, and about which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. Global sensitivity analysis approaches, on the other hand, are applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
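The local-versus-global distinction drawn in this review can be made concrete on a toy model. Below, local sensitivity is a normalized finite-difference derivative at a nominal point, and the global indices are crude first-order variance-based estimates obtained by binning Monte Carlo samples (a simplified stand-in for methods such as Sobol indices; the Michaelis-Menten model and parameter ranges are invented):

```python
import numpy as np

# Toy model: steady-state Michaelis-Menten output y = Vmax*S/(Km+S), fixed S.
def model(vmax, km, s=2.0):
    return vmax * s / (km + s)

# Local sensitivity: normalized finite-difference derivative at a nominal point.
def local_sensitivity(vmax, km, rel_step=1e-6):
    y0 = model(vmax, km)
    dv, dk = vmax * rel_step, km * rel_step
    s_vmax = (model(vmax + dv, km) - y0) / dv * (vmax / y0)
    s_km = (model(vmax, km + dk) - y0) / dk * (km / y0)
    return s_vmax, s_km

# Global sensitivity: crude first-order variance-based indices estimated by
# binning Monte Carlo samples (Var(E[y|p]) / Var(y) for each parameter p).
def global_sensitivity(n=20000, seed=0):
    rng = np.random.default_rng(seed)
    vmax = rng.uniform(0.5, 2.0, n)
    km = rng.uniform(0.1, 5.0, n)
    y = model(vmax, km)
    indices = {}
    for name, p in (("Vmax", vmax), ("Km", km)):
        edges = np.quantile(p, np.linspace(0.0, 1.0, 21))
        which = np.clip(np.digitize(p, edges) - 1, 0, 19)
        cond_means = np.array([y[which == b].mean() for b in range(20)])
        indices[name] = cond_means.var() / y.var()
    return indices

print("local:", local_sensitivity(1.0, 1.0))
print("global:", global_sensitivity())
```

The local measure depends entirely on the chosen nominal point, while the global indices integrate over the full parameter ranges, which is exactly the trade-off the review describes.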

  9. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
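
The inversion step behind any temperature-based death time estimate can be illustrated with a lumped single-exponential cooling law. The study itself uses a full finite element heat-transfer model, so the sketch below, with invented parameter values, only shows how a measured temperature is inverted to a cooling time and how sensitive that estimate is to an error in one input:

```python
import numpy as np

# All parameter values are illustrative, not taken from the study.
T_amb, T_0, k = 18.0, 37.2, 0.08   # ambient (degC), initial (degC), rate (1/h)

def temperature(t_hours):
    """Lumped single-exponential (Newtonian) cooling curve."""
    return T_amb + (T_0 - T_amb) * np.exp(-k * t_hours)

def cooling_time(T_measured):
    """Invert the cooling curve for the time since death, in hours."""
    return -np.log((T_measured - T_amb) / (T_0 - T_amb)) / k

t_est = cooling_time(temperature(6.5))     # round trip: recovers 6.5 h

# sensitivity check: assume a 1 degC error in the ambient temperature
T_meas = temperature(6.5)
t_biased = -np.log((T_meas - (T_amb + 1.0)) / (T_0 - (T_amb + 1.0))) / k
```

Even this one-parameter perturbation shifts the estimated cooling time noticeably, which is the kind of input uncertainty the paper's sensitivity analysis quantifies for the full FE model.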

  10. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetyl nitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  11. Development of the electromagnetic tomography system. Sensitivity study of anomalous body by model studies; EM tomography system no kaihatsu. Model kaiseki ni yoru ijotai no kando chosa kekka

    Energy Technology Data Exchange (ETDEWEB)

    Kumekawa, Y; Miura, Y; Takasugi, S [Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Arai, E [Metal Mining Agency of Japan, Tokyo (Japan)

    1996-05-01

    An examination was made by model analysis of the sensitivity and related characteristics with respect to a resistive anomalous body, in connection with an electromagnetic tomography system using surface sources (transmitters) and underground receiver arrays. The resistivity model was a three-dimensional structure, built with a 5 ohm·m low-resistivity anomalous body embedded in a 100 ohm·m homogeneous medium. As a result of the examination, it was shown that the size limit of an analyzable anomalous body was 50×50×20 m at a frequency of 8 to 10 kHz, and that a system with high precision in the high frequency range was necessary. The examination of the effects of a shallow anomalous body revealed, for example, that the fluctuation of the low-frequency response was large compared with a deep anomalous body and that, where a second anomalous body existed beneath it, the effect also appeared with a surface source positioned on the opposite side from the anomalous body. The examination of the effects of the three-dimensional structure revealed, for example, that a remarkable change appeared in the data with changes in the angle of the transmission line relative to the strike of the anomalous body. 4 refs., 7 figs.

  12. A sensitivity study on modeling black carbon in snow and its radiative forcing over the Arctic and Northern China

    International Nuclear Information System (INIS)

    Qian, Yun; Wang, Hailong; Rasch, Philip J; Zhang, Rudong; Flanner, Mark G

    2014-01-01

    Black carbon in snow (BCS) simulated in the Community Atmosphere Model (CAM5) is evaluated against measurements over Northern China and the Arctic, and its sensitivity to atmospheric deposition and two parameters that affect post-depositional enrichment is explored. Improvements in atmospheric BC transport and deposition significantly reduce the biases (by a factor of two) in the estimation of BCS concentration over both Northern China and the Arctic. Further sensitivity simulations using the improved CAM5 indicate that the melt-water scavenging efficiency (MSE) parameter plays an important role in regulating BC concentrations in the Arctic through the post-depositional enrichment, which not only drastically changes the amplitude but also shifts the seasonal cycle of the BCS concentration and its radiative forcing in the Arctic. The impact of the snow aging scaling factor (SAF) on BCS shows more complex latitudinal and seasonal dependence, and overall impact of SAF is much smaller than that of MSE. The improvements of BC transport and deposition in CAM5 have a stronger influence on BCS than perturbations of the two snow model parameters in Northern China. (letters)

  13. Exploring Intercultural Sensitivity in Early Adolescence: A Mixed Methods Study

    Science.gov (United States)

    Mellizo, Jennifer M.

    2017-01-01

    The purpose of this mixed methods study was to explore levels of intercultural sensitivity in a sample of fourth to eighth grade students in the United States (n = 162). "Intercultural sensitivity" was conceptualised through Bennett's Developmental Model of Intercultural Sensitivity, and assessed through the Adapted Intercultural Sensitivity Index.…

  14. Oral sensitization to food proteins: A Brown Norway rat model

    NARCIS (Netherlands)

    Knippels, L.M.J.; Penninks, A.H.; Spanhaak, S.; Houben, G.F.

    1998-01-01

    Background: Although several in vivo antigenicity assays using parenteral immunization are operational, no adequate enteral sensitization models are available to study food allergy and allergenicity of food proteins. Objective: This paper describes the development of an enteral model for food

  15. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    Science.gov (United States)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and analysis of real-size field experiments have supported the development of process-based rockfall simulation models. Availability of high resolution remote sensing data and high-performance computing now make it possible to implement them for operational applications, e.g. risk zoning and protection structure design. One key parameter regarding rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts in the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees, ... as seen from the incoming rock) relative to the average slope are estimated. The use of high resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock / ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of objects. Digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rebound of the rock on the ground is modeled by changes in the velocity of the block's center of gravity due to impact. Thus, the use of a DTM with resolution smaller than the block size might have little relevance while increasing the computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D in order to derive guidelines for roughness estimation by field experts. First, a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) where the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more

  16. A comparison between the minimal model and the glucose clamp in the assessment of insulin sensitivity across the spectrum of glucose tolerance. Insulin Resistance Atherosclerosis Study.

    Science.gov (United States)

    Saad, M F; Anderson, R L; Laws, A; Watanabe, R M; Kades, W W; Chen, Y D; Sands, R E; Pei, D; Savage, P J; Bergman, R N

    1994-09-01

    An insulin-modified frequently sampled intravenous glucose tolerance test (FSIGTT) with minimal model analysis was compared with the glucose clamp in 11 subjects with normal glucose tolerance (NGT), 20 with impaired glucose tolerance (IGT), and 24 with non-insulin-dependent diabetes mellitus (NIDDM). The insulin sensitivity index (SI) was calculated from FSIGTT using 22- and 12-sample protocols (SI(22) and SI(12), respectively). Insulin sensitivity from the clamp was expressed as SI(clamp) and SIP(clamp). Minimal model parameters were similar when calculated with SI(22) and SI(12). SI could not be distinguished from 0 in approximately 50% of diabetic patients with either protocol. SI(22) correlated significantly with SI(clamp) in the whole group (r = 0.62), and in the NGT (r = 0.53), IGT (r = 0.48), and NIDDM (r = 0.41) groups (P < 0.05 for all). When SI(22), SI(clamp), and SIP(clamp) were expressed in the same units, SI(22) was 66 +/- 5% (mean +/- SE) and 50 +/- 8% lower than SI(clamp) and SIP(clamp), respectively. Thus, minimal model analysis of the insulin-modified FSIGTT provides estimates of insulin sensitivity that correlate significantly with those from the glucose clamp. The correlation was weaker, however, in NIDDM. The insulin-modified FSIGTT can be used as a simple test for assessment of insulin sensitivity in population studies involving nondiabetic subjects. Additional studies are needed before using this test routinely in patients with NIDDM.

  17. The GFDL global atmosphere and land model AM4.0/LM4.0: 2. Model description, sensitivity studies, and tuning strategies

    Science.gov (United States)

    Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, Krista A.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, Paul C.D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.

    2018-01-01

    In Part 2 of this two‐part paper, documentation is provided of key aspects of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). The quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode has been provided in Part 1. Part 2 provides documentation of key components and some sensitivities to choices of model formulation and values of parameters, highlighting the convection parameterization and orographic gravity wave drag. The approach taken to tune the model's clouds to observations is a particular focal point. Care is taken to describe the extent to which aerosol effective forcing and Cess sensitivity have been tuned through the model development process, both of which are relevant to the ability of the model to simulate the evolution of temperatures over the last century when coupled to an ocean model.

  18. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 2. Model Description, Sensitivity Studies, and Tuning Strategies

    Science.gov (United States)

    Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, K.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, P. C. D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L. G.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.

    2018-03-01

    In Part 2 of this two-part paper, documentation is provided of key aspects of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). The quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode has been provided in Part 1. Part 2 provides documentation of key components and some sensitivities to choices of model formulation and values of parameters, highlighting the convection parameterization and orographic gravity wave drag. The approach taken to tune the model's clouds to observations is a particular focal point. Care is taken to describe the extent to which aerosol effective forcing and Cess sensitivity have been tuned through the model development process, both of which are relevant to the ability of the model to simulate the evolution of temperatures over the last century when coupled to an ocean model.

  19. Skin care products can aggravate epidermal function: studies in a murine model suggest a pathogenic role in sensitive skin

    OpenAIRE

    Li, Z; Hu, L; Elias, PM; Man, M-Q

    2018-01-01

    Sensitive skin is defined as a spectrum of unpleasant sensations in response to a variety of stimuli. However, only some skin care products provoke cutaneous symptoms in individuals with sensitive skin. Hence, it would be useful to identify products that could provoke cutaneous symptoms in individuals with sensitive skin. To assess whether vehicles, as well as certain branded skin care products, can alter epidermal function following topical applications to normal mouse skin. Following topical ...

  20. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    International Nuclear Information System (INIS)

    Hernandez Gonzalez, Francisco Alberto

    2016-01-01

    low intrusion has been confirmed by in-situ effective thermal conductivity measurements of the pebble bed at room temperature by the hot wire method. Steady state runs at 5 heating power levels, encompassing the highest heat generation to be expected in the BU, and relevant transient pulses have been performed. The 2D thermal map of the pebble bed at any power level has revealed a mostly symmetric distribution, and no significant differences could be observed between the temperatures read on the top and bottom surfaces at the interface layer between the pebble bed boundary and the test box of PREMUX. As a provision of modeling tools, two complementary approaches have been developed, aiming at a comprehensive modeling tool for prediction and validation purposes. The first is a deterministic, simplified thermo-mechanical model implemented in the commercial finite element code ANSYS. This model represents basic phenomena in the pebble beds, namely nonlinear elasticity, Drucker-Prager Cap plasticity, a non-associative flow rule and an isotropic hardening law. Preliminary validation of the model against the available literature on uniaxial compression tests, comparing the axial compression stress against pebble bed strain at different temperatures, has shown good agreement (root mean square errors <10%). The application of the model to PREMUX has shown good general agreement as well with the temperature distribution dataset obtained during the experimental campaign with PREMUX. The predicted peak hydrostatic pressures are about 2.1 MPa and are located around the central heaters and thermocouples, while the maximum values for the bulk of the pebble bed are about 1.4 MPa. The second modeling approach is based on a probabilistic finite element method, which takes into account the inherent uncertainties of the model's input parameters and permits running a stochastic sensitivity analysis to obtain statistical information about the model outputs.
This approach has been applied to a thermal model of PREMUX developed with

  1. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Gonzalez, Francisco Alberto

    2016-10-14

    low intrusion has been confirmed by in-situ effective thermal conductivity measurements of the pebble bed at room temperature by the hot wire method. Steady state runs at 5 heating power levels encompassing the highest heat generation to be expected in the BU and relevant transient pulses have been performed. The 2D thermal map of the pebble bed at any power level has revealed a mostly symmetric distribution and no significant differences could be observed between the temperature read on the top and bottom surfaces at the interface layer between the pebble bed boundary and the test box of PREMUX. As a provision of modeling tools, two complementary approaches have been developed, aiming at giving a comprehensive modeling tool for prediction and validation purposes. The first is a deterministic, simplified thermo-mechanical model implemented in the commercial finite element code ANSYS. This model represents basic phenomena in the pebble beds, namely nonlinear elasticity, Drucker-Prager Cap plasticity, a non-associative flow rule and an isotropic hardening law. Preliminary validation of the model with the available literature on uniaxial compression tests comparing the axial compression stress against pebble bed strain at different temperatures has shown a good agreement (root mean square errors <10%). The application of the model to PREMUX has shown a good general agreement as well with the temperature distribution dataset obtained during the experimental campaign with PREMUX. The predicted peak hydrostatic pressures are about 2.1 MPa and are located around the central heaters and thermocouples, while the maximum values for the bulk of the pebble bed are about 1.4 MPa. The second modeling approach is based on a probabilistic finite element method, which takes into account the inherent uncertainties of the model's input parameters and permits running a stochastic sensitivity analysis to obtain statistical information about the model outputs. This approach has been

  2. Hydrological variability in the Fraser River Basin during the 20th century: A sensitivity study with the VIC model

    Science.gov (United States)

    Kang, D.; Gao, H.; Dery, S. J.

    2012-12-01

    The Variable Infiltration Capacity (VIC) model, a macroscale surface hydrology model, was applied to the Fraser River Basin (FRB) of British Columbia, Canada. Previous modeling studies have demonstrated that the FRB is a snow-dominated system, but with climate change it may evolve to a pluvial regime. The ultimate goal of this model application is to evaluate the changing contribution of snowmelt to streamflow in the FRB, both spatially and temporally. To this end, the National Centers for Environmental Prediction (NCEP) reanalysis data combined with meteorological observations over 1953 to 2006 are used to drive the model at a resolution of 0.25°. Model simulations are first validated against daily discharge observations from the Water Survey of Canada (WSC). In addition, the snow water equivalent (SWE) results from VIC are compared with snow pillow observations from the B.C. Ministry of Environment. Then the peak SWE values simulated each winter are compared with the annual runoff data to quantify the changing contribution of snowmelt to the hydrology of the FRB. The response of streamflow and the surface energy and mass balance to perturbed model forcings, such as precipitation and air temperature, is then evaluated. Finally, interactions between the land surface and the ambient atmosphere are evaluated by analyzing VIC results such as evaporation, soil moisture, snowmelt and sensible/latent heat fluxes against the corresponding meteorological forcings, i.e. precipitation and air temperature.

  3. Global sensitivity and uncertainty analysis of an atmospheric chemistry transport model: the FRAME model (version 9.15.0) as a case study

    Science.gov (United States)

    Aleksankina, Ksenia; Heal, Mathew R.; Dore, Anthony J.; Van Oijen, Marcel; Reis, Stefan

    2018-04-01

    Atmospheric chemistry transport models (ACTMs) are widely used to underpin policy decisions associated with the impact of potential changes in emissions on future pollutant concentrations and deposition. It is therefore essential to have a quantitative understanding of the uncertainty in model output arising from uncertainties in the input pollutant emissions. ACTMs incorporate complex and non-linear descriptions of chemical and physical processes, which means that interactions and non-linearities in input-output relationships may not be revealed through the local one-at-a-time sensitivity analysis typically used. The aim of this work is to demonstrate a global sensitivity and uncertainty analysis approach for an ACTM, using as an example the FRAME model, which is extensively employed in the UK to generate source-receptor matrices for the UK Integrated Assessment Model and to estimate critical load exceedances. An optimised Latin hypercube sampling design was used to construct model runs within a ±40 % variation range for the UK emissions of SO2, NOx, and NH3, from which regression coefficients for each input-output combination and each model grid cell (> 10 000 across the UK) were calculated. Surface concentrations of SO2, NOx, and NH3 (and deposition of S and N) were found to be predominantly sensitive to the emissions of the respective pollutant, while the sensitivities of secondary species such as HNO3 and particulate SO42-, NO3-, and NH4+ to pollutant emissions were more complex and geographically variable. The uncertainties in model output variables were propagated from the uncertainty ranges reported by the UK National Atmospheric Emissions Inventory for the emissions of SO2, NOx, and NH3 (±4, ±10, and ±20 %, respectively). The uncertainties in the surface concentrations of NH3 and NOx and the depositions of NHx and NOy were dominated by the uncertainties in the emissions of NH3 and NOx, respectively, whilst concentrations of SO2 and deposition of SOy were affected
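
The regression-coefficient approach described above can be sketched as follows. The Latin hypercube over ±40 % emission scalings follows the abstract, while the single-cell "concentration" response, its coefficients, and its noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Latin hypercube over +/-40 % scalings of SO2, NOx, NH3 emissions:
# each row of strata is independently permuted, giving one sample per bin.
strata = np.tile(np.arange(n), (3, 1))
u = (rng.permuted(strata, axis=1).T + rng.random((n, 3))) / n
scalings = 0.6 + 0.8 * u            # 0.6 .. 1.4 times base emissions

# hypothetical nonlinear concentration response at one grid cell
so2, nox, nh3 = scalings.T
conc = 5.0 * so2 - 1.2 * so2 * nh3 + 0.3 * nox + rng.normal(0.0, 0.05, n)

# standardized regression coefficients from the sampled input-output pairs
X = (scalings - scalings.mean(axis=0)) / scalings.std(axis=0)
yz = (conc - conc.mean()) / conc.std()
beta, *_ = np.linalg.lstsq(X, yz, rcond=None)
```

The standardized coefficients rank the emissions by influence on this output; in the real study one such regression is fitted per input-output combination and per grid cell.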

  4. Modeling thermal structure, ice cover regime and sensitivity to climate change of two regulated lakes - a Norwegian case study

    Science.gov (United States)

    Gebre, Solomon; Boissy, Thibault; Alfredsen, Knut

    2013-04-01

    A great number of rivers and lakes in Norway, and in the Nordic region at large, are regulated for water management purposes such as hydropower production. Such regulations have the potential to alter the thermal and hydrological regimes in the lakes and rivers downstream, impacting the river environment and ecology. Anticipated climate-driven changes in meteorological forcing data such as air temperature and precipitation cause changes in the water balance, water temperature and ice cover duration in the reservoirs. This may necessitate changes in operational rules as part of an adaptation strategy for the future. In this study, a one-dimensional (1D) lake thermodynamic and ice cover model (MyLake) has been modified to take into account the effect of dynamic outflows in reservoirs and applied to two small but relatively deep regulated lakes (reservoirs) in Norway (Follsjøen and Tesse). The objective was to assess climate change impacts on the seasonal thermal characteristics, the withdrawal temperatures, and the reservoir ice cover dynamics with current operational regimes. The model solves the vertical energy balance on a daily time-step driven by meteorological and hydrological forcings: 2m air temperature, precipitation, 2m relative humidity, 10m wind speed, cloud cover, air pressure, solar insolation, inflow volume, inflow temperature and reservoir outflows. Model calibration with multi-seasonal data of temperature profiles showed that the model performed well in simulating the vertical water temperature profiles for the two study reservoirs. The withdrawal temperatures were also simulated reasonably well. The comparison between observed and simulated lake ice phenology (which was available only for one of the reservoirs, Tesse) was also reasonable, taking into account the uncertainty in the observational data. After model testing and calibration, the model was then used to simulate expected changes in the future (2080s) due to climate change by considering

  5. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  6. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to perform sensitivity analysis of the results of simulation of thermal-hydraulic codes within a BEPU approach. Sensitivity analysis is based on the computation of Sobol' indices making use of a meta-model. An application is also presented to a Large-Break Loss Of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
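
A minimal sketch of the meta-model idea: a cheap surrogate is fitted to a limited number of "code" runs, and the Sobol' indices are then estimated from many surrogate evaluations. Here the standard Ishigami benchmark stands in for the expensive thermal-hydraulic code, and the basis choice and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def code_run(x):
    """Stand-in for an expensive code run (the Ishigami test function)."""
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

# 1) a limited number of "code" runs to train a cheap meta-model
X_train = rng.uniform(-np.pi, np.pi, (300, 3))
y_train = code_run(X_train)

def basis(X):
    # basis functions assumed to be chosen from a prior screening step
    return np.column_stack([np.ones(len(X)), np.sin(X[:, 0]),
                            np.sin(X[:, 1]) ** 2, X[:, 2] ** 4 * np.sin(X[:, 0])])

coef, *_ = np.linalg.lstsq(basis(X_train), y_train, rcond=None)
surrogate = lambda X: basis(X) @ coef

# 2) Sobol' first-order indices from many cheap surrogate evaluations
def sobol_first_order(f, d, n, rng):
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]       # replace column i of A with column i of B
        s[i] = np.mean(fB * (f(ABi) - fA)) / total_var   # pick-freeze estimator
    return s

s1 = sobol_first_order(surrogate, 3, 100_000, rng)
```

The estimator needs n·(d + 2) evaluations, which is why it is run on the surrogate rather than on the code itself.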

  7. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Five evapotranspiration (Et) models - the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models - were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...
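
Parameter sensitivity of an Et model can be quantified with the dimensionless relative sensitivity coefficient S = (dEt/dx)(x/Et), i.e. the percentage change in Et per percent error in input x. A sketch for a common form of the Thornthwaite formula (the temperature and heat-index values are illustrative, and the day-length correction is omitted):

```python
import numpy as np

def thornthwaite_pet(T, I):
    """Monthly potential Et (mm) from a common form of the Thornthwaite
    formula, without the day-length correction factor."""
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (10.0 * T / I) ** a

def relative_sensitivity(f, x, h=1e-6):
    """Dimensionless sensitivity d(ln Et)/d(ln x) by central difference."""
    return (f(x * (1 + h)) - f(x * (1 - h))) / (2.0 * h * f(x))

T, I = 25.0, 80.0        # mean monthly temperature (degC), annual heat index
s_T = relative_sensitivity(lambda t: thornthwaite_pet(t, I), T)
```

Because PET is proportional to T raised to the exponent a, the coefficient equals a exactly here; the finite-difference form generalizes to models without a closed-form derivative.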

  8. Transport sensitivity studies for SITE-94: Time-dependent site-scale modelling of future glacial impact

    International Nuclear Information System (INIS)

    King-Clayton, L.; Smith, Paul

    1996-10-01

    The report details the methodology and preliminary results from the modelling of radionuclide transport from a hypothetical repository based at the Aespoe site in Sweden. The work complements and utilizes the results from regional-scale, variable-density flow modelling by Provost, in which the groundwater flow field is time dependent, reflecting the impact of climate evolution over the next 130,000 years. The climate evolution includes the development of permafrost conditions and ice sheet advance and retreat. The results indicate that temporal changes in flow conditions owing to future climate changes have a significant effect on the transport of radionuclides from a repository. In all cases modelled with time-dependent boundary conditions, the greatest radionuclide fluxes occur towards the end of the main glacial periods, and correspond to periods of high groundwater discharge at the margin of the modelled ice sheets. Fluxes to the biosphere may, for limited periods (2000 years or less), be three times higher than those from the near field. The study provides a quantitative way of illustrating the possible effects of future glaciations on radionuclide transport from the repository. Such effects are likely to be significant in any potential siting area predicted to be affected by future periods of ice cover. 8 refs, 22 tabs, 119 figs

  9. The mobilisation model and parameter sensitivity

    International Nuclear Information System (INIS)

    Blok, B.M.

    1993-12-01

    In the PRObabilistic Safety Assessment (PROSA) of radioactive waste in a salt repository, one of the nuclide release scenarios is the subrosion scenario. A new subrosion model, SUBRECN, has been developed. In this model the combined effect of depth-dependent subrosion, glass dissolution, and salt rise has been taken into account. The subrosion model SUBRECN and the implementation of this model in the German computer program EMOS4 are presented. A new computer program, PANTER, is derived from EMOS4. PANTER models releases of radionuclides via subrosion from a disposal site in a salt pillar into the biosphere. For uncertainty and sensitivity analyses of the new subrosion model, Latin Hypercube Sampling has been used to determine values for the uncertain parameters. The influence of the parameter uncertainty on the dose calculations has been investigated with the following sensitivity techniques: Spearman Rank Correlation Coefficients, Partial Rank Correlation Coefficients, Standardised Rank Regression Coefficients, and the Smirnov Test. (orig./HP)
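
One of the techniques listed above, the Partial Rank Correlation Coefficient, can be sketched with a toy model; the three parameters, their mutual correlation, and the response function below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
# three hypothetical uncertain parameters; x0 and x1 are themselves correlated
x = rng.random((n, 3))
x[:, 1] = 0.7 * x[:, 0] + 0.3 * rng.random(n)
y = 2.0 * x[:, 0] + 0.5 * x[:, 2] ** 2 + rng.normal(0.0, 0.1, n)

def ranks(a):
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial rank correlation of each input with the output, controlling
    for the other inputs (rank transform + regression-residual method)."""
    R, ry = ranks(X), ranks(y)
    out = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        A = np.column_stack([np.ones(len(ry)), np.delete(R, i, axis=1)])
        res_x = R[:, i] - A @ np.linalg.lstsq(A, R[:, i], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

coeffs = prcc(x, y)
```

Because x1 influences nothing directly, its partial coefficient is near zero once x0 is controlled for, even though its plain rank correlation with y would be large; this is exactly the distinction the PRCC technique is used for.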

  10. sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)

    user

    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  11. ATLAS MDT neutron sensitivity measurement and modeling

    International Nuclear Information System (INIS)

    Ahlen, S.; Hu, G.; Osborne, D.; Schulz, A.; Shank, J.; Xu, Q.; Zhou, B.

    2003-01-01

    The sensitivity of the ATLAS precision muon detector element, the Monitored Drift Tube (MDT), to fast neutrons has been measured using a 5.5 MeV Van de Graaff accelerator. The major mechanism of neutron-induced signals in the drift tubes is the elastic collisions between the neutrons and the gas nuclei. The recoil nuclei lose kinetic energy in the gas and produce the signals. By measuring the ATLAS drift tube neutron-induced signal rate and the total neutron flux, the MDT neutron signal sensitivities were determined for different drift gas mixtures and for different neutron beam energies. We also developed a sophisticated simulation model to calculate the neutron-induced signal rate and signal spectrum for ATLAS MDT operation configurations. The calculations agree with the measurements very well. This model can be used to calculate the neutron sensitivities for different gaseous detectors and for neutron energies above those available to this experiment

  12. A sensitivity study of radiative-convective equilibrium in the tropics with a convection-resolving model

    Energy Technology Data Exchange (ETDEWEB)

    Xu, K.M.; Randall, D.A.

    1999-10-01

    Statistical-equilibrium (SE) states of radiative-convective systems in tropical oceanic conditions are simulated with a cloud ensemble model (CEM) in this study. Typical large-scale conditions from the Marshall Islands and the eastern tropical Atlantic regions are used to drive the CEM. The simulated SE precipitable water, column temperature, and relative humidity are only slightly higher than those of the observed mean states in both regions when time-invariant large-scale total advective cooling and moistening effects are imposed from observations. They are much higher than the observed if time-invariant observed large-scale ascent is imposed for the Marshall Islands region (i.e., ignoring horizontal advective effects). Compared with results from two similar studies, this SE state is somewhere between the cold/dry regime by Sui et al. and the warm/humid regime by Grabowski et al. Temporal variations of the imposed large-scale vertical motion that allows for subsidence make the SE state colder and drier. It remains about the same, however, if the magnitude of the imposed large-scale vertical motion is halved. The SE state is also colder and drier if solar radiation is absent. In general, all the SE states show that wet columns are thermally more stable (unstable) and dry columns are thermally more unstable (stable) in the lower (upper) troposphere. Column budget analyses are performed to explore the differences among the simulations performed in this study and among the different studies.

  13. The sensitivity of the ESA DELTA model

    Science.gov (United States)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  14. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
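
    The bootstrap convergence criterion described above (do resampled sensitivity estimates keep producing the same parameter ranking?) can be sketched in a few lines. This is a minimal illustration with an absolute correlation standing in for the study's EET/RSA/variance-based indices; the toy model and all names are my own.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranking_convergence(X, y, n_boot, rng):
    """Bootstrap the model runs, recompute |correlation| per parameter,
    and report the modal parameter ranking plus the fraction of
    resamples reproducing it (1.0 = fully converged ranking)."""
    n, k = len(y), len(X[0])
    rankings = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [y[i] for i in idx]
        scores = [abs(pearson([X[i][j] for i in idx], yb)) for j in range(k)]
        rankings.append(tuple(sorted(range(k), key=lambda j: -scores[j])))
    modal = max(set(rankings), key=rankings.count)
    return modal, rankings.count(modal) / n_boot

rng = random.Random(0)
X = [[rng.random() for _ in range(3)] for _ in range(500)]
# Toy model: parameter 0 dominates, parameter 1 is secondary, parameter 2 inert.
y = [5.0 * a + 2.0 * b + 0.0 * c for a, b, c in X]
modal, stability = ranking_convergence(X, y, n_boot=200, rng=rng)
```

If `stability` stays low as the sample size grows, the ranking has not converged, which is exactly the situation the abstract warns about for sample sizes typically reported in the literature.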

  15. Stress sensitivity and resilience in the chronic mild stress rat model of depression; an in situ hybridization study

    DEFF Research Database (Denmark)

    Bergström, A; Jayatissa, M N; Mørk, A

    2008-01-01

    in stress. Moreover, in the CA3 we found downregulation of vascular endothelial growth factor (VEGF) mRNA in the CMS sensitive group. Downregulation of VEGF suggests impaired hippocampal function, caused by loss of trophic factor neuroprotective support, as part of a previously uncharacterized mechanism...... for development of anhedonia. CMS induced anhedonia was not related to mRNA expression differences of the dopamine receptors D(1) and D(2), enkephalin, dynorphin, the NMDA receptor subtype NR2B in the ventral striatum, BDNF expression in the dentate gyrus, nor corticotrophin releasing hormone (CRH) and arginine...

  16. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model as compared to a conventional time series model indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6% and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and also because the confidence level decreases as the projection is made farther into the future. (author)

  17. Parameter Sensitivity Study of the Unreacted-Core Shrinking Model: A Computer Activity for Chemical Reaction Engineering Courses

    Science.gov (United States)

    Tudela, Ignacio; Bonete, Pedro; Fullana, Andres; Conesa, Juan Antonio

    2011-01-01

    The unreacted-core shrinking (UCS) model is employed to characterize fluid-particle reactions that are important in industry and research. An approach to understand the UCS model by numerical methods is presented, which helps the visualization of the influence of the variables that control the overall heterogeneous process. Use of this approach in…

  18. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, while in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 × CO₂ sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
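
    The model hierarchy the abstract describes can be made concrete. In my own notation (a sketch, not necessarily the paper's symbols), the ZDM linearizes the top-of-the-atmosphere budget about a reference state, and Model B adds an explicit dynamical heat transport between two equal-area zones:

```latex
% Zero-dimensional model (ZDM): linearized global-mean TOA budget
C \frac{dT'}{dt} = F - \lambda T' ,
\qquad \Delta T_{\mathrm{eq}} = \frac{F}{\lambda}
\quad (\text{stable iff } \lambda > 0)

% Model B: tropics (1) and extratropics (2) of equal area,
% with explicit dynamical heat transport D (T'_1 - T'_2)
C_1 \frac{dT'_1}{dt} = F_1 - \lambda_1 T'_1 - D\,(T'_1 - T'_2)
C_2 \frac{dT'_2}{dt} = F_2 - \lambda_2 T'_2 + D\,(T'_1 - T'_2)
```

    At equilibrium the global-mean response (ΔT'_1 + ΔT'_2)/2 depends asymmetrically on λ_1, λ_2 and D; it collapses to the ZDM form F/λ with a single global-mean λ only in special cases, which is precisely the validity question the abstract addresses.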

  19. Sensitivity studies for supernovae type Ia

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Thien Tam; Goebel, Kathrin; Reifarth, Rene [Goethe University Frankfurt am Main (Germany); Calder, Alan [SUNY - Department of Physics and Astronomy, New York (United States); Pignatari, Marco [Konkoly Observatory of the Hungarian Academy of Sciences (Hungary); Townsley, Dean [The University of Alabama (United States); Travaglio, Claudia [INAF - Astrophysical Observatory, Turin (Italy); Collaboration: NuGrid collaboration

    2016-07-01

    The NuGrid research platform provides a simulation framework to study nucleosynthesis in multi-dimensional Type Ia supernova models. We use a large network of over 5,000 isotopes and more than 60,000 reactions. The nucleosynthesis is investigated in post-processing simulations with temperature and density profiles, initial abundance distributions and a set of reaction rates as input. The sensitivity of the isotopic abundances to α-, proton-, and neutron-capture reactions, their inverse reactions, and fusion reactions was investigated. First results have been achieved for different mass coordinates of the exploding star.

  20. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift

  1. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
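
    A plug-in (histogram) estimator is the simplest way to see why mutual information captures the non-monotonic associations that rank correlation misses. This is a sketch under assumed choices (equal-width binning, 8 bins, toy data), not the estimator actually used in the paper:

```python
import math
import random

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (in nats) from an equal-width 2-D histogram."""
    n = len(x)
    lox, hix, loy, hiy = min(x), max(x), min(y), max(y)
    def bx(v): return min(int((v - lox) / (hix - lox) * bins), bins - 1)
    def by(v): return min(int((v - loy) / (hiy - loy) * bins), bins - 1)
    joint, px, py = {}, [0] * bins, [0] * bins
    for xi, yi in zip(x, y):
        i, j = bx(xi), by(yi)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    # KL divergence of the joint histogram from the product of marginals
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in joint.items())

rng = random.Random(1)
x = [rng.random() for _ in range(4000)]
y_dep = [(v - 0.5) ** 2 for v in x]            # non-monotonic dependence
y_ind = [rng.random() for _ in range(4000)]    # independent of x
mi_dep = mutual_information(x, y_dep)
mi_ind = mutual_information(x, y_ind)
```

The parabolic example `y_dep` has near-zero rank correlation with `x`, yet its mutual information is large, while the independent pair stays near zero (up to the small positive bias of the plug-in estimator).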

  2. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited as the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10^-2). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10^-3, 10^-2, and 10^-1). The estimate of the hazard is 1.39 × 10^-3, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years
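
    The averaging-over-the-prior step can be sketched with stdlib Monte Carlo. The hazard functional below is a deliberate placeholder (the identity, h(p) = p); Smith's actual AMRV hazard model is more involved, and only the prior-comparison logic of the abstract is illustrated here:

```python
import random

def expect_over_prior(sample_prior, h, n=100_000):
    """Monte Carlo estimate of E_pi[h(p)] over the elicited prior pi(p)."""
    return sum(h(sample_prior()) for _ in range(n)) / n

rng = random.Random(7)

# Convenience prior: p ~ Beta(8, 2), with mean r / (r + s) = 0.8
beta_prior = lambda: rng.betavariate(8, 2)

# "Three-expert" prior: p equally likely at 1e-3, 1e-2, 1e-1
expert_prior = lambda: rng.choice([1e-3, 1e-2, 1e-1])

# Placeholder hazard functional (illustrative only, not Smith's model)
h = lambda p: p

hazard_beta = expect_over_prior(beta_prior, h)      # near 0.8
hazard_expert = expect_over_prior(expert_prior, h)  # near 0.037
```

Even this toy comparison reproduces the qualitative ordering reported in the abstract: the mathematically convenient Beta(8, 2) prior concentrates mass at large p and so yields a far higher expected hazard than the expert-motivated prior.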

  3. Physically-based slope stability modelling and parameter sensitivity: a case study in the Quitite and Papagaio catchments, Rio de Janeiro, Brazil

    Science.gov (United States)

    de Lima Neves Seefelder, Carolina; Mergili, Martin

    2016-04-01

    We use the software tools r.slope.stability and TRIGRS to produce factor of safety and slope failure susceptibility maps for the Quitite and Papagaio catchments, Rio de Janeiro, Brazil. The key objective of the work is to explore the sensitivity of the model outcomes to the geotechnical (r.slope.stability) and geohydraulic (TRIGRS) parameterization, in order to define suitable parameterization strategies for future slope stability modelling. The two landslide-prone catchments Quitite and Papagaio together cover an area of 4.4 km², extending between 12 and 995 m a.s.l. The study area is dominated by granitic bedrock and soil depths of 1-3 m. Ranges of geotechnical and geohydraulic parameters are derived from literature values. A landslide inventory related to a rainfall event in 1996 (250 mm in 48 hours) is used for model evaluation. We attempt to identify those combinations of effective cohesion and effective internal friction angle yielding the best correspondence with the observed landslide release areas, in terms of the area under the ROC Curve (AUCROC) and of the fraction of the area affected by the release of landslides. Thereby we test multiple parameter combinations within defined ranges to derive the slope failure susceptibility (fraction of tested parameter combinations yielding a factor of safety smaller than 1). We use the tool r.slope.stability (comparing the infinite slope stability model and an ellipsoid-based sliding surface model) to test and to optimize the geotechnical parameters, and TRIGRS (a coupled hydraulic-infinite slope stability model) to explore the sensitivity of the model results to the geohydraulic parameters. The model performance in terms of AUCROC is insensitive to variation of the geotechnical parameterization within much of the tested ranges. Assuming fully saturated soils, r.slope.stability produces rather conservative predictions, whereby the results yielded with the sliding surface model are more

  4. Urban Surface Temperature Reduction via the Urban Aerosol Direct Effect: A Remote Sensing and WRF Model Sensitivity Study

    Directory of Open Access Journals (Sweden)

    Menglin Jin

    2010-01-01

    The aerosol direct effect, namely, scattering and absorption of sunlight in the atmosphere, can lower surface temperature by reducing surface insolation. By combining National Aeronautics and Space Administration (NASA) AERONET (AErosol RObotic NETwork) observations in large cities with Weather Research and Forecasting (WRF) model simulations, we find that the aerosol direct reduction of surface insolation ranges from 40–100 W m−2, depending on aerosol loading and land-atmosphere conditions. To elucidate the maximum possible effect, values are calculated using a radiative transfer model based on the top quartile of the multiyear instantaneous aerosol data observed by AERONET sites. As a result, surface skin temperature can be reduced by 1°C–2°C while 2-m surface air temperature reductions are generally on the order of 0.5°C–1°C.

  5. Sensitivity of system stability to model structure

    Science.gov (United States)

    Hosack, G.R.; Li, H.W.; Rossignol, P.A.

    2009-01-01

    A community is stable, and resilient, if the levels of all community variables can return to the original steady state following a perturbation. The stability properties of a community depend on its structure, which is the network of direct effects (interactions) among the variables within the community. These direct effects form feedback cycles (loops) that determine community stability. Although feedback cycles have an intuitive interpretation, identifying how they form the feedback properties of a particular community can be intractable. Furthermore, determining the role that any specific direct effect plays in the stability of a system is even more daunting. Such information, however, would identify important direct effects for targeted experimental and management manipulation even in complex communities for which quantitative information is lacking. We therefore provide a method that determines the sensitivity of community stability to model structure, and identifies the relative role of particular direct effects, indirect effects, and feedback cycles in determining stability. Structural sensitivities summarize the degree to which each direct effect contributes to stabilizing feedback or destabilizing feedback or both. Structural sensitivities prove useful in identifying ecologically important feedback cycles within the community structure and for detecting direct effects that have strong, or weak, influences on community stability. The approach may guide the development of management intervention and research design. We demonstrate its value with two theoretical models and two empirical examples of different levels of complexity. © 2009 Elsevier B.V. All rights reserved.
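
    For the smallest non-trivial case, the link between direct effects, feedback, and stability is explicit: a two-variable community matrix has all eigenvalues with negative real part exactly when its trace is negative and its determinant positive (Routh-Hurwitz conditions). A sketch with invented interaction coefficients, not taken from the paper:

```python
def stable_2x2(A):
    """Routh-Hurwitz test for a 2x2 community matrix: the steady state
    is locally stable iff trace(A) < 0 and det(A) > 0."""
    (a11, a12), (a21, a22) = A
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    return trace < 0 and det > 0

# Predator-prey with self-damping on the prey: the (-,+) loop plus
# self-damping gives stabilizing feedback.
predator_prey = [[-0.5, -1.0],
                 [ 0.8,  0.0]]

# Mutualism with only weak self-damping: the (+,+) loop contributes
# destabilizing positive feedback and the determinant goes negative.
mutualism = [[-0.1,  0.9],
             [ 0.9, -0.1]]
```

Changing a single direct effect (one matrix entry) and re-testing is the toy version of the structural sensitivity the abstract formalizes for larger communities.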

  6. SSI sensitivity studies and model improvements for the US NRC Seismic Safety Margins Research Program. Rev. 1

    International Nuclear Information System (INIS)

    Johnson, J.J.; Maslenikov, O.R.; Benda, B.J.

    1984-10-01

    The Seismic Safety Margins Research Program (SSMRP) is a US NRC-funded program conducted by Lawrence Livermore National Laboratory. Its goal is to develop a complete fully coupled analysis procedure for estimating the risk of an earthquake-induced radioactive release from a commercial nuclear power plant. In Phase II of the SSMRP, the methodology was applied to the Zion nuclear power plant. Three topics in the SSI analysis of Zion were investigated and reported here - flexible foundation modeling, structure-to-structure interaction, and basemat uplift. The results of these investigations were incorporated in the SSMRP seismic risk analysis. 14 references, 51 figures, 13 tables

  7. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently of ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on different input factors such as topography or different soil types. The analysis shows that model evaluation performed at single locations may not be representative for the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and the ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to shorter duration of the snow cover. The sensitivity in the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time have an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several

  8. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport models, i.e., the Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π⁻/π⁺ ratio, and the isospin-sensitive transverse and elliptic flows given by the two transport models with their “best settings” all show obvious differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, while the discrepancies in the charged π⁻/π⁺ ratio and the isospin-sensitive flows mainly originate from different isospin-dependent nucleon–nucleon cross sections. These findings call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the in-medium isospin-dependent nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of nuclear symmetry energy through comparison between experiments and theoretical simulations.

  9. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  10. The effects of aerosols on precipitation and dimensions of subtropical clouds: a sensitivity study using a numerical cloud model

    Directory of Open Access Journals (Sweden)

    A. Teller

    2006-01-01

    Numerical experiments were carried out using the Tel-Aviv University 2-D cloud model to investigate the effects of increased concentrations of Cloud Condensation Nuclei (CCN), giant CCN (GCCN) and Ice Nuclei (IN) on the development of precipitation and cloud structure in mixed-phase sub-tropical convective clouds. In order to differentiate between the contribution of the aerosols and the meteorology, all simulations were conducted with the same meteorological conditions. The results show that under the same meteorological conditions, polluted clouds (with high CCN concentrations) produce less precipitation than clean clouds (with low CCN concentrations), the initiation of precipitation is delayed and the lifetimes of the clouds are longer. GCCN enhance the total precipitation on the ground in polluted clouds but have no noticeable effect on cleaner clouds. The increased rainfall due to GCCN is mainly a result of the increased graupel mass in the cloud, but it only partially offsets the decrease in rainfall due to pollution (increased CCN). The addition of more effective IN, such as mineral dust particles, reduces the total amount of precipitation on the ground. This reduction is more pronounced in clean clouds than in polluted ones. Polluted clouds reach higher altitudes and are wider than clean clouds, and both produce wider clouds (anvils) when more IN are introduced. Since under the same vertical sounding the polluted clouds produce less rain, more water vapor is left aloft after the rain stops. In our simulations about 3.5 times more water evaporates after the rain stops from the polluted cloud as compared to the clean cloud. The implication is that much more water vapor is transported from lower levels to the mid troposphere under polluted conditions, something that should be considered in climate models.

  11. A Study of Nonlinear Elasticity Effects on Permeability of Stress Sensitive Shale Rocks Using an Improved Coupled Flow and Geomechanics Model: A Case Study of the Longmaxi Shale in China

    Directory of Open Access Journals (Sweden)

    Chenji Wei

    2018-02-01

    Gas transport in shale gas reservoirs is largely affected by rock properties such as permeability. These properties are often sensitive to the in-situ stress state changes. Accurate modeling of shale gas transport in shale reservoir rocks considering the stress sensitive effects on rock petrophysical properties is important for successful shale gas extraction. Nonlinear elasticity in stress sensitive reservoir rocks depicts the nonlinear stress-strain relationship, yet it is not thoroughly studied in previous reservoir modeling works. In this study, an improved coupled flow and geomechanics model that considers nonlinear elasticity is proposed. The model is based on finite element methods, and the nonlinear elasticity in the model is validated with experimental data on shale samples selected from the Longmaxi Formation in Sichuan Basin China. Numerical results indicate that, in stress sensitive shale rocks, nonlinear elasticity affects shale permeability, shale porosity, and distributions of effective stress and pore pressure. Elastic modulus change is dependent not only on the in-situ stress state but also on the stress history path. Without considering nonlinear elasticity, the modeling of shale rock permeability in Longmaxi Formation can overestimate permeability values by 1.6 to 53 times.

  12. Application of Weather Research and Forecasting Model with Chemistry (WRF/Chem) over northern China: Sensitivity study, comparative evaluation, and policy implications

    Science.gov (United States)

    Wang, Litao; Zhang, Yang; Wang, Kai; Zheng, Bo; Zhang, Qiang; Wei, Wei

    2016-01-01

    An extremely severe and persistent haze event occurred over middle and eastern China in January 2013, with record-breaking high concentrations of fine particulate matter (PM2.5). In this study, an online-coupled meteorology-air quality model, the Weather Research and Forecasting Model with Chemistry (WRF/Chem), is applied to simulate this pollution episode over East Asia and northern China at 36- and 12-km grid resolutions. A number of simulations are conducted to examine the sensitivities of the model predictions to various physical schemes. The results show that all simulations give similar predictions for temperature, wind speed, wind direction, and humidity, but large variations exist in the predictions for precipitation. The concentrations of PM2.5, particulate matter with aerodynamic diameter of 10 μm or less (PM10), sulfur dioxide (SO2), and nitrogen dioxide (NO2) are overpredicted partially due to the lack of wet scavenging by the chemistry-aerosol option with the 1999 version of the Statewide Air Pollution Research Center (SAPRC-99) mechanism with the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and the Volatility Basis Set (VBS) for secondary organic aerosol formation. The optimal set of configurations with the best performance is the simulation with the Goddard shortwave and RRTM longwave radiation schemes, the Purdue Lin microphysics scheme, the Kain-Fritsch cumulus scheme, and a nudging coefficient of 1 × 10^-5 for water vapor mixing ratio. The emission sensitivity simulations show that the PM2.5 concentrations are most sensitive to nitrogen oxide (NOx) and SO2 emissions in northern China, but to NOx and ammonia (NH3) emissions in southern China. 30% NOx emission reductions may result in an increase in PM2.5 concentrations in northern China because of the NH3-rich and volatile organic compound (VOC)-limited conditions over this area. VOC emission reductions will lead to a decrease in PM2.5 concentrations in eastern China

  13. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
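
    The perturbation approach described above (varying each of the model inputs in turn and measuring the effect on a performance index) can be sketched as a one-at-a-time sensitivity scan. The watershed model below is a hypothetical stand-in for illustration only, not one of the models used in the study:

```python
import numpy as np

def simulate_streamflow(params):
    """Hypothetical stand-in for a watershed simulation model:
    returns a scalar performance index as a function of the inputs."""
    p = np.asarray(params, dtype=float)
    return float(p[0] ** 2 + 0.5 * p[1] + 0.1 * p[0] * p[2])

def oat_sensitivity(model, base, rel_step=0.01):
    """One-at-a-time sensitivity: relative change in model output per
    1% perturbation of each input, all others held at their base values.
    Assumes a nonzero baseline output."""
    base = np.asarray(base, dtype=float)
    y0 = model(base)
    sens = []
    for i in range(len(base)):
        pert = base.copy()
        pert[i] *= (1.0 + rel_step)
        sens.append((model(pert) - y0) / (abs(y0) * rel_step))
    return np.array(sens)

s = oat_sensitivity(simulate_streamflow, [2.0, 1.0, 3.0])
print(np.round(s, 3))  # input 0 dominates for this toy model
```

    Ranking the resulting elasticities shows which inputs the streamflow synthesis is most sensitive to, and hence where measurement accuracy matters most.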

  14. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is called 'partial' because adjustments are made for the linear effects of all the other input values in the calculation of the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
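
    As a hedged illustration of the PRCC idea described above, the sketch below rank-transforms the inputs and output, removes the linear (rank-scale) effect of the other inputs from both a given input and the output, and correlates the residuals. The toy model and data are invented for demonstration and are not IMM quantities:

```python
import numpy as np

def rank(a):
    """Ordinal rank transform of each column (no ties in continuous data)."""
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each column of X with y,
    adjusting for the linear effect of the other inputs on the rank scale."""
    Xr, yr = rank(X), rank(y.reshape(-1, 1)).ravel()
    n, k = Xr.shape
    out = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        # residuals after removing the linear rank effect of the other inputs
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(float(np.corrcoef(rx, ry)[0, 1]))
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
# strongly nonlinear in X0, weakly linear in X1, independent of X2
y = np.exp(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.normal(size=500)
coeffs = prcc(X, y)
```

    Because only ranks enter the calculation, the strongly nonlinear but monotone influence of the first input is still ranked highest, while the inert third input scores near zero.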

  15. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.

  16. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
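
    GRESS itself is a FORTRAN precompiler, but the core idea it automates — propagating derivatives through a calculation alongside the values — can be illustrated with a minimal forward-mode dual-number class. The model function here is a made-up stand-in, not code from the reports:

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dp together,
    mimicking the derivative propagation a tool like GRESS inserts."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(p):
    # a stand-in nonlinear model: y = 3*p^2 + 2*p
    return 3 * p * p + 2 * p

# seed der=1 on the parameter we differentiate with respect to
y = model(Dual(2.0, 1.0))
print(y.val, y.der)  # value 16.0, sensitivity dy/dp = 6p + 2 = 14.0
```

    A source-transforming tool does the same thing at compile time by inserting derivative statements next to each assignment, which is what makes the approach practical for large existing codes.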

  17. A sensitivity study to global desertification in cold and warm climates: results from the IPSL OAGCM model

    Energy Technology Data Exchange (ETDEWEB)

    Alkama, Ramdane [GAME/CNRM, CNRS/Meteo-France, Toulouse (France); Kageyama, Masa; Ramstein, Gilles [LSCE/IPSL UMR CEA-CNRS-UVSQ 8212, Gif sur Yvette (France)

    2012-04-15

    Many simulations have been devoted to studying the impact of global desertification on climate, but very few have quantified this impact in very different climate contexts. Here, the climatic impacts of large-scale global desertification in warm (2100 under the SRES A2 scenario forcing), modern and cold (Last Glacial Maximum, 21 thousand years ago) climates are assessed by using the IPSL OAGCM. For each climate, two simulations have been performed, one in which the continents are covered by modern vegetation, the other in which global vegetation is changed to desert, i.e. bare soil. The comparison between desert and present vegetation worlds reveals that the prevailing signal in terms of surface energy budget is dominated by the reduction of upward latent heat transfer. Replacing the vegetation by bare soil has similar impacts on surface air temperature south of 20°N in all three climatic contexts, with a warming over tropical forests and a slight cooling over semi-arid and arid areas, and these temperature changes are of the same order of magnitude. North of 20°N, the difference between the temperatures simulated with present day vegetation and in a desert world is mainly due to the change in net radiation related to the modulation of the snow albedo by vegetation, which is obviously absent in the desert world simulations. The enhanced albedo in the desert world simulations induces a large temperature decrease, especially during summer in the cold and modern climatic contexts, whereas the largest difference occurs during winter in the warm climate. This temperature difference requires a larger heat transport to the northern high latitudes. Part of this heat transport increase is achieved through an intensification of the Atlantic Meridional Overturning Circulation. This intensification reduces the sea-ice extent and causes a warming over the North Atlantic and Arctic oceans in the warm climate context. In contrast, the large cooling north of 20°N in both the modern

  18. A multi-factor model of panic disorder: results of a preliminary study integrating the role of perfectionism, stress, physiological anxiety and anxiety sensitivity

    Directory of Open Access Journals (Sweden)

    Cristina M. Wood

    2015-05-01

    Full Text Available Background: Panic disorder (PD) is a highly prevalent and disabling mental health problem associated with different factors including perfectionism, stress, physiological anxiety, and anxiety sensitivity regarding physical concerns; however, no studies have analyzed the joint relationship between these factors and PD in a multi-factor model using structural equation modeling. Method: A cross-sectional study was carried out to collect data on these factors and self-reported DSM-IV past-year PD symptoms in a large sample of the general population (N=936). Results: Perceived stress had a significant effect in increasing physiological anxiety, which in turn had an important association with physical concerns. Perfectionism and perceived stress had an indirect relation with past-year PD via the mediating role of physiological anxiety and physical concerns. Physical concerns, on one hand, seemed to mediate the impact between perfectionism and PD and, on the other, partially mediated the role between physiological anxiety and PD. Conclusions: Although there is considerable evidence on the association between each of these factors and PD, this model can be considered a broader and productive framework of research on the nature and treatment of PD.

  19. Human primary erythroid cells as a more sensitive alternative in vitro hematological model for nanotoxicity studies: Toxicological effects of silver nanoparticles.

    Science.gov (United States)

    Rujanapun, Narawadee; Aueviriyavit, Sasitorn; Boonrungsiman, Suwimon; Rosena, Apiwan; Phummiratch, Duangkamol; Riolueang, Suchada; Chalaow, Nipon; Viprakasit, Vip; Maniratanachote, Rawiwan

    2015-12-01

    Although immortalized cells established from cancerous cells have been widely used in nanotoxicology studies, the reliability of results derived from immortalized cells has been questioned because their characteristics differ from those of normal cells. In the present study, human primary erythroid cells in liquid culture were used as an in vitro hematological cell model to investigate the nanotoxicity of silver nanoparticles (AgNPs) and to compare the results with those from the immortalized hematological cell lines HL60 and K562. The AgNPs caused significant cytotoxic effects in the primary erythroid cells, as shown by the decreased cell viability and induction of intracellular ROS generation and apoptosis, whereas they showed much lower cytotoxic and apoptotic effects in HL60 and K562 cells and did not induce ROS generation in these cell lines. Scanning electron microscopy revealed an interaction of AgNPs with the cell membrane in both primary erythroid and immortalized cells. In addition, AgNPs induced hemolysis in the primary erythroid cells in a dose-dependent manner, and transmission electron microscopy analysis revealed that AgNPs damaged the erythroid cell membrane. Taken together, these results suggest that human primary erythroid cells in liquid culture are a more sensitive alternative in vitro hematological model for nanotoxicology studies. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Precipitates/Salts Model Sensitivity Calculation

    Energy Technology Data Exchange (ETDEWEB)

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  1. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were consistent and robust enough to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject compared to between-subject variability was low, and there was substantial strength of agreement between repeated induction-sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small-area' (1st quartile) and 'large-area' (4th quartile [75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.
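
    The intraclass correlation coefficient used above to quantify agreement between repeated induction-sessions can be computed from a one-way ANOVA decomposition. The sketch below uses simulated data with invented variance values, not the study's measurements:

```python
import numpy as np

def icc_oneway(Y):
    """One-way random-effects ICC(1,1): Y has shape (subjects, sessions).
    High values mean between-subject differences dominate session noise."""
    n, k = Y.shape
    grand = Y.mean()
    ms_between = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(3)
subject_effect = rng.normal(0.0, 3.0, size=(200, 1))              # stable per-subject areas
sessions = subject_effect + rng.normal(0.0, 1.0, size=(200, 2))   # two repeated sessions
r = icc_oneway(sessions)
# true ICC here is 9 / (9 + 1) = 0.9; the estimate should be close
```

    A high ICC across sessions is what justifies using a single induction-session pattern to assign subjects to responder categories.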

  2. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
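
    One standard family of global sensitivity methods that quantifies both single-parameter impacts and interactions is variance-based (Sobol) analysis. The sketch below estimates first-order indices by Monte Carlo for an invented additive toy model; it does not reproduce the paper's workflow or the CPM model:

```python
import numpy as np

def sobol_first_order(f, d, n=16384, rng=None):
    """Monte Carlo estimate of first-order Sobol indices for f with d
    inputs uniform on [0,1], using the pick-freeze (Saltelli-style)
    estimator S_i = E[f(B) * (f(A_B^i) - f(A))] / Var(f)."""
    rng = rng or np.random.default_rng(1)
    A, B = rng.uniform(size=(n, d)), rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i taken from B
        S.append(float(np.mean(fB * (f(ABi) - fA)) / var))
    return S

def toy_model(X):
    # additive toy model: X0 dominates, X2 is inert
    return 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]

S = sobol_first_order(toy_model, 3)
# analytic values: S0 = 16/17, S1 = 1/17, S2 = 0
```

    For a model with interactions, the difference between total-order and first-order indices (not computed here) is what quantifies the parameter interactions the abstract emphasizes.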

  3. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  4. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, to evaluate how sensitive the WAVEWATCH III model is to each of them, to determine how many of these parameters should be considered for further discussion, and to rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs based on field observations at the two buoys.

  5. RETRAN sensitivity studies of light water reactor transients. Final report

    International Nuclear Information System (INIS)

    Burrell, N.S.; Gose, G.C.; Harrison, J.F.; Sawtelle, G.R.

    1977-06-01

    This report presents the results of sensitivity studies performed using the RETRAN/RELAP4 transient analysis code to identify critical parameters and models which influence light water reactor transient predictions. Various plant transients for both boiling water reactors and pressurized water reactors are examined. These studies represent the first detailed evaluation of the RETRAN/RELAP4 transient code capability in predicting a variety of plant transient responses. The wide range of transients analyzed in conjunction with the parameter and modeling studies performed identify several sensitive areas as well as areas requiring future study and model development.

  6. Surface-Sensitive and Bulk Studies on the Complexation and Photosensitized Degradation of Catechol by Iron(III) as a Model for Multicomponent Aerosol Systems

    Science.gov (United States)

    Al-abadleh, H. A.; Tofan-Lazar, J.; Situm, A.; Ruffolo, J.; Slikboer, S.

    2013-12-01

    Surface water plays a crucial role in facilitating or inhibiting surface reactions in atmospheric aerosols. Little is known about the role of surface water in the complexation of organic molecules to transition metals in multicomponent aerosol systems. We will show results from real time diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) experiments for the in situ complexation of catechol to Fe(III) and its photosensitized degradation under dry and humid conditions. Catechol was chosen as a simple model for humic-like substances (HULIS) in aerosols and aged polyaromatic hydrocarbons (PAH). It has also been detected in secondary organic aerosols (SOA) formed from the reaction of hydroxyl radicals with benzene. Given the importance of the iron content in aerosols and its biogeochemistry, our studies were conducted using FeCl3. For comparison, these surface-sensitive studies were complemented with bulk aqueous ATR-FTIR, UV-vis, and HPLC measurements for structural, quantitative and qualitative information about complexes in the bulk, and potential degradation products. The implications of our studies on understanding interfacial and condensed phase chemistry relevant to multicomponent aerosols, water thin islands on buildings, and ocean surfaces containing transition metals will be discussed.

  7. Sensitivity of mesoscale modeling of smoke direct radiative effect to the emission inventory: a case study in northern sub-Saharan African region

    International Nuclear Information System (INIS)

    Zhang, Feng; Wang, Jun; Yang, Zhifeng; Ge, Cui; Ichoku, Charles; Hyer, Edward J; Da Silva, Arlindo; Su, Shenjian; Zhang, Xiaoyang; Kondragunta, Shobha; Kaiser, Johannes W; Wiedinmyer, Christine

    2014-01-01

    An ensemble approach is used to examine the sensitivity of smoke loading and smoke direct radiative effect in the atmosphere to uncertainties in smoke emission estimates. Seven different fire emission inventories are applied independently to the WRF-Chem model (v3.5) with the same model configuration (excluding dust and other emission sources) over the northern sub-Saharan African (NSSA) biomass-burning region. Results for November and February 2010 are analyzed, respectively representing the start and end of the biomass burning season in the study region. For February 2010, estimates of total smoke emission vary by a factor of 12, but only differences by factors of 7 or less are found in the simulated regional (15°W–42°E, 13°S–17°N) and monthly averages of column PM2.5 loading, surface PM2.5 concentration, aerosol optical depth (AOD), smoke radiative forcing at the top-of-atmosphere and at the surface, and air temperature at 2 m and at 700 hPa. The smaller differences in these simulated variables may reflect the atmospheric diffusion and deposition effects to dampen the large difference in smoke emissions that are highly concentrated in areas much smaller than the regional domain of the study. Indeed, at the local scale, large differences (up to a factor of 33) persist in simulated smoke-related variables and radiative effects including the semi-direct effect. Similar results are also found for November 2010, despite differences in meteorology and fire activity. Hence, biomass burning emission uncertainties have a large influence on the reliability of model simulations of atmospheric aerosol loading, transport, and radiative impacts, and this influence is largest at local and hourly-to-daily scales. Accurate quantification of smoke effects on regional climate and air quality requires further reduction of emission uncertainties, particularly for regions of high fire concentrations such as NSSA. (paper)

  8. Is Convection Sensitive to Model Vertical Resolution and Why?

    Science.gov (United States)

    Xie, S.; Lin, W.; Zhang, G. J.

    2017-12-01

    Model sensitivity to horizontal resolution has been studied extensively, whereas model sensitivity to vertical resolution is much less explored. In this study, we use the US Department of Energy (DOE)'s Accelerated Climate Modeling for Energy (ACME) atmosphere model to examine the sensitivity of clouds and precipitation to an increase in the model's vertical resolution. We attempt to understand what causes the change in behavior (if any) of convective processes represented by the unified shallow and turbulent scheme named CLUBB (Cloud Layers Unified by Binormals) and the Zhang-McFarlane deep convection scheme in ACME. A short-term hindcast approach is used to isolate parameterization issues from the large-scale circulation. The analysis emphasizes how the change of vertical resolution could affect precipitation partitioning between convective and grid scale as well as the vertical profiles of convection-related quantities such as temperature, humidity, clouds, convective heating and drying, and entrainment and detrainment. The goal is to provide physical insight into potential issues with model convective processes associated with the increase of model vertical resolution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight into design and modeling studies and into performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed.

  10. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

    The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people. The global economy has become more complex than ever before. Mark Buchanan [12] indicated that agent-based computer models could help prevent another financial crisis, and this view has been particularly influential in contributing insights. There are two reasons why a culture-sensitive agent in the financial market has become so important. Therefore, the aim of this article is to establish a culture-sensitive agent and forecast the process of change regarding herding behavior in the financial market. We based our study on Kirman's Ant Model [4,5] and Hofstede's national culture framework [11] to establish our culture-sensitive agent-based model. Kirman's Ant Model is well known and describes herding behavior in financial markets in terms of investors' expectations about the future. Hofstede's Culture's Consequences study surveyed IBM staff in 72 different countries to understand cultural differences. As a result, this paper focuses on one of the five dimensions of culture from Hofstede, individualism versus collectivism, and creates a culture-sensitive agent to predict the process of change regarding herding behavior in the financial market. To conclude, this study will be of importance in explaining herding behavior in terms of cultural factors, as well as in providing researchers with a clearer understanding of how the herding beliefs of people from different cultures relate to their financial market strategies.
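
    For reference, Kirman's recruitment dynamics can be simulated in a few lines. The transition probabilities below follow the standard formulation (spontaneous switching ε, recruitment damped by δ); the parameter values are illustrative, not those of the cited article:

```python
import numpy as np

def kirman_path(N=100, eps=0.002, delta=0.01, steps=20000, seed=0):
    """Kirman's ant/recruitment model: k of N agents hold opinion A.
    Each step one agent may switch, either spontaneously (prob eps) or
    by being recruited by a randomly met agent (damping factor delta)."""
    rng = np.random.default_rng(seed)
    k, path = N // 2, []
    for _ in range(steps):
        p_up = (1 - k / N) * (eps + (1 - delta) * k / (N - 1))
        p_down = (k / N) * (eps + (1 - delta) * (N - k) / (N - 1))
        u = rng.uniform()
        if u < p_up:
            k += 1
        elif u < p_up + p_down:
            k -= 1
        path.append(k / N)
    return np.array(path)

x = kirman_path()
# with small eps the share of A-agents spends long spells near 0 or 1
# (herding) rather than hovering around 0.5
```

    A culture-sensitive variant would make eps and delta depend on an agent's individualism-collectivism score, but that extension is the article's contribution and is not sketched here.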

  11. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya

    2017-10-03

    Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.

  12. INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.

    KAUST Repository

    Elkantassi, Soumaya; Kalligiannaki, Evangelia; Tempone, Raul

    2017-01-01

    Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.
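
    As a hedged sketch of the general approach (a parametrized SDE fitted by likelihood), the example below simulates an Ornstein-Uhlenbeck process, a common building block for forecast-error fluctuations, and recovers its parameters from the exact AR(1) transition density. It is a stand-in, not the authors' wind-power model:

```python
import numpy as np

# simulate dX = -theta*X dt + sigma dW exactly on a time grid
rng = np.random.default_rng(42)
theta, sigma, dt, n = 2.0, 0.5, 0.01, 200_000
a = np.exp(-theta * dt)                                   # AR(1) coefficient
s = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = a * x[t - 1] + s * rng.normal()

# MLE for the discretized OU reduces to an AR(1) regression;
# invert a_hat = exp(-theta*dt) to recover the SDE parameters
a_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - a_hat * x[:-1]
theta_hat = -np.log(a_hat) / dt
sigma_hat = resid.std() * np.sqrt(2 * theta_hat / (1 - a_hat ** 2))
```

    Taking time-correlated observations into account, as the abstract emphasizes, is exactly what the AR(1) transition density does here; fitting each observation independently would misestimate both parameters.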

  13. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close-to-linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR, which takes biological and physical processes into account simultaneously. © 2013.

  14. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over modeling, in site characterization planning to avoid over collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to reduce the uncertainty of the final results, is also discussed. 7 references, 2 figures

  15. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over modelling," in site characterization planning to avoid "over collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to reduce the uncertainty of the final results, is also discussed

  16. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
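    The response-surface idea described above can be sketched in a few lines: run the (expensive) model at sampled inputs, fit a simple polynomial surrogate by least squares, and read sensitivity coefficients off the surrogate. The sketch below is illustrative only; the `model` function and all parameter values are invented stand-ins, not the paper's debris-risk model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one expensive model run (e.g. a strike-probability
# evaluation) as a smooth function of two normalized input parameters.
def model(x1, x2):
    return 1.0 + 2.0 * x1 - 0.5 * x2 + 0.3 * x1 * x2

# Sample the model over the input domain, then fit the response surface
# y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 by ordinary least squares.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = model(X[:, 0], X[:, 1])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# coef[1] and coef[2] are the surrogate's sensitivity coefficients;
# coef[3] captures the interaction between the two inputs.
```

    Because the toy model lies exactly in the surrogate's basis, the fit recovers the true coefficients; for a real debris-risk code the surrogate is only a local approximation whose adequacy must be checked.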

  17. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models are valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate, and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  18. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  19. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    2008-02-01

    Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  20. Effect of Temperature-Sensitive Poloxamer Solution/Gel Material on Pericardial Adhesion Prevention: Supine Rabbit Model Study Mimicking Cardiac Surgery.

    Directory of Open Access Journals (Sweden)

    Hyun Kang

    We investigated the mobility of a temperature-sensitive poloxamer/Alginate/CaCl2 mixture (PACM) in relation to gravity and cardiac motion, and the efficacy of PACM in the prevention of pericardial adhesion in a supine rabbit model. A total of 50 rabbits were randomly divided into two groups according to materials applied after epicardial abrasion: PACM and dye mixture (group PD; n = 25) and saline as the control group (group CO; n = 25). In group PD, rabbits were maintained in a supine position with appropriate sedation, and the location of the PACM and dye mixture was assessed by CT scan at the immediate postoperative period and 12 hours after surgery. The grade of adhesions was evaluated macroscopically and microscopically two weeks after surgery. In group PD, enhancement was localized in the anterior pericardial space, where the PACM and dye mixture was applied, on immediate post-surgical CT scans. However, the volume of the enhancement at the anterior pericardial space was significantly decreased 12 hours later (P < .001). Two weeks after surgery, group PD had a significantly lower macroscopic adhesion score (P = .002) and fibrosis score (P = .018) than did group CO. Inflammation score and expression of anti-macrophage antibody in group PD were lower than those in group CO, although the differences were not significant. In a supine rabbit model study, the anti-adhesion effect was maintained at the area of PACM application, although PACM shifted with gravity and heart motion. For more potent pericardial adhesion prevention, further research and development on the maintenance of anti-adhesion material position are required.

  1. Development of the EM tomography system. Part 2. Sensitivity studies of anomalous body by model studies; EM tomography system no kaihatsu. 2. Model kaiseki ni yoru ijotai no kando chosa kekka

    Energy Technology Data Exchange (ETDEWEB)

    Kumekawa, Y; Miura, Y; Takasugi, S [GERD Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Arai, E [Metal Mining Agency of Japan, Tokyo (Japan)

    1997-05-27

    A model analysis was used to investigate the sensitivity of an electromagnetic tomography system to a resistivity anomalous body in a two-dimensional structure. The resistivity model handled a three-dimensional structure. The model was prepared as a pseudo two-dimensional model in which a low-resistivity anomalous body of 1 ohm-m, with a basic length of 1000 m in the Y-direction, was incorporated in a homogeneous medium of 100 ohm-m. As a result of the analysis, the following matters were elucidated: if a low-resistivity anomalous body is present in the shallow subsurface, its impact starts appearing from lower frequencies than when the anomalous body exists only at a greater depth; if a high-resistivity anomalous body exists, the detection sensitivity is lower than for the low-resistivity anomalous body, but the analysis would be possible by using the phase, because the phase shows a greater change; the source TxZ shows a change from lower frequencies than the source TxX, and the amount of change is greater, hence the detection sensitivity to an anomalous body may be said to be higher with the source TxZ; however, for an anomalous body in the shallow subsurface, the source TxX is more effective since it is less affected by structures at greater depth. 5 refs., 7 figs.

  2. Association between Gene Polymorphisms and Pain Sensitivity Assessed in a Multi-Modal Multi-Tissue Human Experimental Model - An Explorative Study

    DEFF Research Database (Denmark)

    Nielsen, Lecia Møller; Olesen, Anne Estrup; Sato, Hiroe

    2016-01-01

    The genetic influence on sensitivity to noxious stimuli (pain sensitivity) remains controversial and needs further investigation. In the present study, the possible influence of polymorphisms in three opioid receptor (OPRM, OPRD and OPRK) genes and the catechol-O-methyltransferase (COMT) gene...... on pain sensitivity in healthy participants was investigated. Catechol-O-methyltransferase has an indirect effect on the mu opioid receptor by changing its activity through an altered endogenous ligand effect. Blood samples for genetic analysis were withdrawn in a multi-modal and multi-tissue experimental......, electrical and thermal visceral stimulations. A cold pressor test was also conducted. DNA was available from 38 of 40 participants. Compared to non-carriers of the COMT rs4680A allele, carriers reported higher bone pressure pain tolerance threshold (i.e. less pain) by up to 23.8% (p

  3. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials.

  4. Sensitivity and uncertainty studies of the CRAC2 computer code

    International Nuclear Information System (INIS)

    Kocher, D.C.; Ward, R.C.; Killough, G.G.; Dunning, D.E. Jr.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.

    1985-05-01

    This report presents a study of the sensitivity of early fatalities, early injuries, latent cancer fatalities, and economic costs for hypothetical nuclear reactor accidents as predicted by the CRAC2 computer code (CRAC = Calculation of Reactor Accident Consequences) to uncertainties in selected models and parameters used in the code. The sources of uncertainty that were investigated in the CRAC2 sensitivity studies include (1) the model for plume rise, (2) the model for wet deposition, (3) the procedure for meteorological bin-sampling involving the selection of weather sequences that contain rain, (4) the dose conversion factors for inhalation as they are affected by uncertainties in the physical and chemical form of the released radionuclides, (5) the weathering half-time for external ground-surface exposure, and (6) the transfer coefficients for estimating exposures via terrestrial foodchain pathways. The sensitivity studies were performed for selected radionuclide releases, hourly meteorological data, land-use data, a fixed non-uniform population distribution, a single evacuation model, and various release heights and sensible heat rates. Two important general conclusions from the sensitivity and uncertainty studies are as follows: (1) The large effects on predicted early fatalities and early injuries that were observed in some of the sensitivity studies apparently are due in part to the presence of thresholds in the dose-response models. Thus, the observed sensitivities depend in part on the magnitude of the radionuclide releases. (2) Some of the effects on predicted early fatalities and early injuries that were observed in the sensitivity studies were comparable to effects that were due only to the selection of different sets of weather sequences in bin-sampling runs. 47 figs., 50 tabs

  5. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance used to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergistic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost
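    First-order (S_i) and total-effect (ST_i) Sobol' indices can be estimated with a plain Monte Carlo pick-and-freeze scheme. The sketch below (pure Python, illustrative only) uses the standard Saltelli/Jansen estimators on a toy multiplicative model whose analytic indices are S_i = 3/7 and ST_i = 4/7 for both inputs, so the gap ST_i - S_i exposes the interaction term.

```python
import random

def sobol_indices(f, d, n, seed=0):
    """Monte Carlo estimates of first-order (S_i) and total-effect (ST_i)
    Sobol' indices for a model f with d independent U(0,1) inputs,
    using the Saltelli (first-order) and Jansen (total-effect)
    pick-and-freeze estimators."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S, ST = [], []
    for i in range(d):
        # Evaluate f on A with column i replaced by B's column i.
        fABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        S.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / n / var)
        ST.append(sum((fA[j] - fABi[j]) ** 2 for j in range(n)) / (2 * n) / var)
    return S, ST

# Multiplicative toy model: the interaction makes ST_i exceed S_i
# (analytically S_i = 3/7 and ST_i = 4/7 for both inputs).
S, ST = sobol_indices(lambda x: x[0] * x[1], d=2, n=50000)
```

    The difference ST_i - S_i is exactly the variance share carried by interactions involving input i, which is what the 'total effect' index is designed to capture.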

  6. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  7. Regional climate model sensitivity to domain size

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)

    2009-05-15

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 x 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)

  8. The Sensitivity of Heavy Precipitation to Horizontal Resolution, Domain Size, and Rain Rate Assimilation: Case Studies with a Convection-Permitting Model

    Directory of Open Access Journals (Sweden)

    Xingbao Wang

    2016-01-01

    The Australian Community Climate and Earth-System Simulator (ACCESS) is used to test the sensitivity of heavy precipitation to various model configurations: horizontal resolution, domain size, rain rate assimilation, perturbed physics, and initial condition uncertainties, through a series of convection-permitting simulations of three heavy precipitation (greater than 200 mm day−1) cases in different synoptic backgrounds. The larger disparity of intensity histograms and rainfall fluctuation caused by different model configurations from their mean and/or control run indicates that heavier precipitation forecasts have larger uncertainty. A cross-verification exercise is used to quantify the impacts of different model parameters on heavy precipitation. The dispersion of skill scores with the control run used as "truth" shows that the impacts of the model resolution and domain size on the quantitative precipitation forecast are not less than those of perturbed physics and initial field uncertainties in these not intentionally selected heavy precipitation cases. The result indicates that model resolution and domain size should be considered as part of probabilistic precipitation forecasts and ensemble prediction system design, besides the model initial field uncertainty.

  9. Sensitivity and uncertainty studies of the CRAC2 computer code

    International Nuclear Information System (INIS)

    Kocher, D.C.; Ward, R.C.; Killough, G.G.; Dunning, D.E. Jr.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.

    1987-01-01

    The authors have studied the sensitivity of health impacts from nuclear reactor accidents, as predicted by the CRAC2 computer code, to the following sources of uncertainty: (1) the model for plume rise, (2) the model for wet deposition, (3) the meteorological bin-sampling procedure for selecting weather sequences with rain, (4) the dose conversion factors for inhalation as affected by uncertainties in the particle size of the carrier aerosol and the clearance rates of radionuclides from the respiratory tract, (5) the weathering half-time for external ground-surface exposure, and (6) the transfer coefficients for terrestrial foodchain pathways. Predicted health impacts usually showed little sensitivity to use of an alternative plume-rise model or a modified rain-bin structure in bin-sampling. Health impacts often were quite sensitive to use of an alternative wet-deposition model in single-trial runs with rain during plume passage, but were less sensitive to the model in bin-sampling runs. Uncertainties in the inhalation dose conversion factors had important effects on early injuries in single-trial runs. Latent cancer fatalities were moderately sensitive to uncertainties in the weathering half-time for ground-surface exposures, but showed little sensitivity to the transfer coefficients for terrestrial foodchain pathways. Sensitivities of CRAC2 predictions to uncertainties in the models and parameters also depended on the magnitude of the source term, and some of the effects on early health effects were comparable to those that were due only to selection of different sets of weather sequences in bin-sampling

  10. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the value of the new sensitivity indices in a model simplification setting.

  11. Regional climate model sensitivity to domain size

    Science.gov (United States)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.

  12. The Sensitivity of State Differential Game Vessel Traffic Model

    Directory of Open Access Journals (Sweden)

    Lisowski Józef

    2016-04-01

    The paper presents the application of the theory of deterministic sensitivity of control systems to the sensitivity analysis of game control systems of moving objects, such as ships, airplanes and cars. The sensitivity of a parametric model of the game ship control process in collision situations is presented. First-order and k-th order sensitivity functions of the parametric model of process control are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained through computer simulation in Matlab/Simulink software, are presented. Finally, proposals are given regarding the use of sensitivity analysis in the practical synthesis of a computer-aided navigator support system in potential collision situations.

  13. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
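    The deterministic route described above can be illustrated concretely: compute the sensitivity coefficients (partial derivatives, approximated here by central differences rather than an adjoint solve) and propagate input variances through a first-order Taylor expansion. A minimal sketch with an invented two-parameter model:

```python
def central_diff(f, x, i, h=1e-6):
    """Numerical partial derivative df/dx_i, a stand-in for the adjoint
    or analytic derivative used in the deterministic approach."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def taylor_variance(f, x0, sigmas):
    """First-order Taylor propagation: Var(y) ~= sum_i (df/dx_i)^2 * sigma_i^2,
    assuming independent inputs and near-linear behaviour around x0."""
    grads = [central_diff(f, x0, i) for i in range(len(x0))]
    return sum(g * g * s * s for g, s in zip(grads, sigmas)), grads

# Toy model y = x1^2 + 3*x2 evaluated at x0 = (2, 1) with input
# standard deviations (0.1, 0.2): analytically Var(y) = 16*0.01 + 9*0.04 = 0.52.
model = lambda x: x[0] ** 2 + 3.0 * x[1]
var_y, grads = taylor_variance(model, [2.0, 1.0], [0.1, 0.2])
```

    The squared-gradient terms show each input's contribution to the output variance, which is exactly the sensitivity-coefficient information the deterministic approach delivers; the statistical approach would instead estimate the same quantities from a response surface or simulation.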

  14. Modelling survival: exposure pattern, species sensitivity and uncertainty.

    Science.gov (United States)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G

    2016-07-06

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
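    A minimal sketch of the stochastic-death variant (GUTS-SD) conveys the structure GUTS builds on: scaled damage tracks the time-variable exposure through first-order toxicokinetics, and the hazard rate rises once damage exceeds a threshold. Parameter names follow common GUTS notation, but the values and exposure profile below are invented for illustration and are not taken from the paper.

```python
import math

def guts_sd_survival(conc, dt, kd, b, z, hb=0.0):
    """Minimal GUTS-SD sketch: scaled damage D follows the time-variable
    exposure conc via first-order toxicokinetics (rate kd), and the hazard
    rate grows linearly (slope b) once D exceeds the threshold z, on top of
    a background hazard hb. Returns survival probabilities over time
    (explicit Euler integration)."""
    D, H = 0.0, 0.0          # scaled damage and cumulative hazard
    surv = [1.0]
    for c in conc:
        D += dt * kd * (c - D)
        H += dt * (b * max(D - z, 0.0) + hb)
        surv.append(math.exp(-H))
    return surv

# A 2-day pulse followed by 4 days of recovery (hourly steps, dt in days):
# survival drops during and shortly after the pulse, then levels off once
# damage falls back below the threshold.
exposure = [5.0] * 48 + [0.0] * 96
s = guts_sd_survival(exposure, dt=1 / 24, kd=0.8, b=0.3, z=1.0)
```

    Because the hazard keeps acting while damage stays above the threshold, two exposure profiles with the same total dose can yield different survival, which is why GUTS predictions depend on the exposure pattern and not only on its magnitude.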

  15. Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling

    National Research Council Canada - National Science Library

    Sakhanenko, Nikita A; Luger, George F

    2008-01-01

    We propose a context-sensitive probabilistic modeling system (COSMOS) that reasons about a complex, dynamic environment through a series of applications of smaller, knowledge-focused models representing contextually relevant information...

  16. ASS and SULT2A1 are Novel and Sensitive Biomarkers of Acute Hepatic Injury-A Comparative Study in Animal Models.

    Science.gov (United States)

    Prima, Victor; Cao, Mengde; Svetlov, Stanislav I

    2013-01-10

    Liver and kidney damage associated with polytrauma, endotoxic shock/sepsis, and organ transplantation is among the leading causes of multiple organ failure. Development of novel sensitive biomarkers that detect early stages of liver and kidney injury is vital for effective diagnosis and treatment of these life-threatening conditions. Previously, we identified several hepatic proteins, including Argininosuccinate Synthase (ASS) and sulfotransferases, which were degraded in the liver and rapidly released into circulation during Ischemia/Reperfusion (I/R) injury. Here we compared the sensitivity and specificity of newly developed sandwich ELISA assays for ASS and the sulfotransferase isoform SULT2A1 with the standard clinical liver and kidney tests Alanine Aminotransferase (ALT) and Aspartate Transaminase (AST) in various pre-clinical models of acute injury. Our data suggest that ASS and SULT2A1 have superior characteristics for liver and kidney health assessment in endotoxemia, Ischemia/Reperfusion (I/R), and chemical and drug-induced liver injury, and may be of high potential value for clinical applications.

  17. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The tool model includes a new direct method for calculating first-order sensitivity coefficients in chemical kinetics using sparse matrix technology; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28 and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
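    The direct method amounts to integrating the model equation together with its auxiliary sensitivity equations. A minimal sketch for a single first-order reaction dy/dt = -k*y, whose sensitivity s = dy/dk obeys the coupled equation ds/dt = -k*s - y, is given below; a plain RK4 step stands in for the Gear-type stiff solver used by the package, and the rate constant is a made-up value.

```python
# Direct-method sensitivity sketch: integrate the model ODE dy/dt = -k*y
# together with its auxiliary sensitivity ODE ds/dt = -k*s - y
# (Jacobian of the model times s, plus the explicit parameter derivative).

def rhs(state, k):
    y, s = state
    return (-k * y, -k * s - y)

def rk4(state, k, dt):
    def add(a, b, c):
        return tuple(x + c * y for x, y in zip(a, b))
    k1 = rhs(state, k)
    k2 = rhs(add(state, k1, dt / 2), k)
    k3 = rhs(add(state, k2, dt / 2), k)
    k4 = rhs(add(state, k3, dt), k)
    return tuple(s0 + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s0, a, b, c, d in zip(state, k1, k2, k3, k4))

k, dt, t_end = 0.5, 0.001, 2.0
state = (1.0, 0.0)            # y(0) = 1, s(0) = 0
t = 0.0
while t < t_end - 1e-12:
    state = rk4(state, k, dt)
    t += dt
y, s = state                  # analytic: y = exp(-k*t), s = -t*exp(-k*t)
```

    The computed sensitivity can be checked against the analytic solution, which is what makes this toy problem a useful validation case for a direct-method implementation.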

  18. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach to sensitivity and uncertainty assessment for waste management studies has been implemented. First, general contribution analysis is done through a regular interpretation of inventory and impact...

  19. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.
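    For scalar inputs, the Sobol' indices mentioned above can be estimated directly by Monte Carlo (pick-freeze) sampling rather than through the joint GLM/GAM metamodel of the paper. The sketch below uses a hypothetical linear test function whose first-order indices are known analytically.

```python
# A minimal pick-freeze Monte Carlo estimator of Sobol' first-order indices.
import random

def sobol_first_order(f, dim, n, seed=0):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # AB_i: rows of B with column i replaced by the corresponding A value
        fABi = [f([a[j] if j == i else b[j] for j in range(dim)])
                for a, b in zip(A, B)]
        cov = sum(ya * yb for ya, yb in zip(fA, fABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# f = x1 + 2*x2 with independent U(0,1) inputs: S1 = 1/5, S2 = 4/5.
S = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2, n=50000)
```

    For expensive codes the same estimator would be run on the metamodel rather than on the code itself, which is exactly why the preliminary metamodeling step matters.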

  20. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

    Discussion on the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude in energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, and temporal and spatial scales go from fs to years and from nm’s to m’s and more, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, the data provision to the plasma modeling community is a “two-way road” as long as the accuracy of the data is considered, requiring close interactions of the AMO and plasma modeling communities.

  1. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of a nuclear power plant containment. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh, is assessed for the main stages of the life cycle of the containment. Although the modeling adjustments did not produce any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.

  2. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  3. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  4. Sensitivity studies on TIME2 Version 1.0

    International Nuclear Information System (INIS)

    1988-03-01

    The results of a sensitivity analysis of Version 1.0 of the TIME2 computer code to certain aspects of the input data set are presented. Parameters evaluated were: river dimensions, the density and grain size of sediment carried by the river, human intrusion data, sea level rise rate, erosion factors and meander modelling data. The sensitivity of the code to variation of single value parameters was evaluated by means of graphical comparisons. For parameters specified as probability density functions (pdf's), the Kolmogorov-Smirnov test was used. The study assists in the specification of data for TIME2 by identifying parameters to which the models used are particularly sensitive and also suggests that some input currently specified as pdf's could be replaced with single values without affecting the quality of the results obtained. (author)

  5. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.

  6. SOX - Towards the detection of sterile neutrinos in Borexino. Beta spectrum modeling, Monte Carlo development and sensitivity studies for the sterile neutrino search in Borexino

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Mikko

    2016-12-15

    Several experiments have reported anomalies in the neutrino sector which might be explained by the existence of a fourth (sterile) neutrino with a squared mass difference of about 1 eV{sup 2} to the other three active neutrinos. The SOX project is part of the experimental program of the Borexino experiment and seeks clarification of the observed anomalies. For that purpose an artificial antineutrino source ({sup 144}Ce-{sup 144}Pr) and possibly a neutrino source ({sup 51}Cr) will be deployed underneath the large low-background detector Borexino. The detector provides both energy and vertex resolution to observe a possible oscillation signature within the detector volume. The calculation of the antineutrino spectrum is based on existing theoretical models and was performed within this thesis. The modeling includes several sub-leading corrections, in particular the finite size of the nucleus, screening of the atomic electrons and radiative effects. Related to this work, dedicated Monte Carlo generators have been developed to simulate the inverse beta decay reaction and (anti)neutrino elastic scattering off electrons. Based on a profile likelihood analysis, the sensitivity of the SOX project to the sterile neutrino search was evaluated. The results obtained from this analysis confirm that the currently allowed parameter regions for sterile neutrinos can be tested at 95% confidence level. Finally, an alternative concept for the sterile neutrino search is presented which is based on a cyclotron and a beryllium target near Borexino (Borexino+IsoDAR).

  7. SOX - Towards the detection of sterile neutrinos in Borexino. Beta spectrum modeling, Monte Carlo development and sensitivity studies for the sterile neutrino search in Borexino

    International Nuclear Information System (INIS)

    Meyer, Mikko

    2016-12-01

    Several experiments have reported anomalies in the neutrino sector which might be explained by the existence of a fourth (sterile) neutrino with a squared mass difference of about 1 eV² to the other three active neutrinos. The SOX project is part of the experimental program of the Borexino experiment and seeks clarification of the observed anomalies. For that purpose an artificial antineutrino source (¹⁴⁴Ce-¹⁴⁴Pr) and possibly a neutrino source (⁵¹Cr) will be deployed underneath the large low-background detector Borexino. The detector provides both energy and vertex resolution to observe a possible oscillation signature within the detector volume. The calculation of the antineutrino spectrum is based on existing theoretical models and was performed within this thesis. The modeling includes several sub-leading corrections, in particular the finite size of the nucleus, screening of the atomic electrons and radiative effects. Related to this work, dedicated Monte Carlo generators have been developed to simulate the inverse beta decay reaction and (anti)neutrino elastic scattering off electrons. Based on a profile likelihood analysis, the sensitivity of the SOX project to the sterile neutrino search was evaluated. The results obtained from this analysis confirm that the currently allowed parameter regions for sterile neutrinos can be tested at 95% confidence level. Finally, an alternative concept for the sterile neutrino search is presented which is based on a cyclotron and a beryllium target near Borexino (Borexino+IsoDAR).

  8. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
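    To make the distinction concrete: under Dalton's rule each species is assumed to fill the whole volume and partial pressures add, while under Amagat's rule partial volumes add at a common pressure; for non-ideal gases the two predictions differ. The sketch below contrasts the two rules for a He/SF6-like binary described by the van der Waals equation of state. The constants, volume and temperature are illustrative assumptions, not the CTH simulation setup.

```python
# Dalton vs. Amagat mixing rules for a van der Waals binary mixture.
R = 8.314  # J/(mol K)
# approximate van der Waals constants (a in Pa m^6 mol^-2, b in m^3 mol^-1)
GASES = {"He": (0.00346, 2.38e-5), "SF6": (0.786, 8.79e-5)}

def vdw_pressure(n, V, T, a, b):
    return n * R * T / (V - n * b) - a * n * n / (V * V)

def vdw_volume(n, P, T, a, b):
    # invert the EOS for the gas-branch volume by bisection
    lo, hi = 1.001 * n * b, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if vdw_pressure(n, mid, T, a, b) > P:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dalton_pressure(moles, V, T):
    # each species fills the whole volume; partial pressures add
    return sum(vdw_pressure(n, V, T, *GASES[g]) for g, n in moles.items())

def amagat_pressure(moles, V, T):
    # find the common pressure at which partial volumes sum to V
    lo, hi = 1e3, 1e8
    for _ in range(100):
        P = 0.5 * (lo + hi)
        if sum(vdw_volume(n, P, T, *GASES[g]) for g, n in moles.items()) > V:
            lo = P
        else:
            hi = P
    return 0.5 * (lo + hi)

mix = {"He": 0.5, "SF6": 0.5}   # 1:1 molar mixture, as in the experiments
P_dalton = dalton_pressure(mix, V=0.01, T=300.0)
P_amagat = amagat_pressure(mix, V=0.01, T=300.0)
```

    Even at this modest density the two rules give slightly different total pressures, and the gap widens as the mixture is compressed, which is the behaviour behind the growing discrepancies at higher shock speeds.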

  9. Healthy volunteers can be phenotyped using cutaneous sensitization pain models

    DEFF Research Database (Denmark)

    Werner, Mads U; Petersen, Karin; Rowbotham, Michael C

    2013-01-01

    Human experimental pain models leading to development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were sufficiently consistent and robust to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models.

  10. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  11. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  12. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.

  13. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  14. Sensitivity analysis with regard to variations of physical forcing including two possible future hydrographic regimes for the Oeregrundsgrepen. A follow-up baroclinic 3D-model study

    International Nuclear Information System (INIS)

    Engqvist, A.; Andrejev, O.

    2000-02-01

    A sensitivity analysis with regard to variations of physical forcing has been performed using a 3D baroclinic model of the Oeregrundsgrepen area for a whole-year period with data pertaining to 1992. The results of these variations are compared to a nominal run with unaltered physical forcing. This nominal simulation is based on the experience gained in an earlier whole-year modelling of the same area; the difference is mainly that the present nominal simulation is run with identical parameters for the whole year. From a computational economy point of view it has been necessary to vary the time step between the month-long simulation periods. For all simulations with varied forcing, the same time step as for the nominal run has been used. The analysis also comprises the water turnover of a hypsographically defined subsection, the Bio Model area, located above the SFR depository. The external forcing factors that have been varied are the following (with their relative impact on the volume average of the retention time of the Bio Model area over one year given in parentheses): atmospheric temperature increased/reduced by 2.5 deg C (-0.1% resp. +0.6%), local freshwater discharge rate doubled/halved (-1.6% resp. +0.01%), salinity range at the border increased/reduced by a factor of 2 (-0.84% resp. 0.00%), wind speed forcing reduced by 10% (+8.6%). The results of these simulations, at least the yearly averages, permit a reasonably direct physical explanation, while the detailed dynamics is for natural reasons more intricate. Two additional full-year simulations of possible future hydrographic regimes have also been performed. The first mimics a hypothetical situation with permanent ice cover, which increases the average retention time by 87%. The second regime entails the future hypsography with its anticipated shoreline displacement by an 11 m land-rise in the year 4000 AD, which also considerably increases the average retention times for the two remaining layers of the

  15. Sensitivity of terrestrial ecosystems to elevated atmospheric CO{sub 2}: Comparisons of model simulation studies to CO{sub 2} effect

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Y. [Marine Biological Lab., Woods Hole, MA (United States)

    1995-06-01

    In the context of a project to compare terrestrial ecosystem models, the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP), we have analyzed how three biogeochemistry models link plant growth to doubled atmospheric CO{sub 2}. A common set of input data was used to drive the three biogeochemistry models, BIOME-BGC, CENTURY and TEM. For the continental United States the simulation results show that with doubled CO{sub 2}, NPP increased by 8.7%, 5.0% and 10.8% for TEM, CENTURY and BIOME-BGC, respectively. At the biome level the range of NPP estimates varied considerably among models. TEM-simulated enhancement of NPP ranged from 2% to 28%; CENTURY, from 2% to 9%; and BIOME-BGC, from 4% to 27%. A transect analysis across several biomes along latitude 41.5 N shows that the TEM-simulated CO{sub 2} enhancement of NPP ranged from 0% to 22%; CENTURY, from 1% to 10%; and BIOME-BGC, from 1% to 63%. In this study, we have investigated the underlying mechanisms of the three models to reveal how increased CO{sub 2} affects photosynthesis rate, water-use efficiency and nutrient cycles. The relative importance of these mechanisms in each of the three biogeochemistry models will be discussed.

  16. Sensitivity study of heavy precipitation in Limited Area Model climate simulations: influence of the size of the domain and the use of the spectral nudging technique

    Science.gov (United States)

    Colin, Jeanne; Déqué, Michel; Radu, Raluca; Somot, Samuel

    2010-10-01

    We assess the impact of two sources of uncertainties in a limited area model (LAM) on the representation of intense precipitation: the size of the domain of integration and the use of the spectral nudging technique (driving of the large-scale within the domain of integration). We work in a perfect-model approach where the LAM is driven by a general circulation model (GCM) run at the same resolution and sharing the same physics and dynamics as the LAM. A set of three 50 km resolution simulations run over Western Europe with the LAM ALADIN-Climate and the GCM ARPEGE-Climate are performed to address this issue. Results are consistent with previous studies regarding the seasonal-mean fields. Furthermore, they show that neither the use of the spectral nudging nor the choice of a small domain are detrimental to the modelling of heavy precipitation in the present experiment.

  17. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows that the model captures reasonably well the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  18. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information consists of creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides efficiently estimating the parameter values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.

  19. Models for patients' recruitment in clinical trials and sensitivity analysis.

    Science.gov (United States)

    Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas

    2012-07-20

    Deciding on the feasibility of a clinical trial and estimating the duration of patients' recruitment are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who investigate modelling of the enrolment period using Gamma-Poisson processes; this allows the development of statistical tools that can help the manager of the clinical trial to answer these questions and thus to plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0, t(1)] allow the parameters of the model to be calibrated; these are then used to make predictions about what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and to suggest corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the parameter estimates and compare the models on a set of real case data. The comparison covers various criteria: the expected recruitment duration, the quality of fit to the data, and the sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question in the setting of our data set, because these dates are not known; for this discussion, we assume them to be uniformly distributed. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
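A minimal Monte Carlo sketch of a Gamma-Poisson recruitment model follows. All rates, targets and horizons below are invented for illustration, and the calibration step on [0, t(1)] data is not shown:

```python
import math
import random

def prob_on_time(n_centres, target, horizon, alpha, beta, n_sims=5000, seed=1):
    """Estimate P(total recruitment >= target by `horizon`) under a
    Gamma-Poisson model: each centre's rate is Gamma(alpha, beta)
    (shape/scale) and its count over the horizon is Poisson(rate*horizon)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method; adequate for the moderate means used here.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    hits = 0
    for _ in range(n_sims):
        total = 0
        for _ in range(n_centres):
            rate = rng.gammavariate(alpha, beta)
            total += poisson(rate * horizon)
        if total >= target:
            hits += 1
    return hits / n_sims

# Invented scenario: 20 centres, Gamma(2, 1) rates, 5 time units remaining.
p_easy = prob_on_time(20, target=150, horizon=5, alpha=2.0, beta=1.0)
p_hard = prob_on_time(20, target=260, horizon=5, alpha=2.0, beta=1.0)
```

Varying `n_centres` in such a simulation is the kind of "how many centres have to be opened" what-if the abstract mentions.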

  20. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques in existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques in already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both the VMS and UNIX operating systems.

  1. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques in existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques in already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both the VMS and UNIX operating systems. (author). 9 refs, 1 tab

  2. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference regarding his/her desired ratio of inputs to outcomes. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and both self- and peer-reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  3. A context-sensitive trust model for online social networking

    CSIR Research Space (South Africa)

    Danny, MN

    2016-11-01

    ... of privacy attacks. In the quest to address this problem, this paper proposes a context-sensitive trust model. The proposed trust model was designed using fuzzy logic theory and implemented using MATLAB. Contrary to existing trust models, the context...

  4. A model to estimate insulin sensitivity in dairy cows

    OpenAIRE

    Holtenius, Paul; Holtenius, Kjell

    2007-01-01

    Impairment of the insulin regulation of energy metabolism is considered to be a key etiologic component of metabolic disturbances. Methods for studies of insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an i...

  5. Uncertainty and sensitivity analysis of environmental transport models

    International Nuclear Information System (INIS)

    Margulies, T.S.; Lancaster, L.E.

    1985-01-01

    An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground-deposited material and the relative contributions of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site), step-wise regression analyses were performed on transformations of the simulated spatial concentration patterns. 27 references, 2 figures, 3 tables
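Latin hypercube sampling of the kind used to generate the factor layout can be sketched as follows; the two parameter ranges are illustrative placeholders, not the values used in the CRAC-2 study:

```python
import random

def latin_hypercube(n_samples, ranges, seed=0):
    """Generate a Latin hypercube sample: each variable's range is split
    into n_samples equal strata, each stratum is sampled exactly once,
    and the stratum order is randomly permuted per variable."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in ranges:
        # one point per stratum, then shuffle the stratum order
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        dims.append(pts)
    return list(zip(*dims))  # n_samples tuples, one per design point

# Example: 8 design points over a dry-deposition velocity range [0.001, 0.01]
# m/s and a mixing-height range [100, 2000] m (ranges are illustrative only).
design = latin_hypercube(8, [(0.001, 0.01), (100.0, 2000.0)])
```

Each variable's strata are each hit exactly once, which is what distinguishes this layout from plain random sampling.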

  6. PDF sensitivity studies from ATLAS measurements

    CERN Document Server

    Hanna, Remie; The ATLAS collaboration

    2015-01-01

    Several measurements performed by the ATLAS collaboration are either useful to constrain the proton structure or are affected by its associated uncertainties. The strange-quark density is rather poorly known at low x. Measurements of the W+c production and the inclusive W and Z differential cross sections are found to constrain the strange-quark density. Drell-Yan cross section measurements performed above and below the Z peak region have a different sensitivity to parton flavour, parton momentum fraction x and scale Q compared to measurements on the Z peak and can also be used to constrain the photon content of the proton. Measurements of the inclusive jet and photon cross sections are standard candles and can be useful to constrain the medium and high x gluon densities. Precision electroweak studies performed by ATLAS can be limited by the current knowledge on the proton structure. Among those are the measurement of the effective weak mixing angle and the mass of the W boson. Dedicated PDF studies were perf...

  7. Very large fMRI study using the IMAGEN database: Sensitivity-specificity and population effect modeling in relation to the underlying anatomy

    International Nuclear Information System (INIS)

    Thyreau, Benjamin; Schwartz, Yannick; Thirion, Bertrand; Frouin, Vincent; Loth, Eva; Conrod, Patricia J.; Schumann, Gunter; Vollstadt-Klein, Sabine; Paus, Tomas; Artiges, Eric; Whelan, Robert; Poline, Jean-Baptiste

    2012-01-01

    In this paper we investigate the use of classical fMRI Random Effect (RFX) group statistics when analyzing a very large cohort, and the possible improvement brought by anatomical information. Using 1326 subjects from the IMAGEN study, we first give a global picture of the evolution of the group-effect t-value from a simple face-watching contrast with increasing cohort size. We obtain a wide activated pattern, far from being limited to the reasonably expected brain areas, illustrating the difference between statistical significance and practical significance. This motivates us to inject tissue-probability information into the group estimation: we model the BOLD contrast using a matter-weighted mixture of Gaussians and compare it to the common, single-Gaussian model. In both cases, the model parameters are estimated per voxel for one subgroup, and the likelihood of both models is computed on a second, separate subgroup to reflect model generalization capacity. Various group sizes are tested, and significance is assessed using a 10-fold cross-validation scheme. We conclude that adding matter information consistently improves the quantitative analysis of BOLD responses in some areas of the brain, particularly those where accurate inter-subject registration remains challenging. (authors)

  8. Unique proteomic signature for radiation sensitive patients; a comparative study between normo-sensitive and radiation sensitive breast cancer patients

    Energy Technology Data Exchange (ETDEWEB)

    Skiöld, Sara [Center for Radiation Protection Research, Department of Molecular Biosciences, The Wenner-Gren Institute, Stockholm University, Stockholm (Sweden); Azimzadeh, Omid [Institute of Radiation Biology, German Research Center for Environmental Health, Helmholtz Zentrum München (Germany); Merl-Pham, Juliane [Research Unit Protein Science, German Research Center for Environmental Health, Helmholtz Zentrum München, Neuherberg (Germany); Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet [Division of Radiotherapy, Radiumhemmet, Karolinska University Hospital, Stockholm (Sweden); Tapio, Soile [Institute of Radiation Biology, German Research Center for Environmental Health, Helmholtz Zentrum München (Germany); Harms-Ringdahl, Mats [Center for Radiation Protection Research, Department of Molecular Biosciences, The Wenner-Gren Institute, Stockholm University, Stockholm (Sweden); Haghdoost, Siamak, E-mail: Siamak.Haghdoost@su.se [Center for Radiation Protection Research, Department of Molecular Biosciences, The Wenner-Gren Institute, Stockholm University, Stockholm (Sweden)

    2015-06-15

    Highlights: • Unique protein expression profiles were found that separate radiation-sensitive from normo-sensitive breast cancer patients. • Oxidative stress response, coagulation properties and acute phase response are suggested to be hallmarks of radiation sensitivity. - Abstract: Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was made in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37 °C. The leukocytes of the two groups were isolated, pooled and protein expression profiles were investigated using the isotope-coded protein labeling (ICPL) method. First, leukocytes from the in vitro irradiated whole blood from normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy.

  9. Unique proteomic signature for radiation sensitive patients; a comparative study between normo-sensitive and radiation sensitive breast cancer patients

    International Nuclear Information System (INIS)

    Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak

    2015-01-01

    Highlights: • Unique protein expression profiles were found that separate radiation-sensitive from normo-sensitive breast cancer patients. • Oxidative stress response, coagulation properties and acute phase response are suggested to be hallmarks of radiation sensitivity. - Abstract: Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was made in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37 °C. The leukocytes of the two groups were isolated, pooled and protein expression profiles were investigated using the isotope-coded protein labeling (ICPL) method. First, leukocytes from the in vitro irradiated whole blood from normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy.

  10. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying those parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
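The FRI-based probabilistic calculation can be caricatured in a few lines. The normal distributions and numbers below are hypothetical, chosen only to show how parameter distributions propagate to a fracture probability with an associated uncertainty:

```python
import random

def fracture_probability(load_mean, load_sd, strength_mean, strength_sd,
                         n=20000, seed=42):
    """Toy sketch in the spirit of the BFxRM: draw applied load and bone
    failure load from (hypothetical) normal distributions, form the
    fracture risk index FRI = load / strength, and report the fraction of
    draws with FRI >= 1 together with its binomial standard error."""
    rng = random.Random(seed)
    fractures = 0
    for _ in range(n):
        load = rng.gauss(load_mean, load_sd)
        strength = max(rng.gauss(strength_mean, strength_sd), 1e-9)
        if load / strength >= 1.0:
            fractures += 1
    p = fractures / n
    se = (p * (1.0 - p) / n) ** 0.5
    return p, se

# Invented numbers, for illustration only (loads/strengths in arbitrary units):
p_base, se_base = fracture_probability(1000.0, 300.0, 2000.0, 400.0)
p_high, _ = fracture_probability(1800.0, 300.0, 2000.0, 400.0)  # heavier loading
```

A parameter sensitivity study of the kind described would then perturb each input distribution in turn and compare the resulting shifts in `p` against `se`.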

  11. Blade row dynamic digital compression program. Volume 2: J85 circumferential distortion redistribution model, effect of Stator characteristics, and stage characteristics sensitivity study

    Science.gov (United States)

    Tesch, W. A.; Steenken, W. G.

    1978-01-01

    The results of dynamic digital blade row compressor model studies of a J85-13 engine are reported. The initial portion of the study was concerned with the calculation of the circumferential redistribution effects in the blade-free volumes forward and aft of the compression component. Although blade-free redistribution effects were estimated, no significant improvement over the parallel-compressor type solution in the prediction of total-pressure inlet distortion stability limit was obtained for the J85-13 engine. Further analysis was directed to identifying the rotor dynamic response to spatial circumferential distortions. Inclusion of the rotor dynamic response led to a considerable gain in the ability of the model to match the test data. The impact of variable stator loss on the prediction of the stability limit was evaluated. An assessment of measurement error on the derivation of the stage characteristics and predicted stability limit of the compressor was also performed.

  12. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
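The Monte Carlo propagation plus partial-correlation sensitivity procedure can be sketched as follows, using an invented two-parameter stand-in for the PATHWAY model:

```python
import random
import statistics

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def partial_corr(x, y, z):
    """Correlation of x with y while controlling for z (3-variable form)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5

# Invented two-parameter "transport model": the output depends strongly on
# transfer coefficient k1 and weakly on k2. The same random draws serve
# both the uncertainty analysis and the correlation-based sensitivity.
rng = random.Random(7)
k1 = [rng.uniform(0.5, 1.5) for _ in range(1000)]
k2 = [rng.uniform(0.5, 1.5) for _ in range(1000)]
out = [5.0 * a + 0.2 * b + rng.gauss(0.0, 0.1) for a, b in zip(k1, k2)]

sens_k1 = partial_corr(k1, out, k2)   # close to 1: dominant parameter
sens_k2 = partial_corr(k2, out, k1)   # much smaller
```

Reusing the uncertainty-analysis samples for the correlation analysis, as done here, mirrors the procedure described in the abstract.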

  13. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

    This study proposes a mathematical model of Anthroponotic visceral leishmaniasis epidemics with a saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number [Formula: see text] is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number [Formula: see text], four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. A disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
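Sensitivity analysis of a reproduction number is commonly summarized by the normalized forward sensitivity index (dR0/dp)(p/R0). The R0 expression and parameter values below are hypothetical placeholders, not the model's actual next-generation result:

```python
def r0(params):
    """Hypothetical reproduction-number expression, used only to illustrate
    the sensitivity indices; the paper's actual R0 comes from its
    next-generation matrix."""
    return (params["b_h"] * params["b_v"]
            / (params["mu_v"] * (params["mu_h"] + params["delta"])))

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (dR0/dp) * (p / R0), computed
    with a central finite difference on a relative perturbation."""
    p = dict(params)
    v = params[name]
    p[name] = v * (1.0 + h)
    up = f(p)
    p[name] = v * (1.0 - h)
    down = f(p)
    deriv = (up - down) / (2.0 * v * h)
    return deriv * v / f(params)

base = {"b_h": 0.3, "b_v": 0.2, "mu_v": 0.1, "mu_h": 0.02, "delta": 0.05}
indices = {k: sensitivity_index(r0, base, k) for k in base}
# e.g. indices["b_h"] is +1 exactly: a 1% rise in b_h raises R0 by 1%.
```

Ranking parameters by |index| is what singles out the transmission terms as the natural targets for control strategies.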

  14. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

    Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally in eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit forms. • Parameter sensitivity is discussed for different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed using a one-dimensional (1D) modeling approach. The dynamic effects of double-layer capacitance on the electrochemical domain and of thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time-constant expressions for the different dynamic phenomena in the SOFC have been given and discussed in detail. A parameter sensitivity study has also been performed and discussed using the statistical Multi-Parameter Sensitivity Analysis (MPSA) method, in order to investigate the impact of parameters on the modeling accuracy.

  15. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine
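As a sketch of the logistic-regression side of such an integrated approach, the following trains a plain-Python classifier on synthetic stand-ins for assay readouts. The feature distributions, sample size and resulting accuracy are entirely invented; the real study used 72 curated training substances:

```python
import math
import random

def train_logistic(X, y, lr=1.0, epochs=3000):
    """Plain-Python logistic regression fitted by batch gradient descent."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# Synthetic stand-ins for (DPRA depletion, h-CLAT response, log P):
rng = random.Random(3)
X, y = [], []
for _ in range(80):
    sensitizer = rng.random() < 0.5
    dpra = rng.gauss(0.7 if sensitizer else 0.2, 0.15)
    hclat = rng.gauss(0.6 if sensitizer else 0.3, 0.15)
    logp = rng.gauss(2.0, 1.0)          # deliberately uninformative here
    X.append([dpra, hclat, logp])
    y.append(1 if sensitizer else 0)

w, b = train_logistic(X, y)
accuracy = sum((predict(w, b, xi) > 0.5) == (yi == 1)
               for xi, yi in zip(X, y)) / len(X)
```

In practice, as in the study, performance would be judged on a held-out external set rather than on the training data used here.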

  16. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    Science.gov (United States)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes can differ considerably between slip models, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nankai, Tonankai and Tokai are insignificant, at approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.

  17. Do lateral boundary condition update frequency and the resolution of the boundary data affect the regional model COSMO-CLM? A sensitivity study.

    Science.gov (United States)

    Pankatz, K.; Kerkweg, A.

    2014-12-01

    The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on global and regional scales. In regional climate modelling it is common to update the lateral boundary conditions (LBC) of the regional model every six hours, mainly because reference data sets like ERA are only available every six hours. Additionally, for offline coupling procedures it would be too costly to store LBC data at higher temporal resolution for climate simulations. Theoretically, however, the coupling frequency could be as high as the time step of the driving model. Meanwhile, it is unclear whether a more frequent update of the LBC has a significant effect on the climate in the domain of the regional model (RCM). This study uses the RCM COSMO-CLM/MESSy (Kerkweg and Jöckel, 2012) to couple COSMO-CLM offline to the GCM ECHAM5. The first part examines a 30-year time-slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, some statistically significant though, in 2 m temperature, sea level pressure and precipitation. This part of the study also assesses parameters linked to cyclone activity, which is affected by the LBC update frequency: differences in track density and strength are found when comparing the simulations. The second part examines the quality of decadal hindcasts of the decade 2001-2010 when the horizontal resolution of the driving model (namely T42, T63, T85, T106), from which the LBC are calculated, is altered. Two sets of simulations are evaluated.
For the first set of simulations, the GCM simulations are performed at different resolutions using the same boundary conditions for GHGs and SSTs, thus

  18. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSMs, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
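One generic way to reuse original code calculations directly, in the spirit of avoiding a response surface, is likelihood-ratio reweighting of stored Monte Carlo runs; this is an illustrative technique under invented inputs, not necessarily the paper's own method:

```python
import math
import random

# Stored runs of an (invented) "code": inputs drawn from N(0, 1),
# outputs y = x**2, saved once and never rerun.
rng = random.Random(11)
stored_x = [rng.gauss(0.0, 1.0) for _ in range(50000)]
stored_y = [x * x for x in stored_x]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def reweighted_mean(mu_new, sigma_new):
    """Mean output under a changed input distribution, obtained by
    reweighting the stored runs with the density ratio (self-normalized
    importance weights) instead of rerunning the code."""
    w = [normal_pdf(x, mu_new, sigma_new) / normal_pdf(x, 0.0, 1.0)
         for x in stored_x]
    return sum(wi * yi for wi, yi in zip(w, stored_y)) / sum(w)

m_base = reweighted_mean(0.0, 1.0)   # E[x^2] under N(0, 1) is 1.0
m_shift = reweighted_mean(0.5, 1.0)  # E[x^2] under N(0.5, 1) is 1.25
```

The sensitivity of an output statistic to an input-distribution assumption is then just the difference between reweighted estimates, with no new code runs.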

  19. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
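    The single-adjoint-solve idea can be illustrated on a toy steady-state model (a minimal sketch, not ADGEN itself; the 2x2 system and its parameter dependence are invented for illustration): for A(p)u = b and response R = c^T u, one adjoint solve A^T lam = c yields every sensitivity dR/dp_k as -lam^T (dA/dp_k) u, regardless of how many parameters there are.

```python
import numpy as np

# Toy steady-state model: A(p) u = b, response R = c^T u.
# Forward sensitivity would need one extra solve per parameter;
# the adjoint method obtains all dR/dp_k from a single solve of A^T lam = c.

def solve_response_and_sensitivities(p):
    # A depends on two parameters p[0], p[1] (hypothetical toy model)
    A = np.array([[2.0 + p[0], -1.0],
                  [-1.0, 2.0 + p[1]]])
    b = np.array([1.0, 0.0])
    c = np.array([0.0, 1.0])

    u = np.linalg.solve(A, b)          # forward solve
    lam = np.linalg.solve(A.T, c)      # single adjoint solve
    R = c @ u

    # dR/dp_k = -lam^T (dA/dp_k) u   (b and c independent of p here)
    dA0 = np.array([[1.0, 0.0], [0.0, 0.0]])
    dA1 = np.array([[0.0, 0.0], [0.0, 1.0]])
    grads = [-lam @ (dA @ u) for dA in (dA0, dA1)]
    return R, np.array(grads)

R, g = solve_response_and_sensitivities(np.array([0.5, 0.25]))
```

With many parameters the matrices dA/dp_k are exactly the "very large numbers of partial derivatives" the abstract mentions; a system like ADGEN automates their generation from the model code.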

  20. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  1. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
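    A variance-based analysis of this kind can be sketched with the pick-freeze (Sobol/Saltelli) estimator. The sketch below uses the standard Ishigami benchmark function as a stand-in for the actual sea ice model, so the function and sample sizes are illustrative only.

```python
import numpy as np

# First-order Sobol indices via the pick-freeze estimator.
# The Ishigami function is a common analytic benchmark whose indices are
# known (S1 ~ 0.314, S2 ~ 0.442, S3 = 0); it stands in for the real model.

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap in column i from B
    fABi = ishigami(ABi)
    # Saltelli (2010) estimator for the first-order index S_i
    S.append(np.mean(fB * (fABi - fA)) / var)
```

For a model with 39 parameters, as in the abstract, the same scheme costs N*(d+2) model evaluations, which is why such studies often use emulators or carefully budgeted ensembles.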

  2. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
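    For the simplest case, a linear model with jointly distributed inputs, the effect of input correlations on output uncertainty is fully analytic: with Y = a^T X and input covariance Sigma, Var(Y) = a^T Sigma a. A minimal sketch under that assumption (toy coefficients, not the paper's HIV model) makes the correlation contribution explicit:

```python
import numpy as np

# Analytic uncertainty for a linear model Y = a . X with correlated inputs.
# Var(Y) = a^T Sigma a splits into an "independent" part (diagonal of Sigma)
# plus a correlation part; dropping the off-diagonal terms shows what
# ignoring input correlations would cost. All values are hypothetical.

a = np.array([1.0, 2.0, -1.5])            # model coefficients
sigma = np.array([0.5, 0.3, 0.4])         # input standard deviations
rho = np.array([[1.0, 0.6, 0.0],          # input correlation matrix
                [0.6, 1.0, -0.2],
                [0.0, -0.2, 1.0]])
Sigma = np.outer(sigma, sigma) * rho      # covariance matrix

var_total = a @ Sigma @ a                 # exact variance of Y
var_indep = np.sum((a * sigma)**2)        # variance if inputs were independent
var_corr = var_total - var_indep          # contribution of the correlations
```

Here the correlation term is a sizeable fraction of the total variance, which is exactly the situation where the independence assumption would mislead a sensitivity ranking.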

  3. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and temporally dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities for pressure and flow at all locations of the carotid bifurcation. Results on network location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Sensitivity of the radiative forcing by stratospheric sulfur geoengineering to the amount and strategy of the SO2 injection studied with the LMDZ-S3A model

    Science.gov (United States)

    Kleinschmitt, Christoph; Boucher, Olivier; Platt, Ulrich

    2018-02-01

    The enhancement of the stratospheric sulfate aerosol layer has been proposed as a method of geoengineering to abate global warming. Previous modelling studies found that stratospheric aerosol geoengineering (SAG) could effectively compensate for the warming by greenhouse gases on the global scale, but also that the achievable cooling effect per sulfur mass unit, i.e. the forcing efficiency, decreases with increasing injection rate. In this study we use the atmospheric general circulation model LMDZ with the sectional aerosol module S3A to determine how the forcing efficiency depends on the injected amount of SO2, the injection height, and the spatio-temporal pattern of injection. We find that the forcing efficiency may decrease more drastically for larger SO2 injections than previously estimated. As a result, the net instantaneous radiative forcing does not exceed the limit of -2 W m-2 for continuous equatorial SO2 injections and it decreases (in absolute value) for injection rates larger than 20 Tg S yr-1. In contrast to other studies, the net radiative forcing in our experiments is fairly constant with injection height (in a range of 17 to 23 km) for a given amount of SO2 injected. Also, spreading the SO2 injections between 30° S and 30° N or injecting only seasonally from varying latitudes does not result in a significantly larger (i.e. more negative) radiative forcing. Other key characteristics of our simulations include substantial stratospheric heating, caused by the absorption of solar and infrared radiation by the aerosol, and changes in stratospheric dynamics, with a collapse of the quasi-biennial oscillation at larger injection rates, which has impacts on the resulting spatial aerosol distribution, size, and optical properties. It has to be noted, however, that the complexity and uncertainty of stratospheric processes cause considerable disagreement among different modelling studies of stratospheric aerosol geoengineering. This may be addressed through detailed

  5. Possibilities of surface-sensitive X-ray methods for studying the molecular mechanisms of interaction of nanoparticles with model membranes

    Energy Technology Data Exchange (ETDEWEB)

    Novikova, N. N., E-mail: nn-novikova07@yandex.ru; Kovalchuk, M. V.; Yakunin, S. N. [National Research Centre “Kurchatov Institute,” (Russian Federation); Konovalov, O. V. [European Synchrotron Radiation Facility (France); Stepina, N. D. [Russian Academy of Sciences, Shubnikov Institute of Crystallography, Federal Scientific Research Centre “Crystallography and Photonics,” (Russian Federation); Rogachev, A. V. [National Research Centre “Kurchatov Institute,” (Russian Federation); Yurieva, E. A. [Moscow Research Institute of Pediatrics and Pediatric Surgery (Russian Federation); Marchenko, I. V.; Bukreeva, T. V. [National Research Centre “Kurchatov Institute,” (Russian Federation); Ivanova, O. S.; Baranchikov, A. E.; Ivanov, V. K. [Russian Academy of Sciences, Kurnakov Institute of General and Inorganic Chemistry (Russian Federation)

    2016-09-15

    The processes of structural rearrangement in a model membrane, i.e., an arachidic acid monolayer formed on a colloidal solution of cerium dioxide or magnetite, are studied in situ in real time by the methods of X-ray standing waves and 2D diffraction. It is shown that the character of the interaction of nanoparticles with the monolayer is determined by their nature and sizes and depends on the conditions of nanoparticle synthesis. In particular, the structure formation in the monolayer–particle system is greatly affected by the stabilizer (citric acid), which is introduced into the colloidal solution during synthesis.

  6. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
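    The variable-group idea, combining binary assay calls into a single hazard classifier, can be sketched with a plain logistic regression. The data below are synthetic stand-ins (majority-vote labels with 10% noise), not the ICCVAM 72/24-substance sets, and the implementation is a generic sketch rather than the paper's LR models.

```python
import numpy as np

# Minimal logistic-regression sketch: combine three binary assay calls
# (standing in for DPRA, h-CLAT and read-across) into one hazard classifier.
# Data are synthetic: the "true" hazard is a majority vote with 10% noise.

rng = np.random.default_rng(1)
n = 200
X = rng.integers(0, 2, size=(n, 3)).astype(float)   # assay calls, 0/1
y = (X.sum(axis=1) >= 2).astype(float)              # majority-vote label
flip = rng.random(n) < 0.10                         # 10% label noise
y[flip] = 1 - y[flip]

Xb = np.hstack([np.ones((n, 1)), X])                # add intercept column
w = np.zeros(4)
for _ in range(2000):                               # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

acc = np.mean((1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5) == (y > 0.5))
```

Because majority-of-three is linearly separable, the learned boundary recovers the voting rule and accuracy is limited mainly by the injected label noise, mirroring how the paper's combined models outperform any single assay.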

  7. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses with respect to variations in material properties, conceptual models, and ventilation models, and the implications of this sensitivity for previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  8. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses with respect to variations in material properties, conceptual models, and ventilation models, and the implications of this sensitivity for previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document

  9. Sensitivity of modeled ozone concentrations to uncertainties in biogenic emissions

    International Nuclear Information System (INIS)

    Roselle, S.J.

    1992-06-01

    The study examines the sensitivity of regional ozone (O3) modeling to uncertainties in biogenic emissions estimates. The United States Environmental Protection Agency's (EPA) Regional Oxidant Model (ROM) was used to simulate the photochemistry of the northeastern United States for the period July 2-17, 1988. An operational model evaluation showed that ROM had a tendency to underpredict O3 when observed concentrations were above 70-80 ppb and to overpredict O3 when observed values were below this level. On average, the model underpredicted daily maximum O3 by 14 ppb. Spatial patterns of O3, however, were reproduced favorably by the model. Several simulations were performed to analyze the effects of uncertainties in biogenic emissions on predicted O3 and to study the effectiveness of two strategies of controlling anthropogenic emissions for reducing high O3 concentrations. Biogenic hydrocarbon emissions were adjusted by a factor of 3 to account for the existing range of uncertainty in these emissions. The impact of biogenic emission uncertainties on O3 predictions depended upon the availability of NOx. In some extremely NOx-limited areas, increasing the amount of biogenic emissions decreased O3 concentrations. Two control strategies were compared in the simulations: (1) reduced anthropogenic hydrocarbon emissions, and (2) reduced anthropogenic hydrocarbon and NOx emissions. The simulations showed that hydrocarbon emission controls were more beneficial to the New York City area, but that combined NOx and hydrocarbon controls were more beneficial to other areas of the Northeast. Hydrocarbon controls were more effective as biogenic hydrocarbon emissions were reduced, whereas combined NOx and hydrocarbon controls were more effective as biogenic hydrocarbon emissions were increased

  10. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10{sup {minus}2} to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), non-dispersive infrared (NDIR) and FTIR methods, which are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well-defined and validated mechanisms for the CO/H{sub 2}/oxidant systems.

  11. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-a-day forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions, and try to understand the important features that may contribute to the success of the forecast.

  12. Sensitivity of a complex urban air quality model to input data

    International Nuclear Information System (INIS)

    Seigneur, C.; Tesche, T.W.; Roth, P.M.; Reid, L.E.

    1981-01-01

    In recent years, urban-scale photochemical simulation models have been developed that are of practical value for predicting air quality and analyzing the impacts of alternative emission control strategies. Although the performance of some urban-scale models appears to be acceptable, the demanding data requirements of such models have prompted concern about the costs of data acquisition, which might be high enough to preclude use of photochemical models for many urban areas. To explore this issue, sensitivity studies with the Systems Applications, Inc. (SAI) Airshed Model, a grid-based time-dependent photochemical dispersion model, have been carried out for the Los Angeles basin. Reductions in the amount and quality of meteorological, air quality and emission data, as well as modifications of the model grid structure, have been analyzed. This paper presents and interprets the results of 22 sensitivity studies. A sensitivity-uncertainty index is defined to rank input data needs for an urban photochemical model. The index takes into account the sensitivity of model predictions to the amount of input data, the costs of data acquisition, and the uncertainties in the air quality model input variables. The results of these sensitivity studies are considered in light of the limitations of specific attributes of the Los Angeles basin and of the modeling conditions (e.g., choice of wind model, length of simulation time). The extent to which the results may be applied to other urban areas also is discussed

  13. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja; Navarro, Marí a; Merks, Roeland; Blom, Joke

    2015-01-01

    ) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided

  14. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG is created with the moisture separator assembly and tube bundle assembly herein. The seismic analysis is performed with the RCS piping and Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the parameter sensitivities of the seismic analysis results are studied, such as the effect of the other SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus of future research and design of new NPP steam generators. (authors)

  15. Modeling sensitivity study of the possible impact of snow and glaciers developing over Tibetan Plateau on Holocene African-Asian summer monsoon climate

    Directory of Open Access Journals (Sweden)

    L. Jin

    2009-08-01

    Full Text Available The impacts of various scenarios of gradual snow and glacier development over the Tibetan Plateau on climate change in the Afro-Asian monsoon region and other regions during the Holocene (9 kyr BP–0 kyr BP) are studied using the Earth system model of intermediate complexity CLIMBER-2. The simulations show that the imposed snow and glaciers over the Tibetan Plateau in the mid-Holocene induce summer temperature decreases over most of Eurasia, but in southern Asia the temperature response is the opposite. With the imposed snow and glaciers, summer precipitation decreases strongly in North Africa and South Asia as well as northeastern China, while it increases in Southeast Asia and the Mediterranean. For the whole period of the Holocene (9 kyr BP–0 kyr BP), the response of vegetation cover to the imposed snow and glacier cover over the Tibetan Plateau is not synchronous in South Asia and in North Africa, showing an earlier and more rapid decrease in vegetation cover in North Africa from 9 kyr BP to 6 kyr BP, while it has only a minor influence on that in South Asia until 5 kyr BP. The precipitation decreases rapidly in North Africa and South Asia, while it decreases slowly or remains unchanged during 6 kyr BP to 0 kyr BP with imposed snow and glacier cover over the Tibetan Plateau. Different scenarios of snow and glacier development over the Tibetan Plateau result in differences in the variation of temperature, precipitation and vegetation cover in North Africa, South Asia and Southeast Asia. The model results suggest that the development of snow and ice cover over the Tibetan Plateau represents an additional important climate feedback, which amplifies orbital forcing and produces a significant synergy with the positive vegetation feedback.

  16. Conformational sensitivity of conjugated poly(ethylene oxide)-poly(amidoamine) molecules to cations adducted upon electrospray ionization – A mass spectrometry, ion mobility and molecular modeling study

    Energy Technology Data Exchange (ETDEWEB)

    Tintaru, Aura [Aix-Marseille Université – CNRS, UMR 7273, Institut de Chimie Radicalaire, Marseille (France); Chendo, Christophe [Aix-Marseille Université – CNRS, FR 1739, Fédération des Sciences Chimiques de Marseille, Spectropole, Marseille (France); Wang, Qi [Aix-Marseille Université – CNRS, UMR 6114, Centre Interdisciplinaire de Nanosciences de Marseille, Marseille (France); Viel, Stéphane [Aix-Marseille Université – CNRS, UMR 7273, Institut de Chimie Radicalaire, Marseille (France); Quéléver, Gilles; Peng, Ling [Aix-Marseille Université – CNRS, UMR 6114, Centre Interdisciplinaire de Nanosciences de Marseille, Marseille (France); Posocco, Paola [University of Trieste, Molecular Simulation Engineering (MOSE) Laboratory, Department of Engineering and Architecture (DEA), Trieste (Italy); National Interuniversity Consortium for Material Science and Technology (INSTM), Research Unit MOSE-DEA, University of Trieste, Trieste (Italy); Pricl, Sabrina, E-mail: sabrina.pricl@di3.units.it [University of Trieste, Molecular Simulation Engineering (MOSE) Laboratory, Department of Engineering and Architecture (DEA), Trieste (Italy); National Interuniversity Consortium for Material Science and Technology (INSTM), Research Unit MOSE-DEA, University of Trieste, Trieste (Italy); Charles, Laurence, E-mail: laurence.charles@univ-amu.fr [Aix-Marseille Université – CNRS, UMR 7273, Institut de Chimie Radicalaire, Marseille (France)

    2014-01-15

    Graphical abstract: -- Highlights: •ESI-MS/MS, IMS and molecular modeling were combined to study PEO-PAMAM conformation. •Protonated and lithiated molecules were studied, with charge states from 2 to 4. •Protonation mostly occurred on PAMAM, with PEO units enclosing the protonated group. •Lithium adduction on PEO units lead to more expanded conformations. •Charge location strongly influenced PEO-PAMAM dissociation behavior. -- Abstract: Tandem mass spectrometry and ion mobility spectrometry experiments were performed on multiply charged molecules formed upon conjugation of a poly(amidoamine) (PAMAM) dendrimer with a poly(ethylene oxide) (PEO) linear polymer to evidence any conformational modification as a function of their charge state (2+ to 4+) and of the adducted cation (H{sup +}vs Li{sup +}). Experimental findings were rationalized by molecular dynamics simulations. The G0 PAMAM head-group could accommodate up to three protons, with protonated terminal amine group enclosed in a pseudo 18-crown-6 ring formed by the PEO segment. This particular conformation enabled a hydrogen bond network which allowed long-range proton transfer to occur during collisionally activated dissociation. In contrast, lithium adduction was found to mainly occur onto oxygen atoms of the polyether, each Li{sup +} cation being coordinated by a 12-crown-4 pseudo structure. As a result, for the studied polymeric segment (M{sub n} = 1500 g mol{sup −1}), PEO-PAMAM hybrid molecules exhibited a more expanded shape when adducted to lithium as compared to proton.

  17. Nordic reference study on uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Hirschberg, S.; Jacobsson, P.; Pulkkinen, U.; Porn, K.

    1989-01-01

    This paper provides a review of the first phase of the Nordic reference study on uncertainty and sensitivity analysis. The main objective of this study is to use experiences from previous Nordic Benchmark Exercises and reference studies concerning critical modeling issues such as common cause failures and human interactions, and to demonstrate the impact of the associated uncertainties on the uncertainty of the investigated accident sequence. This has been done independently by three working groups which used different approaches to modeling and to uncertainty analysis. The estimated uncertainty interval for the analyzed accident sequence is large. The discrepancies between the groups are also substantial, but can be explained. The sensitivity analyses that have been carried out concern, e.g., the use of different CCF quantification models, alternative handling of CCF data, time windows for operator actions, time dependences in phased mission operation, the impact of state-of-knowledge dependences, and the ranking of dominating uncertainty contributors. Specific findings with respect to these issues are summarized in the paper

  18. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  19. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
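    The "sloppy spectrum" can be reproduced on a classic toy case, fitting a sum of exponentials: the eigenvalues of the Gauss-Newton Hessian J^T J span many decades, with a few stiff and many sloppy directions. This is an illustrative model with invented rates, not one of the paper's collection.

```python
import numpy as np

# Sloppiness illustration: sensitivity spectrum of y(t) = sum_i exp(-p_i t)
# with closely spaced decay rates. The eigenvalues of J^T J (the Fisher
# information for Gaussian noise) spread over many orders of magnitude.

t = np.linspace(0.1, 5.0, 50)
p = np.arange(1.0, 7.0)                 # six decay rates (hypothetical)

# Jacobian w.r.t. log-parameters: d/d(log p_i) exp(-p_i t) = -p_i t exp(-p_i t)
J = np.stack([-p_i * t * np.exp(-p_i * t) for p_i in p], axis=1)

H = J.T @ J                             # Gauss-Newton Hessian / FIM
eigvals = np.sort(np.linalg.eigvalsh(H))[::-1]
decades = np.log10(eigvals[0] / eigvals[-1])   # spread of the spectrum
```

The stiff directions (large eigenvalues) are well constrained by data, while the sloppy ones are nearly flat, which is why collective fits can pin down predictions even when individual parameters remain poorly determined.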

  20. Sensitivity of wildlife habitat models to uncertainties in GIS data

    Science.gov (United States)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  1. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both concrete composition and clay mineralogical assemblages, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of concrete phases, (2) the type of clayey materials and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both

  2. A bend thickness sensitivity study of Candu feeder piping

    International Nuclear Information System (INIS)

    Li, M.; Aggarwal, M.L.; Meysner, A.; Micelotta, C.

    2005-01-01

    In CANDU reactors, feeder bends close to the connection at the fuel channel may be subjected to the highest Flow Accelerated Corrosion (FAC) and stresses. Feeder pipe stress analysis is crucial in the life extension of aging CANDU plants. Typical feeder pipes are interconnected by upper link plates and spacers. It is well known that the stresses at the bends are sensitive to the local bend thicknesses. It is also known from the authors' study (Li et al., 2005) that the feeder inter-linkage effect is significant and cannot be ignored. The field measurement of feeder bend thickness is difficult and may be subject to uncertainty in accuracy. Hence, it is desirable to know how the stress on a subject feeder could be affected by the bend thickness variation of the neighboring feeders. This effect cannot be evaluated by the traditional 'single' feeder model approach. In this paper, the 'row' and 'combined' models developed in the previous study (Li et al., 2005), which include the feeder interactions, are used to investigate the sensitivity to bend thickness. A series of random thicknesses bounded by the maximum and minimum measured values was applied to the feeders in the model. The results show that an individual feeder is not sensitive to the bend thickness variation of the remaining feeders in the model, but depends primarily on its own bend thickness. The highest stress at a feeder always occurs when the feeder has the smallest possible bend thickness. A minimum acceptable bend thickness for individual feeders can be computed by an iterative computing process. The dependence on field thickness measurements and the amount of required analysis work can thereby be greatly reduced. (authors)

  3. Sex and smoking sensitive model of radon induced lung cancer

    International Nuclear Information System (INIS)

    Zhukovsky, M.; Yarmoshenko, I.

    2006-01-01

    Radon and radon progeny inhalation exposure is recognized to cause lung cancer. The only strong evidence of radon exposure health effects came from epidemiological studies among underground miners. No single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were therefore developed exclusively by extrapolation of the miners' data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects can be obtained by systematic analysis of pooled data from single residential radon studies. Two such analyses have recently been published. Available new and previous data from epidemiological studies of workers and the general population exposed to radon and other sources of ionizing radiation allow filling gaps in knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, as well as on data on external radiation exposure in Hiroshima and Nagasaki and at the Mayak nuclear facility. For non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source is in very good agreement with the LSS study and the Mayak plutonium workers data. Analysis of corrected data of the pooled studies showed little influence of sex on the ERR value. The most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  4. Sex and smoking sensitive model of radon induced lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhukovsky, M.; Yarmoshenko, I. [Institute of Industrial Ecology of Ural Branch of Russian Academy of Sciences, Yekaterinburg (Russian Federation)

    2006-07-01

    Radon and radon progeny inhalation exposure is recognized to cause lung cancer. The only strong evidence of radon exposure health effects came from epidemiological studies among underground miners. No single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were therefore developed exclusively by extrapolation of the miners' data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects can be obtained by systematic analysis of pooled data from single residential radon studies. Two such analyses have recently been published. Available new and previous data from epidemiological studies of workers and the general population exposed to radon and other sources of ionizing radiation allow filling gaps in knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, as well as on data on external radiation exposure in Hiroshima and Nagasaki and at the Mayak nuclear facility. For non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source is in very good agreement with the LSS study and the Mayak plutonium workers data. Analysis of corrected data of the pooled studies showed little influence of sex on the ERR value. The most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  5. Microvessel rupture induced by high-intensity therapeutic ultrasound-a study of parameter sensitivity in a simple in vivo model.

    Science.gov (United States)

    Kim, Yeonho; Nabili, Marjan; Acharya, Priyanka; Lopez, Asis; Myers, Matthew R

    2017-01-01

    Safety analyses of transcranial therapeutic ultrasound procedures require knowledge of the dependence of the rupture probability and rupture time upon sonication parameters. As previous vessel-rupture studies have concentrated on a specific set of exposure conditions, there is a need for more comprehensive parametric studies. Probability of rupture and rupture times were measured by exposing the large blood vessel of a live earthworm to high-intensity focused ultrasound pulse trains of various characteristics. Pressures generated by the ultrasound transducers were estimated through numerical solutions to the KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation. Three ultrasound frequencies (1.1, 2.5, and 3.3 MHz) were considered, as were three pulse repetition frequencies (1, 3, and 10 Hz), and two duty factors (0.0001, 0.001). The pressures produced ranged from 4 to 18 MPa. Exposures of up to 10 min in duration were employed. Trials were repeated an average of 11 times. No trends as a function of pulse repetition rate were identifiable, for either probability of rupture or rupture time. Rupture time was found to be a strong function of duty factor at the lower pressures; at 1.1 MHz the rupture time was an order of magnitude lower for the 0.001 duty factor than the 0.0001. At moderate pressures, the difference between the duty factors was less, and there was essentially no difference between duty factors at the highest pressure. Probability of rupture was not found to be a strong function of duty factor. Rupture thresholds were about 4 MPa for the 1.1 MHz frequency, 7 MPa at 3.3 MHz, and 11 MPa for the 2.5 MHz, though the pressure value at 2.5 MHz frequency will likely be reduced when steep-angle corrections are accounted for in the KZK model used to estimate pressures. Mechanical index provided a better collapse of the data (less separation of the curves pertaining to the different frequencies) than peak negative pressure, for both probability of rupture and

  6. Review of high-sensitivity Radon studies

    Science.gov (United States)

    Wojcik, M.; Zuzel, G.; Simgen, H.

    2017-10-01

    A challenge in many present cutting-edge particle physics experiments is the stringent requirements in terms of radioactive background. In particular, the prevention of radon, a radioactive noble gas, which enters from ambient air and is also released by emanation from the omnipresent progenitor radium. In this paper we review various high-sensitivity radon detection techniques and approaches, applied in experiments looking for rare nuclear processes occurring at low energies. They allow one to identify, quantitatively measure and finally suppress the numerous sources of radon in the detectors' components and plants.

  7. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
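
The Monte Carlo step of such an uncertainty analysis can be sketched in a few lines. The code below is a generic illustration, not any of the report's three models: it propagates a Latin hypercube sample through the textbook semi-infinite-slab radon flux expression J = C_Ra * rho * E * sqrt(lambda * D_e), with parameter ranges chosen for illustration only (not the Area 5 site values).

```python
import numpy as np

rng = np.random.default_rng(42)
LAMBDA_RN222 = 2.1e-6  # Rn-222 decay constant, 1/s

def latin_hypercube(n, d):
    """One stratified sample per 1/n interval, independently shuffled per dim."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

n = 1000
u = latin_hypercube(n, 3)
# Illustrative uniform parameter ranges (hypothetical, not site data):
c_ra = 20.0 + u[:, 0] * (60.0 - 20.0)    # Ra-226 activity, Bq/kg
eman = 0.1 + u[:, 1] * (0.4 - 0.1)       # emanation coefficient, dimensionless
d_eff = 1e-7 + u[:, 2] * (5e-6 - 1e-7)   # effective diffusion coeff., m^2/s
rho = 1600.0                              # bulk density, kg/m^3

# Semi-infinite homogeneous slab: J = C_Ra * rho * E * sqrt(lambda * D_e)
flux = c_ra * rho * eman * np.sqrt(LAMBDA_RN222 * d_eff)  # Bq m^-2 s^-1
print("mean flux:", flux.mean(), "95th pct:", np.quantile(flux, 0.95))
```

The stratification guarantees each parameter's range is covered evenly with far fewer runs than simple random sampling, which is why Latin hypercube designs are standard for this kind of performance-assessment model.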

  8. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.

  9. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.

  10. A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area

    Science.gov (United States)

    Žabkar, Rahela; Koračin, Darko; Rakovec, Jože

    2013-10-01

    A high ozone (O3) concentration episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors, as well as the impacts of model inputs on the model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on the model accuracy and O3 predictions were investigated. In spite of the large ensemble set size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results with respect to extremes compared to the reference run. Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near-surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are suggested also by model insensitivity to variations in the nitrogen oxide (NOx) and volatile organic compound (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too strong surface winds, and also possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.

  11. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  12. The sensitivity of the Arctic sea ice to orbitally induced insolation changes: a study of the mid-Holocene Paleoclimate Modelling Intercomparison Project 2 and 3 simulations

    Directory of Open Access Journals (Sweden)

    M. Berger

    2013-04-01

    In the present work the Arctic sea ice in the mid-Holocene and the pre-industrial climates is analysed and compared on the basis of climate-model results from the Paleoclimate Modelling Intercomparison Project phase 2 (PMIP2) and phase 3 (PMIP3). The PMIP3 models generally simulate smaller sea-ice extents and thinner ice than the PMIP2 models for both the pre-industrial and the mid-Holocene climate. Further, the PMIP2 and PMIP3 models all simulate a smaller and thinner Arctic summer sea-ice cover in the mid-Holocene than in the pre-industrial control climate. The PMIP3 models also simulate thinner winter sea ice than the PMIP2 models. The winter sea-ice extent response, i.e. the difference between the mid-Holocene and the pre-industrial climate, varies among both PMIP2 and PMIP3 models. Approximately one half of the models simulate a decrease in winter sea-ice extent and one half simulate an increase. The model-mean summer sea-ice extent is 11% (21%) smaller in the mid-Holocene than in the pre-industrial climate simulations in PMIP2 (PMIP3). In accordance with the simple model of Thorndike (1992), the sea-ice thickness response to the insolation change from the pre-industrial to the mid-Holocene is stronger in models with thicker ice in the pre-industrial climate simulation. Further, the analyses show that climate models whose Arctic sea-ice responses to increasing atmospheric CO2 concentrations are similar may simulate rather different sea-ice responses to the change in solar forcing between the mid-Holocene and the pre-industrial. For two specific models, which are analysed in detail, this difference is found to be associated with differences in the simulated cloud fractions in the summer Arctic; in the model with a larger cloud fraction the effect of the insolation change is muted. A sub-set of the mid-Holocene simulations in the PMIP ensemble exhibit open water off the north-eastern coast of Greenland in summer, which can provide a fetch

  13. Sensitivity analysis in oxidation ditch modelling: the effect of variations in stoichiometric, kinetic and operating parameters on the performance indices

    NARCIS (Netherlands)

    Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.

    2001-01-01

    This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis

  14. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to study the comparative sensitivity of the Mantel-Haenszel and Rasch Model approaches for detecting differential item functioning, observed from the sample size. These two differential item functioning (DIF) methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
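
The Mantel-Haenszel side of such a comparison can be sketched directly. The simulation below is an illustrative stand-in, not the study's design: it generates binary responses from a logistic item model, plants DIF on one item, and estimates the MH common odds ratio stratified by rest score.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_group = 1000  # large n for a stable illustration (the study used 200/400)

def simulate(dif_shift):
    """5-item binary responses; item 0 is harder by dif_shift logits for the
    focal group (group=1). Hypothetical data, not the study's simulation."""
    theta = rng.normal(size=2 * n_per_group)           # abilities
    group = np.repeat([0, 1], n_per_group)             # reference / focal
    b = np.array([0.0, -0.5, 0.0, 0.5, 1.0])           # item difficulties
    logits = theta[:, None] - b[None, :]
    logits[:, 0] -= dif_shift * group                  # DIF on item 0 only
    resp = (rng.random(logits.shape) < 1 / (1 + np.exp(-logits))).astype(int)
    return resp, group

def mh_odds_ratio(resp, group, item):
    """Mantel-Haenszel common odds ratio for one item, stratified by rest score."""
    rest = resp.sum(axis=1) - resp[:, item]
    num = den = 0.0
    for s in np.unique(rest):
        m = rest == s
        a = np.sum((group[m] == 0) & (resp[m, item] == 1))  # reference correct
        b_ = np.sum((group[m] == 0) & (resp[m, item] == 0))
        c = np.sum((group[m] == 1) & (resp[m, item] == 1))  # focal correct
        d = np.sum((group[m] == 1) & (resp[m, item] == 0))
        n_s = a + b_ + c + d
        num += a * d / n_s
        den += b_ * c / n_s
    return num / den

resp, group = simulate(dif_shift=1.0)
print("DIF item OR:", mh_odds_ratio(resp, group, 0))
print("clean item OR:", mh_odds_ratio(resp, group, 2))
```

An odds ratio well above 1 flags the planted DIF item, while the DIF-free item stays near 1; rerunning with smaller samples shows why sensitivity comparisons of this kind depend on sample size.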

  15. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    Science.gov (United States)

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection thresholds for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index of human visual performance and various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. Moreover, no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable in 3D space, for example for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics of stereo blindness. In this paper, a CRT screen was rotated clockwise and anti-clockwise, respectively, to form the inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. According to the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space have been modelled based on the experimental data. The results show that the proposed model can well predict human chromatic contrast sensitivity characteristics in 3D space.

  16. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    Science.gov (United States)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
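
One common definition of the relative sensitivity function used in analyses like the one above is S = (p/y)(dy/dp). It can be illustrated with a toy first-order deposition equation standing in for the full 3-D sigma-coordinate model; the parameter values below are hypothetical.

```python
import numpy as np

# Toy depth-averaged deposition model: dC/dt = -(w_s / h) * C, so
# C(t) = C0 * exp(-w_s * t / h).  A stand-in, not the paper's 3-D model.
C0, h, t = 10.0, 5.0, 3600.0      # initial concentration, depth (m), time (s)

def conc(w_s):
    return C0 * np.exp(-w_s * t / h)

def relative_sensitivity(p, f, eps=1e-6):
    """S = (p / f(p)) * df/dp, by central finite differences."""
    dfdp = (f(p * (1 + eps)) - f(p * (1 - eps))) / (2 * p * eps)
    return p * dfdp / f(p)

w_s = 1e-3                        # settling velocity, m/s
S = relative_sensitivity(w_s, conc)
print("relative sensitivity:", S, "analytic:", -w_s * t / h)
```

Because S is dimensionless, sensitivities to parameters with very different units (settling velocity, resuspension rate, diffusivities) can be ranked on a common scale, which is the basis of the pattern reported in the abstract.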

  17. Stimulus Sensitivity of a Spiking Neural Network Model

    Science.gov (United States)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet almost critical for a range of biologically relevant parameters.

  18. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...

  19. Sensitivity of the urban airshed model to mixing height profiles

    Energy Technology Data Exchange (ETDEWEB)

    Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W. [New York State Dept. of Environmental Conservation, Albany, NY (United States)

    1994-12-31

    The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer, or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile, for a high ozone episode of July 1988 for the New York airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations than the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to differences in the shape of the mixing height profiles and their rate of growth during the morning hours, when peak emissions are injected into the atmosphere. Examination of the impact of emission reductions associated with these two mixing height profiles indicates that NOx-focused controls provide a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focused controls provide a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.

  20. Comparison of adsorption equilibrium and kinetic models for a case study of pharmaceutical active ingredient adsorption from fermentation broths: parameter determination, simulation, sensitivity analysis and optimization

    Directory of Open Access Journals (Sweden)

    B. Likozar

    2012-09-01

    Mathematical models for a batch process were developed to predict concentration distributions for an active ingredient (vancomycin) adsorbing onto a representative hydrophobic-molecule adsorbent, using differently diluted crude fermentation broth with cells as the feedstock. The kinetic parameters were estimated by maximizing the coefficient of determination with a heuristic algorithm. The parameters were estimated for each fermentation broth concentration using four concentration distributions at initial vancomycin concentrations of 4.96, 1.17, 2.78, and 5.54 g l⁻¹. Subsequently, the models and their parameters were validated for fermentation broth concentrations of 0, 20, 50, and 100% (v/v) by calculating the coefficient of determination for each concentration distribution at the corresponding initial concentration. The applicability of the validated models for process optimization was investigated by using the models as process simulators to optimize two process efficiencies.
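
For fixed data, maximizing the coefficient of determination over the kinetic parameters is equivalent to minimizing the sum of squared residuals. The sketch below illustrates this with a pseudo-first-order kinetic model as a hypothetical stand-in for the paper's adsorption models; all values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-first-order batch adsorption kinetics (illustrative stand-in for the
# paper's vancomycin adsorption models):  q(t) = q_e * (1 - exp(-k * t))
def q_model(t, q_e, k):
    return q_e * (1.0 - np.exp(-k * t))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 120.0, 25)               # time, min
q_true = q_model(t, 4.96, 0.05)               # synthetic "true" uptake curve
q_obs = q_true + rng.normal(scale=0.05, size=t.size)  # add measurement noise

# Least squares fit; for fixed data this maximizes R^2
popt, _ = curve_fit(q_model, t, q_obs, p0=[4.0, 0.1])
resid = q_obs - q_model(t, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((q_obs - q_obs.mean())**2)
print("q_e, k =", popt, "R^2 =", r2)
```

A fitted model of this form can then be embedded in a batch-process simulator and reused across dilutions, which mirrors the estimate-then-validate workflow described in the abstract.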

  1. Large regional groundwater modeling - a sensitivity study of some selected conceptual descriptions and simplifications; Storregional grundvattenmodellering - en kaenslighetsstudie av naagra utvalda konceptuella beskrivningar och foerenklingar

    Energy Technology Data Exchange (ETDEWEB)

    Ericsson, Lars O. (Lars O. Ericsson Consulting AB (Sweden)); Holmen, Johan (Golder Associates (Sweden))

    2010-12-15

    The primary aim of this report is: - To present a supplementary, in-depth evaluation of certain conceptual simplifications, descriptions and model uncertainties in conjunction with regional groundwater simulation, which in the first instance refer to model depth, topography, groundwater table level and boundary conditions. The implementation was based on geoscientifically available data compilations from the Smaaland region, but different conceptual assumptions have been analysed.

  2. Cavitation effects in LMFBR containment loading - a sensitivity study

    International Nuclear Information System (INIS)

    Jones, A.V.

    1981-01-01

    The motivation for and design of a sensitivity study into the effects of bulk cavitation of the coolant upon predicted roof loadings and vessel wall loadings and deformations are presented. The study is designed to cover simple and sophisticated models of cavitation in various geometries and with two types of energy source, to represent both an explosion charge and the lower-pressure expansion behavior expected in a real core disruptive accident. Effects of change of scale (from reactor to model), of coolant tensile strength, of reactor aspect ratio and design (presence or absence of an internal tank) and of reactor structural resistance (rigid or deforming outer tank) are all examined in order to provide a quantitative answer to the question 'how, and to what extent, does dynamic cavitation affect the containment loading process?' (orig.)

  3. A model to estimate insulin sensitivity in dairy cows

    Directory of Open Access Journals (Sweden)

    Holtenius Kjell

    2007-10-01

    Full Text Available Abstract Impairment of the insulin regulation of energy metabolism is considered to be an etiologic key component of metabolic disturbances. Methods for studies of insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method, originally developed for humans, to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it generates good and linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds. All cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occurred during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed in order to obtain different body condition at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
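
A minimal sketch of the index calculation, following the RQUICKI formula from the human QUICKI literature (the reciprocal of the sum of the decadic logarithms of fasting glucose, insulin and FFA). The unit conventions and the sample values below are assumptions for illustration, not data from the study:

```python
import math

def rquicki(glucose_mg_dl, insulin_uU_ml, ffa_mmol_l):
    """Revised Quantitative Insulin Sensitivity Check Index.
    Unit conventions follow the human QUICKI literature (glucose in
    mg/dl, insulin in uU/ml, FFA in mmol/l); lower values suggest
    reduced insulin sensitivity."""
    return 1.0 / (math.log10(glucose_mg_dl)
                  + math.log10(insulin_uU_ml)
                  + math.log10(ffa_mmol_l))

# Hypothetical plasma values: the same cow with low vs. elevated FFA
print(rquicki(60.0, 10.0, 0.8), rquicki(60.0, 10.0, 1.6))
```

Because FFA enters the denominator, elevated lipolysis (as in over-conditioned cows in early lactation) pushes the index down, matching the direction of the effect reported above.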

  4. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    Full Text Available This paper describes a set of sensitivity experiments to several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography directly from the US Navy data and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling the model performance even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (asymmetric part of the geopotential at 500 mb, where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between amplitude of the stationary waves and smoothness of the surface fields.
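
The Gibbs problem and the effect of spectral filtering can be illustrated in one dimension. This is a generic low-pass sketch using NumPy FFTs, not the actual filtering technique used to produce the "Navy" and "Scripps" orographies:

```python
import numpy as np

# Idealized 1-D orography: flat "ocean", abrupt coastal mountain block
n = 256
h = np.zeros(n)
h[100:156] = 1000.0  # metres

# Hard spectral truncation: keeps Gibbs ripples over the ocean
H = np.fft.rfft(h)
H_trunc = H.copy()
H_trunc[40:] = 0.0
h_trunc = np.fft.irfft(H_trunc, n)

# Hedged stand-in for a Gibbs-removal filter: smooth exponential damping
k = np.arange(H.size)
H_filt = H * np.exp(-((k / 40.0) ** 2))
h_filt = np.fft.irfft(H_filt, n)

# Spurious "orography" over the ocean, far from the mountain
ripple_trunc = np.max(np.abs(h_trunc[:80]))
ripple_filt = np.max(np.abs(h_filt[:80]))
print(ripple_trunc, ripple_filt)
```

The hard truncation leaves oscillations of several metres or more over the flat region, while the smoothly damped spectrum does not, which is the qualitative behaviour the abstract attributes to the new filtering technique.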

  5. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Science.gov (United States)

    Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda

    2016-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102

  6. Computer models versus reality: how well do in silico models currently predict the sensitization potential of a substance.

    Science.gov (United States)

    Teubner, Wera; Mehling, Anette; Schuster, Paul Xaver; Guth, Katharina; Worth, Andrew; Burton, Julien; van Ravenzwaay, Bennard; Landsiedel, Robert

    2013-12-01

    National legislations for the assessment of the skin sensitization potential of chemicals are increasingly based on the globally harmonized system (GHS). In this study, experimental data on 55 non-sensitizing and 45 sensitizing chemicals were evaluated according to GHS criteria and used to test the performance of computer (in silico) models for the prediction of skin sensitization. Statistical models (Vega, Case Ultra, TOPKAT), mechanistic models (Toxtree, OECD (Q)SAR toolbox, DEREK) and a hybrid model (TIMES-SS) were evaluated. Between three and nine of the substances evaluated were found in the individual training sets of various models. Mechanism-based models performed better than statistical models and gave better predictivities depending on the stringency of the domain definition. The best performance was achieved by TIMES-SS, with a perfect prediction, although only 16% of the substances were within its reliability domain. Some models offer modules for potency; however, predictions did not correlate well with the GHS sensitization subcategory derived from the experimental data. In conclusion, although mechanistic models can be used to a certain degree under well-defined conditions, at present the in silico models are not sufficiently accurate for broad application to predict skin sensitization potentials. Copyright © 2013 Elsevier Inc. All rights reserved.
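
Evaluating a prediction model against GHS calls on a set like this (45 sensitizers, 55 non-sensitizers) comes down to a confusion matrix. The counts below are hypothetical, chosen only to show the arithmetic, not results from any of the models tested:

```python
def classification_performance(tp, fn, tn, fp):
    """Standard two-class performance measures used when judging
    in silico sensitization predictions against experimental calls."""
    sensitivity = tp / (tp + fn)   # fraction of sensitizers caught
    specificity = tn / (tn + fp)   # fraction of non-sensitizers cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    balanced = 0.5 * (sensitivity + specificity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "balanced_accuracy": balanced}

# Hypothetical outcome for a 45-sensitizer / 55-non-sensitizer set
perf = classification_performance(tp=36, fn=9, tn=44, fp=11)
print(perf)
```

Restricting the evaluation to a model's reliability domain, as with TIMES-SS above, changes the four counts (and usually shrinks their total) before these ratios are computed.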

  7. Modeling the impacts of green infrastructure land use changes on air quality and meteorology case study and sensitivity analysis in Kansas City

    Science.gov (United States)

    Changes in vegetation cover associated with urban planning efforts may affect regional meteorology and air quality. Here we use a comprehensive coupled meteorology-air quality model (WRF-CMAQ) to simulate the influence of planned land use changes from green infrastructure impleme...

  8. The internal model: A study of the relative contribution of proprioception and visual information to failure detection in dynamic systems. [sensitivity of operators versus monitors to failures

    Science.gov (United States)

    Kessel, C.; Wickens, C. D.

    1978-01-01

    The development of the internal model as it pertains to the detection of step changes in the order of control dynamics is investigated for two modes of participation: whether the subjects are actively controlling those dynamics or are monitoring an autopilot controlling them. A transfer of training design was used to evaluate the relative contribution of proprioception and visual information to the overall accuracy of the internal model. Sixteen subjects either tracked or monitored the system dynamics as a 2-dimensional pursuit display under single task conditions and concurrently with a sub-critical tracking task at two difficulty levels. Detection performance was faster and more accurate in the manual as opposed to the autopilot mode. The concurrent tracking task produced a decrement in detection performance for all conditions though this was more marked for the manual mode. The development of an internal model in the manual mode transferred positively to the automatic mode producing enhanced detection performance. There was no transfer from the internal model developed in the automatic mode to the manual mode.

  9. Sensitivity study of land biosphere CO2 exchange through an atmospheric tracer transport model using satellite-derived vegetation index data

    International Nuclear Information System (INIS)

    Knorr, W.; Heimann, M.

    1994-01-01

    We develop a simple, globally uniform model of CO2 exchange between the atmosphere and the terrestrial biosphere by coupling the model with a three-dimensional atmospheric tracer transport model using observed winds, and checking results against observed concentrations of CO2 at various monitoring sites. CO2 fluxes are derived from observed greenness using satellite-derived Global Vegetation Index data, combined with observations of temperature, radiation, and precipitation. We explore a range of CO2 flux formulations together with some modifications of the modelled atmospheric transport. We find that while some formulations can be excluded, it cannot be decided whether or not to make CO2 uptake and release dependent on water stress. It appears that the seasonality of net CO2 fluxes in the tropics, which would be expected to be driven by water availability, is small and is therefore not visible in the seasonal cycle of atmospheric CO2. The latter is dominated largely by northern temperate and boreal vegetation, where seasonality is mostly temperature determined. We find some evidence that there is still considerable CO2 release from soils during northern-hemisphere winter. An exponential air temperature dependence of soil release with a Q10 of 1.5 is found to be most appropriate, with no cutoff at low freezing temperatures. This result is independent of the year from which observed winds were taken. This is remarkable insofar as year-to-year changes in modelled CO2 concentrations caused by changes in the wind data clearly outweigh those caused by year-to-year variability in the climate and vegetation index data. (orig.)
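
The soil-release temperature response reported above is the standard Q10 form: the rate multiplies by Q10 for every 10 degC of warming. A one-line sketch, where the reference rate and reference temperature are arbitrary placeholders rather than values from the study:

```python
def q10_respiration(temp_c, r_ref=1.0, q10=1.5, t_ref=10.0):
    """Exponential air-temperature dependence of soil CO2 release,
    with no cutoff at low (freezing) temperatures, as in the variant
    the study found most appropriate. r_ref is the release rate at
    the (arbitrary) reference temperature t_ref."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

# A 10 degC warming multiplies the release rate by exactly Q10 = 1.5,
# and sub-zero temperatures still yield a positive flux (no cutoff).
print(q10_respiration(20.0) / q10_respiration(10.0), q10_respiration(-5.0))
```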

  10. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  11. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  12. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with the COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis shows that the microwave power level, the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity within a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
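
The ±20% one-at-a-time perturbation scheme can be sketched generically. The surrogate "drying model" and the parameter names below are invented for illustration and bear no relation to the actual multiphase COMSOL model:

```python
def drying_time(params):
    """Toy surrogate standing in for the drying model: a notional
    time to reach target moisture. The functional form is purely
    illustrative, not the paper's multiphase porous media model."""
    return (1.0 / params["evaporation_rate"]
            + 0.5 / params["gas_diffusivity"]
            + 0.05 / params["surface_transfer"])

base = {"evaporation_rate": 0.2, "gas_diffusivity": 1.0,
        "surface_transfer": 5.0}

# One-at-a-time +/-20% perturbation, normalized by the baseline output
t0 = drying_time(base)
sensitivity = {}
for name in base:
    outputs = []
    for factor in (0.8, 1.2):
        p = dict(base)
        p[name] *= factor
        outputs.append(drying_time(p))
    sensitivity[name] = (max(outputs) - min(outputs)) / t0

print(sensitivity)
```

Ranking the normalized spreads separates influential parameters from negligible ones, which is the form of conclusion the abstract draws (power level and evaporation constant significant; transfer coefficients and permeabilities not).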

  13. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
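
For intuition, first-order Sobol' indices can be estimated column-wise over a spatial output with a plain pick-freeze estimator, as below. This brute-force sketch (toy two-input model, many model runs) is precisely what the paper's wavelet decomposition plus Gaussian-process meta-modelling is designed to avoid; the model and grid are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_model(x1, x2, grid):
    """Toy code with a map output: x1 dominates on the left of the
    domain, x2 on the right (a stand-in for a hydrogeological code)."""
    return np.outer(x1, 1.0 - grid) + np.outer(x2, grid) ** 3

n, grid = 20000, np.linspace(0.0, 1.0, 11)
a1, a2 = rng.normal(size=n), rng.normal(size=n)
b1, b2 = rng.normal(size=n), rng.normal(size=n)

def first_order_index(ya, yab):
    """Pick-freeze estimator of the first-order Sobol' index,
    applied column-wise so a map of indices comes out."""
    mu = ya.mean(axis=0)
    return ((ya * yab).mean(axis=0) - mu ** 2) / ya.var(axis=0)

ya = spatial_model(a1, a2, grid)
s1 = first_order_index(ya, spatial_model(a1, b2, grid))  # x1 shared
s2 = first_order_index(ya, spatial_model(b1, a2, grid))  # x2 shared
print(s1[0], s2[-1])
```

The result is one Sobol' index per input per grid point: here x1 explains essentially all the variance at the left edge and x2 at the right edge, i.e., a spatial map of indices as described in the abstract.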

  14. Sensitivity case study in dynamic reliability

    International Nuclear Information System (INIS)

    Kopustinskas, V.

    2001-01-01

    Recent trends in risk assessment of complex industrial plants show increased interest in dynamic models arising from the coupling of probabilistic and deterministic approaches. Conventionally used static system models, represented by fault/event trees, cannot reflect the dynamic behaviour of the system and the complex interactions between process variables, components and human actions. The nature of the most complex industrial systems, such as nuclear power plants (NPP), suggests that Markov-type stochastic differential equations (SDEs) consisting of jump and drift components can be successfully used to represent and analyze the phenomena. This paper discusses possible applications of SDEs in reliability problems. In particular, the Accident Localization System (ALS) of the Ignalina NPP was analyzed as a benchmark for further investigations in this area. (author)
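
A jump-drift SDE of the kind mentioned can be simulated with a basic Euler-Maruyama scheme, with the Poisson jump component approximated by Bernoulli increments. The equation and all parameter values below are a generic stand-in, not the ALS model:

```python
import math
import random

def jump_drift_path(x0=1.0, mu=-0.05, sigma=0.1, lam=0.5,
                    jump=-0.3, t_end=10.0, dt=0.01, seed=7):
    """Euler-Maruyama simulation of a scalar jump-drift SDE:
        dX = mu*X dt + sigma*X dW + jump*X dN,
    where N is a Poisson process with rate lam, approximated per step
    by a Bernoulli trial of probability lam*dt."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(int(round(t_end / dt))):
        dw = rng.gauss(0.0, math.sqrt(dt))        # Brownian increment
        dn = 1 if rng.random() < lam * dt else 0  # jump indicator
        x += mu * x * dt + sigma * x * dw + jump * x * dn
        path.append(x)
    return path

path = jump_drift_path()
print(len(path), path[-1])
```

Monte Carlo ensembles of such paths are how reliability quantities (e.g., the probability of crossing a failure threshold) are extracted from this class of model.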

  15. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation is given.
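
One of the simplest numerical techniques in this family, contrasting raw and rank-transformed correlation coefficients on a right-skewed output, can be sketched as follows. The surrogate function is invented for illustration; real repository models are far more involved:

```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate(x):
    """Toy response with a distinctly right-skewed, tailed output:
    the result grows exponentially with x0, weakly with x1, and is
    independent of x2. Purely illustrative, not a repository model."""
    return np.exp(3.0 * x[:, 0]) * (1.0 + 0.1 * x[:, 1])

def ranks(v):
    r = np.empty(v.size)
    r[np.argsort(v)] = np.arange(v.size)
    return r

x = rng.uniform(size=(5000, 3))   # Monte Carlo parameter sample
y = surrogate(x)

ccs, rccs = [], []
for j in range(3):
    ccs.append(np.corrcoef(x[:, j], y)[0, 1])                 # Pearson
    rccs.append(np.corrcoef(ranks(x[:, j]), ranks(y))[0, 1])  # Spearman
print([round(c, 2) for c in ccs], [round(c, 2) for c in rccs])
```

On skewed, nonlinear-but-monotonic outputs like this, the rank-based measure credits the dominant input more faithfully than the raw correlation, one reason rank transformations feature in comparisons of sensitivity analysis methods for repository models.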

  16. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation is given.

  17. INFLUENCE OF MODIFIED BIOFLAVONOIDS UPON EFFECTOR LYMPHOCYTES IN MURINE MODEL OF CONTACT SENSITIVITY

    Directory of Open Access Journals (Sweden)

    D. Z. Albegova

    2015-01-01

    Full Text Available Contact sensitivity reaction (CSR) to 2,4-dinitrofluorobenzene (DNFB) in mice is a model of the in vivo immune response and an experimental analogue of contact dermatitis in humans. The CSR sensitization phase begins after primary contact with the antigen, lasting 10-15 days in humans and 5-7 days in mice. Repeated skin exposure to the sensitizing substance leads to its recognition and triggers immune inflammatory mechanisms involving DNFB-specific effector T lymphocytes. The CSR reaches its maximum 18-48 hours after re-exposure to a hapten. There is only scarce information in the literature about the effects of flavonoids on CSR, covering both stimulatory and inhibitory effects; flavonoids have exerted predominantly suppressive effects on CSR development. In our laboratory, a model of contact sensitivity was reproduced in CBA mice by means of cutaneous sensitization with 2,4-dinitrofluorobenzene. The aim of the study was to identify the mechanisms of the immunomodulatory action of quercetin dihydrate and modified bioflavonoids, using the method of adoptive transfer of contact sensitivity by splenocytes and T lymphocytes. As shown in our studies, a 30-min pre-treatment of splenocytes and T lymphocytes from sensitized mice with modified bioflavonoids before the cell transfer caused complete prevention of the contact sensitivity reaction in syngeneic recipient mice. Meanwhile, this effect was not associated with cell death induction due to apoptosis or cytotoxicity. Quercetin dihydrate only partially suppressed the activity of adoptively formed T lymphocytes, the contact sensitivity effectors. The modified bioflavonoid suppressed adoptive transfer of contact sensitivity more strongly than quercetin dihydrate, without inducing apoptosis of effector cells.
Thus, the modified bioflavonoid is a promising compound for further studies in a model of contact sensitivity, due to its higher ability to suppress transfer of CSR with

  18. A GoldSim Model and a Sensitivity Study for Safety Assessment of a Repository for Disposal of Spent Nuclear Fuel

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Hwang, Yong Soo

    2008-11-01

    An assessment program for the evaluation of a high-level waste (HLW) repository has been developed by utilizing GoldSim, with which nuclide transport in the near- and far-field of a repository, as well as transport through the biosphere, under various natural and manmade disruptive events affecting a nuclide release can be modeled and evaluated. To demonstrate its usability, three illustrative cases have been investigated and illustrated for a hypothetical Korean HLW repository: the influence of the groundwater flow pattern through canisters associated with groundwater flowing through fractures, and the possible disruptive events caused by an accidental human intrusion or an earthquake.

  19. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

    Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian updating scheme adjusts an ensemble of sensitivity measures. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of the LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and the updating when a new piece of evidence arrives, by a Bayesian scheme, based on the Bradley–Terry model for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).
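
The Bradley-Terry step, which turns pairwise "more relevant than" outcomes into a ranking of inputs, can be sketched with the standard minorize-maximize fit. The win matrix here is a hypothetical tally, not results from the MELCOR application:

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Minorize-maximize fit of Bradley-Terry strengths from a
    pairwise win-count matrix, wins[i, j] = times item i beat item j.
    In the sensitivity context, 'beats' would mean: ranked as more
    influential in one ensemble update."""
    m = wins.shape[0]
    p = np.ones(m)
    for _ in range(n_iter):
        for i in range(m):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(m) if j != i)
            p[i] = num / den
        p /= p.sum()  # strengths are defined up to a scale factor
    return p

# Hypothetical tallies for three inputs: input 0 outranks the others
wins = np.array([[0.0, 8.0, 9.0],
                 [2.0, 0.0, 6.0],
                 [1.0, 4.0, 0.0]])
strength = bradley_terry(wins)
print(strength)
```

The fitted strengths give a full ordering of the inputs by relevance, with the normalization making them directly comparable across updates.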

  20. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, in which the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute-force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute-force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
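
The black-box chaining idea can be illustrated with two components that expose only vector-Jacobian (adjoint) products: one backward sweep through the chain yields the gradient of the final response with respect to the first component's inputs, instead of one forward run per input. The components below are trivial stand-ins, not the method of the paper:

```python
import numpy as np

# Two black-box components: y = A x, then a scalar response r = y.y
A = np.array([[1.0, 2.0],
              [0.5, -1.0],
              [3.0, 0.0]])

def comp1(x):
    return A @ x                 # first component (forward)

def comp2(y):
    return float(y @ y)          # second component (forward)

def comp2_adjoint(y, rbar=1.0):
    return 2.0 * y * rbar        # vector-Jacobian product of comp2

def comp1_adjoint(ybar):
    return A.T @ ybar            # vector-Jacobian product of comp1

# One backward sweep: sensitivities of the last component's response
# to the first component's inputs
x = np.array([0.3, -0.7])
grad = comp1_adjoint(comp2_adjoint(comp1(x)))

# Finite-difference check of the chained adjoint
eps = 1e-6
fd = np.array([(comp2(comp1(x + eps * e)) - comp2(comp1(x - eps * e)))
               / (2 * eps) for e in np.eye(2)])
print(grad, fd)
```

Each component only had to provide its own adjoint action; neither needed the other's internal equations, which is the black-box property the abstract emphasizes.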

  1. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Directory of Open Access Journals (Sweden)

    Hayden J. R. Woodley

    2016-01-01

    Full Text Available The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called Benevolents. Individuals low on equity sensitivity are more outcome oriented, and are described as Entitleds. Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  2. Modeling the Residual Stresses in Reactive Resins-Based Materials: a Case Study of Photo-Sensitive Composites for Dental Applications

    International Nuclear Information System (INIS)

    Grassia, Luigi; D'Amore, Alberto

    2010-01-01

    Residual stresses in reactive-resin-based composites are associated with the net volumetric contraction (shrinkage) arising during the cross-linking reactions. Depending on the restoration geometry (the ratio of the free surface area to the volume of the cavity), the frozen-in stresses can be as high as the strength of the dental composite. This is the main reason why the effectiveness, and hence the durability, of restorations made with composites remains considerably lower than that of restorations made with metal-alloy-based materials. In this paper we first explore the possibility of circumventing the mathematical complexity arising in the determination of residual stresses in three-dimensionally constrained reactive systems. Then, the results of our modeling approach are applied to a series of commercially available composites, showing that almost all samples develop residual stresses such that the restoration undergoes failure as soon as it is realized.

  3. Cough reflex sensitivity is increased in the guinea pig model of allergic rhinitis.

    Science.gov (United States)

    Brozmanova, M; Plevkova, J; Tatar, M; Kollarik, M

    2008-12-01

    Increased cough reflex sensitivity is found in patients with allergic rhinitis and may contribute to cough caused by rhinitis. We have reported that cough to citric acid is enhanced in the guinea pig model of allergic rhinitis. Here we address the hypothesis that the cough reflex sensitivity is increased in this model. The data from our previous studies were analyzed for the cough reflex sensitivity. The allergic inflammation in the nose was induced by repeated intranasal instillations of ovalbumin in ovalbumin-sensitized guinea pigs. Cough was induced by inhalation of doubling concentrations of citric acid (0.05-1.6 M). Cough threshold was defined as the lowest concentration of citric acid causing two coughs (C2, expressed as geometric mean [95% confidence interval]). We found that the cough threshold was reduced in animals with allergic rhinitis: C2 was 0.5 M [0.36-0.71 M] and 0.15 M [0.1-0.23 M] before and after repeated intranasal instillations of ovalbumin, respectively. C2 was also reduced in animals with allergic rhinitis treated orally with vehicle (0.57 M [0.28-1.1 M] vs. 0.09 M [0.04-0.2 M]). We conclude that the cough reflex sensitivity is increased in the guinea pig model of allergic rhinitis. Our results suggest that the guinea pig is a suitable model for mechanistic studies of increased cough reflex sensitivity in rhinitis.

  4. Climate and climate change sensitivity to model configuration in the Canadian RCM over North America

    Energy Technology Data Exchange (ETDEWEB)

    De Elia, Ramon [Ouranos Consortium on Regional Climate and Adaptation to Climate Change, Montreal (Canada); Centre ESCER, Univ. du Quebec a Montreal (Canada); Cote, Helene [Ouranos Consortium on Regional Climate and Adaptation to Climate Change, Montreal (Canada)

    2010-06-15

    Climate simulations performed with Regional Climate Models (RCMs) have been found to show sensitivity to parameter settings. The origin, consequences and interpretations of this sensitivity are varied, but it is generally accepted that sensitivity studies are very important for a better understanding and a more cautious manipulation of RCM results. In this work we present sensitivity experiments performed on the simulated climate produced by the Canadian Regional Climate Model (CRCM). In addition to climate sensitivity to parameter variation, we analyse the impact of the sensitivity on the climate change signal simulated by the CRCM. These studies are performed on 30-year-long simulated present and future seasonal climates, and we have analysed the effect of seven kinds of configuration modifications: CRCM initial conditions, lateral boundary condition (LBC) update interval, driving Global Climate Model (GCM), driving GCM member, large-scale spectral nudging, CRCM version, and domain size. Results show that large changes in both the driving model and the CRCM physics seem to be the main sources of sensitivity for the simulated climate and the climate change. Their effects dominate those of configuration issues, such as the use or not of large-scale nudging, domain size, or LBC update interval. Results suggest that in most cases, differences between simulated climates for different CRCM configurations are not transferred to the estimated climate change signal: in general, these tend to cancel each other out. (orig.)

  5. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    Science.gov (United States)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of the gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car-engine cooling axial fan.
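
    The incomplete-sensitivity idea (differentiate only the direct geometric dependence of the cost and drop the expensive state-derivative term) can be illustrated on a scalar toy problem. This sketch uses invented functions, not the paper's flow-solver setting; it only shows that the truncated gradient can remain a usable descent direction when the state dependence is weak.

```python
import math

# Toy setting (invented): cost J(x) = j(x, W(x)) with a weak state dependence.
def W(x):                      # stand-in "flow state"
    return 0.1 * math.sin(x)

def j(x, w):                   # stand-in cost, dominated by direct x-dependence
    return (x - 2.0) ** 2 + w * x

def full_grad(x, eps=1e-7):
    """Exact (finite-difference) total derivative dJ/dx, state re-solved."""
    return (j(x + eps, W(x + eps)) - j(x - eps, W(x - eps))) / (2 * eps)

def incomplete_grad(x, eps=1e-7):
    """Incomplete sensitivity: state W frozen, only the direct term kept."""
    w = W(x)
    return (j(x + eps, w) - j(x - eps, w)) / (2 * eps)

x = 0.5
g_full, g_inc = full_grad(x), incomplete_grad(x)
print(g_full, g_inc)   # similar values, same sign -> still a descent direction
```

    In a real design loop the frozen-state evaluation avoids solving an adjoint or sensitivity equation for the flow at every gradient call, which is the computational payoff the abstract refers to.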

  6. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer Hougaard

    2012-01-01

    Classification models are becoming increasingly popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting...... the underlying neural encoding of an experiment defining multiple brain states. In this relation there is a great desire for the researcher to generate brain maps that highlight brain locations of importance to the classifier's decisions. Based on sensitivity analysis, we develop further procedures for model...... direction the individual locations influence the classification. We illustrate the visualization procedure on real data from a simple functional magnetic resonance imaging experiment....

  7. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2018-01-01

    to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...

  8. Sensitivity of growth characteristics of tidal sand ridges and long bed waves to formulations of bed shear stress, sand transport and tidal forcing : A numerical model study

    NARCIS (Netherlands)

    Yuan, Bing; de Swart, Huib E.; Panadès, Carles

    2016-01-01

    Tidal sand ridges and long bed waves are large-scale bedforms that are observed on continental shelves. They differ in their wavelength and in their orientation with respect to the principal direction of tidal currents. Previous studies indicate that tidal sand ridges appear in areas where tidal

  9. A sensitivity study of parameters in the Nazaroff-Cass IAQ model with respect to indoor concentrations of O3, NO, NO2

    International Nuclear Information System (INIS)

    Drakou, G.; Zerefos, C.; Ziomas, I.

    2000-01-01

    The indoor O3, NO and NO2 concentrations and their corresponding indoor/outdoor (I/O) concentration ratios are predicted in this paper for some representative buildings, using the Nazaroff-Cass indoor air quality model. This paper presents and systemises the relationship between indoor air pollutant concentrations and a building's design, use and operation. The building parameters determined to be the main factors affecting indoor air pollutant concentrations are: the physical dimensions of the building and the materials of construction, the building's air exchange rate with outdoors, and the indoor air pollutant sources. Changes in ultraviolet photon fluxes, temperature and relative humidity indoors have little effect on indoor O3, NO and NO2 concentrations for air exchange rates above 0.5 ach. Special attention must be given when a building has a very low air exchange rate, under which conditions the effect of a small change in any of the factors determining indoor air quality will be much more noticeable than in a building with a high air exchange rate. (Author)
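
    The reported importance of the air exchange rate can be illustrated with a steady-state single-zone mass balance, a drastically simplified stand-in for the full Nazaroff-Cass model. All parameter values below are illustrative assumptions.

```python
# Steady-state single-zone mass balance (simplified stand-in for the full
# model): lam = air exchange rate (1/h), P = penetration factor,
# k = surface removal rate (1/h), S_V = indoor source strength (ppb/h).
def indoor_conc(c_out, lam, P=1.0, k=2.0, S_V=0.0):
    return (lam * P * c_out + S_V) / (lam + k)

c_out = 40.0                       # illustrative outdoor O3, ppb
for lam in (0.25, 0.5, 2.0, 8.0):
    ratio = indoor_conc(c_out, lam) / c_out
    print(f"ach={lam:4.2f}  I/O ratio={ratio:.2f}")
```

    With no indoor sources the I/O ratio reduces to P*lam/(lam + k), so at low air exchange rates the ratio, and its sensitivity to every other parameter, changes steeply, consistent with the caution above about very tight buildings.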

  10. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    Science.gov (United States)

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insight into how these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary
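
    A local sensitivity analysis of the kind described can be sketched on a toy thermodynamic expression model with an activator strength a, a repressor efficiency r and a cooperativity exponent c. The model and parameter values are illustrative assumptions, not those of the paper; the sketch only shows how normalized (elasticity-style) sensitivities are computed by central differences.

```python
# Toy thermodynamic expression model (illustrative, not the paper's):
# fractional occupancy with activator strength a, repressor efficiency r,
# and activator cooperativity exponent c.
def expression(params, A=1.0, R=1.0):
    a, r, c = params["a"], params["r"], params["c"]
    act = (a * A) ** c
    return act / (1.0 + act + r * R)

def elasticity(params, name, eps=1e-6):
    """Normalized local sensitivity d(ln E)/d(ln p) by central differences."""
    p = dict(params)
    base, e0 = params[name], expression(params)
    p[name] = base * (1 + eps)
    hi = expression(p)
    p[name] = base * (1 - eps)
    lo = expression(p)
    return (hi - lo) / (2 * eps * e0)

params = {"a": 2.0, "r": 5.0, "c": 1.0}
sens = {k: elasticity(params, k) for k in params}
print(sens)   # roughly a: +0.75, r: -0.625, c: +0.52
```

    Comparing such elasticities across parameters, and across operating points A and R, is the simplest way to see which estimated values (here the repressor efficiency versus the cooperativity) a fit can actually constrain.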

  11. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Wan, Hui [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Zhang, Kai [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA; Ghan, Steven J. [Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland Washington USA

    2016-07-10

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on three factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and cloud, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise and reasonably reproduces the response of precipitation and cloud forcing to the parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.

  12. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving-window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two characteristics generally common among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
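
    The perturbation procedure described above applies to any trained black-box model. In the sketch below a closed-form function stands in for a trained MWA-ANN, and all series are synthetic; the quadratic pumping term is an assumption added so that the sensitivity to groundwater use grows in magnitude at high usage, mirroring the drought-period behavior noted above.

```python
# Sketch of the forcing-response sensitivity procedure for a black-box model.
# A closed-form function stands in for the trained MWA-ANN; all series are
# synthetic, and the quadratic pumping term is an invented assumption.
def model(rain, pump):
    return 10.0 + 0.8 * rain - 0.3 * pump - 0.05 * pump * pump

def sensitivity(rain_series, pump_series, which, delta=1.0):
    """Change in response per unit perturbation of one forcing, per time step."""
    out = []
    for r, p in zip(rain_series, pump_series):
        if which == "rain":
            d = model(r + delta, p) - model(r, p)
        else:
            d = model(r, p + delta) - model(r, p)
        out.append(d / delta)
    return out

rain = [2.0, 3.5, 1.0, 0.2]        # synthetic MWA rainfall series
pump = [1.0, 1.0, 4.0, 6.0]        # synthetic MWA groundwater-use series

s_rain = sensitivity(rain, pump, "rain")
s_pump = sensitivity(rain, pump, "pump")
print(s_rain)   # uniform rainfall sensitivity over time
print(s_pump)   # pumping sensitivity grows in magnitude at heavy use
```

    With a real trained network, `model` would simply wrap the prediction call; the rest of the procedure is unchanged.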

  13. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is now widely emphasized. However, most publications focus on randomized clinical trials (RCTs). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of data being missing at random (MAR), a sensitivity analysis for testing robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
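
    A widely used simple form of such an MNAR sensitivity analysis is delta adjustment: shift the imputed values by increasing offsets and track how the outcome estimate moves. The sketch below uses single mean imputation on synthetic scores purely for illustration; it is not the authors' posterior-predictive-checking procedure, and a real analysis would apply the shifts within a proper multiple-imputation scheme.

```python
# Generic delta-adjustment sketch (not the authors' posterior-predictive
# procedure): shift imputed values by increasing offsets and track the
# estimate. Scores are synthetic; single mean imputation keeps it short.
observed = [62, 55, 48, 51, 70, 44]   # synthetic OQ-45-like scores, completers
n_missing = 4                         # dropouts

mean_obs = sum(observed) / len(observed)
estimates = {}
for delta in (0, 5, 10, 15):          # 0 = MAR; larger = sicker dropouts
    imputed = [mean_obs + delta] * n_missing
    est = (sum(observed) + sum(imputed)) / (len(observed) + n_missing)
    estimates[delta] = est
    print(f"delta={delta:2d}  estimated mean={est:.1f}")
```

    The delta at which the substantive conclusion would change (for example, crossing a clinical-significance cutoff) is then judged for plausibility, which is the "worst reasonable case" logic mentioned in the abstract.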

  14. Wind climate estimation using WRF model output: method and model sensitivities over the sea

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Vincent, Claire Louise; Peña, Alfredo

    2015-01-01

    setup parameters. The results of the year-long sensitivity simulations show that the long-term mean wind speed simulated by the WRF model offshore in the region studied is quite insensitive to the global reanalysis, the number of vertical levels, and the horizontal resolution of the sea surface...... temperature used as lower boundary condition. Also, the strength and form (grid vs spectral) of the nudging are largely irrelevant for the mean wind speed at 100 m. Large sensitivity is found to the choice of boundary-layer parametrization, and to the length of the period that is discarded as spin-up to produce...... a wind climatology. It is found that the spin-up period for the boundary-layer winds is likely larger than 12 h over land and could affect the wind climatology for points offshore for quite a distance downstream from the coast....

  15. Anti-deuteron sensitivity studies at LHCb

    CERN Multimedia

    Baker, Sophie Katherine

    2018-01-01

    Measurements of anti-deuterons in collider experiments can help to reduce systematic uncertainties in indirect searches for dark matter. Two predominant unknowns in these searches are the production of secondary anti-deuterons in the cosmos from spallation processes, and anti-deuteron production from annihilating dark matter. LHCb is a forward spectrometer on the LHC ring, designed to measure b-hadron decays from high-energy proton-proton collisions. With the detector's excellent particle identification capabilities, deuteron and anti-deuteron measurements at LHCb could help to parametrise these two cosmological processes. Recent studies of (anti-)deuteron identification at LHCb and the prospects for measuring prompt (anti-)deuterons from pp collisions will be presented, as well as a working analysis of b-baryons decaying to deuterons.

  16. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used for prediction of the system behavior at altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty: Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
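
    Sloppiness is commonly diagnosed from the eigenvalue spectrum of J^T J, where J is the Jacobian of the model outputs with respect to the parameters: sloppy directions correspond to eigenvalues many orders of magnitude below the stiff ones. The toy saturating model below (an illustrative assumption, not a gene-circuit model) is evaluated only at inputs far above its half-saturation constant K, which makes K sloppy.

```python
import math

# Toy saturating model (illustrative): f = A*x/(K + x), observed only at
# x >> K, so the data barely constrain K (a "sloppy" direction).
def f(x, A, K):
    return A * x / (K + x)

xs = [50.0, 80.0, 120.0]
A, K, eps = 2.0, 1.0, 1e-6

# Jacobian of the predictions w.r.t. (A, K) by central differences
J = []
for x in xs:
    dA = (f(x, A + eps, K) - f(x, A - eps, K)) / (2 * eps)
    dK = (f(x, A, K + eps) - f(x, A, K - eps)) / (2 * eps)
    J.append((dA, dK))

# Eigenvalues of the 2x2 matrix J^T J, closed form for a symmetric matrix
a = sum(r[0] * r[0] for r in J)
b = sum(r[0] * r[1] for r in J)
d = sum(r[1] * r[1] for r in J)
tr, det = a + d, a * d - b * b
disc = math.sqrt(tr * tr / 4.0 - det)
lam_stiff, lam_sloppy = tr / 2.0 + disc, tr / 2.0 - disc
print(lam_stiff, lam_sloppy, lam_stiff / lam_sloppy)
```

    The eigenvalue ratio of several orders of magnitude is the signature of sloppiness: fits pin down A tightly while leaving K, and hence predictions at small x, poorly constrained.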

  17. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
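
    The variance-based alternative to OAT that the authors advocate can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices. The linear test model below is an assumption chosen so that the indices are known analytically (S1 = 0.2, S2 = 0.8).

```python
import random
random.seed(0)

def model(x1, x2):
    return x1 + 2.0 * x2   # analytic first-order indices: S1 = 0.2, S2 = 0.8

# Pick-freeze estimator: correlate outputs that share one input and have the
# other input independently resampled.
N = 100000
ya, yb1, yb2 = [], [], []
for _ in range(N):
    x1, x2, u1, u2 = (random.random() for _ in range(4))
    ya.append(model(x1, x2))
    yb1.append(model(x1, u2))   # keep x1, resample x2 -> estimates S1
    yb2.append(model(u1, x2))   # keep x2, resample x1 -> estimates S2

m = sum(ya) / N
var = sum((y - m) ** 2 for y in ya) / N

def first_order(yb):
    mb = sum(yb) / N
    return sum((p - m) * (q - mb) for p, q in zip(ya, yb)) / N / var

s1, s2 = first_order(yb1), first_order(yb2)
print(round(s1, 2), round(s2, 2))
```

    Unlike an OAT scan, these indices are well defined for nonlinear and interacting models too, which is precisely why the paper recommends them over one-factor-at-a-time designs.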

  19. Sensitivity analysis of hydraulic fracturing Using an extended finite element method for the PKN model

    NARCIS (Netherlands)

    Garikapati, Hasini; Verhoosel, Clemens V.; van Brummelen, Harald; Diez, Pedro; Papadrakakis, M.; Papadopoulos, V.; Stefanou, G.; Plevris, V.

    2016-01-01

    Hydraulic fracturing is a process that is surrounded by uncertainty, as available data on, e.g., rock formations are scant and available models are still rudimentary. In this contribution, sensitivity analysis is carried out as a first step in studying the uncertainties in the model. This is done to

  20. Modeling and sensitivity analysis of consensus algorithm based distributed hierarchical control for dc microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2015-01-01

    of dynamic study. The aim of this paper is to model the complete DC microgrid system in z-domain and perform sensitivity analysis for the complete system. A generalized modeling method is proposed and the system dynamics under different control parameters, communication topologies and communication speed...

  1. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
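
    The meta-model idea in miniature: run the expensive model a limited number of times, fit a cheap surrogate, compute a sensitivity index from the surrogate, and bootstrap the runs to attach a confidence interval to the index. The "expensive" model, the linear surrogate and all constants below are stand-in assumptions, far simpler than the nonparametric regression surrogates of the paper.

```python
import random
random.seed(1)

def expensive_model(x1, x2):           # stand-in for a costly simulator
    return 3.0 * x1 + x2 + random.gauss(0.0, 0.05)

# A limited budget of "expensive" runs
runs = [(random.random(), random.random()) for _ in range(60)]
ys = [expensive_model(a, b) for a, b in runs]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def index_x1(idx):
    """Surrogate-based index: share of output variance attributed to x1."""
    x1 = [runs[i][0] for i in idx]
    y = [ys[i] for i in idx]
    b1 = cov(x1, y) / var(x1)          # slope of the cheap linear surrogate
    return b1 * b1 * var(x1) / var(y)

full = index_x1(range(len(runs)))
boot = sorted(index_x1([random.randrange(len(runs)) for _ in runs])
              for _ in range(500))
lo, hi = boot[12], boot[487]           # ~95% percentile interval
print(f"index={full:.2f}  95% CI=({lo:.2f}, {hi:.2f})")
```

    Reporting the bootstrap interval alongside the point estimate addresses the drawback highlighted above: a sensitivity index computed from a surrogate is itself uncertain, and ignoring that uncertainty can misrank the inputs.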

  2. Depressive symptoms, insulin sensitivity and insulin secretion in the RISC cohort study

    DEFF Research Database (Denmark)

    Bot, M; Pouwer, F; De Jonge, P

    2013-01-01

    AIM: This study explored the association of depressive symptoms with indices of insulin sensitivity and insulin secretion in a cohort of non-diabetic men and women aged 30 to 64 years. METHODS: The study population was derived from the 3-year follow-up of the Relationship between Insulin Sensitivity and Cardiovascular Disease Risk (RISC) study. Presence of significant depressive symptoms was defined as a Center for Epidemiologic Studies Depression Scale (CES-D) score ≥ 16. Standard oral glucose tolerance tests were performed. Insulin sensitivity was assessed with the oral glucose insulin sensitivity (OGIS) index. Insulin secretion was estimated using three model-based parameters of insulin secretion (beta-cell glucose sensitivity, the potentiation factor ratio, and beta-cell rate sensitivity). RESULTS: A total of 162 out of 1027 participants (16%) had significant depressive symptoms. Having......

  3. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber's law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
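
    The simulate-then-ideal-observer pipeline can be sketched as follows. A Poisson spike count with a linear recruitment rate stands in for the population response (both are invented assumptions, far simpler than the paper's model), and the ideal observer performs two-alternative forced-choice detection by picking the interval with the larger count.

```python
import math
import random
random.seed(7)

def population_response(amplitude_ua):
    """Hypothetical evoked spike count; linear recruitment is an assumption."""
    rate = 0.5 * amplitude_ua
    L, k, p = math.exp(-rate), 0, 1.0   # Knuth's Poisson sampler
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def percent_correct(amplitude_ua, trials=2000):
    """Ideal observer, 2AFC detection: pick the interval with more spikes."""
    correct = 0
    for _ in range(trials):
        stim = population_response(amplitude_ua)
        catch = population_response(0.0)   # unstimulated interval
        if stim > catch or (stim == catch and random.random() < 0.5):
            correct += 1
    return correct / trials

pcs = [percent_correct(a) for a in (0.5, 2.0, 8.0, 32.0)]
print([round(p, 2) for p in pcs])   # a rising psychometric function
```

    Sweeping amplitude and reading off percent correct yields the simulated psychometric function that would then be compared against the animals' behavioral data.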

  4. Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions

    Directory of Open Access Journals (Sweden)

    Sebastian Bab

    2012-04-01

    The concept of dynamic coalitions (also virtual organizations) describes the temporary interconnection of autonomous agents who share information or resources in order to achieve a common goal. Through modern technologies these coalitions may form across company, organization and system borders. Therefore, questions of access control and security are of vital significance for the architectures supporting these coalitions. In this paper, we present our first steps towards a formal framework for modeling and verifying the design of privacy-sensitive dynamic coalition infrastructures and their processes. To this end, we extend existing dynamic-coalition modeling approaches with an access-control concept which manages access to information through policies. Furthermore, we consider the processes underlying these coalitions and present first steps towards formalizing them. We illustrate the usefulness of the Abstract State Machine (ASM) method for this task and demonstrate a formal treatment of privacy-sensitive dynamic coalitions by two example ASMs which model certain access-control situations. A logical consideration of these ASMs can lead to a better understanding and a verification of the ASMs according to the aspired specification.

  5. Sensitivity analysis on the model to the DO and BODc of the Almendares river

    International Nuclear Information System (INIS)

    Dominguez, J.; Borroto, J.; Hernandez, A.

    2004-01-01

    In the present work, a sensitivity analysis of the model was carried out to compare and evaluate the influence of the kinetic coefficients and other parameters on the DO and BODc. The effect on the DO of the BODc and DO with which the river arrives at the studied zone, of the BOD of the discharges, and of the flow rate was modelled. The sensitivity analysis is the basis for developing a calibration optimization procedure for the Streeter-Phelps model, in order to simplify the process and increase the precision of predictions. On the other hand, it will contribute to defining strategies to improve river water quality.
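The Streeter-Phelps oxygen-sag model that underlies this sensitivity analysis can be sketched as follows; the coefficient values are illustrative placeholders, not the Almendares calibration:

```python
import numpy as np

def do_deficit(t, L0, D0, kd, ka):
    # Streeter-Phelps deficit: D(t) = kd*L0/(ka-kd)*(e^-kd*t - e^-ka*t) + D0*e^-ka*t
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

# Illustrative inputs: initial BOD and DO deficit (mg/L), decay and
# reaeration rate coefficients (1/day)
params = dict(L0=10.0, D0=2.0, kd=0.35, ka=0.7)

def norm_sens(name, t=2.0, rel=0.01):
    # Normalized finite-difference sensitivity (dD/dp)*(p/D) at travel time t
    up, dn = dict(params), dict(params)
    up[name] *= 1 + rel
    dn[name] *= 1 - rel
    return (do_deficit(t, **up) - do_deficit(t, **dn)) / (2 * rel * do_deficit(t, **params))

sens = {p: norm_sens(p) for p in params}
```

Because the deficit is linear in L0 and D0, their normalized sensitivities sum exactly to one, while kd and ka capture the kinetic influence; ranking these functions over travel time is the basis of the calibration procedure described above.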

  6. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations with respect to variations in the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating the long-range transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's LPτ sequences, an adaptive Monte Carlo algorithm and the plain Monte Carlo algorithm, as well as the eFAST and Sobol' sensitivity approaches implemented in SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by the variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
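A minimal pure-NumPy sketch of the pick-freeze Monte Carlo estimators behind Sobol' indices, using the standard Ishigami benchmark rather than UNI-DEM (whose chemistry is far too large to reproduce here):

```python
import numpy as np

rng = np.random.default_rng(1)

def ishigami(x):
    # Standard benchmark with known Sobol' indices: S1≈0.314, S2≈0.442, S3=0
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

n, d = 100_000, 3
a = rng.uniform(-np.pi, np.pi, (n, d))
b = rng.uniform(-np.pi, np.pi, (n, d))
fa, fb = ishigami(a), ishigami(b)
var = np.concatenate([fa, fb]).var()

s1, st = [], []
for i in range(d):
    ab = a.copy()
    ab[:, i] = b[:, i]                 # "pick-freeze" matrix: column i from b
    fab = ishigami(ab)
    s1.append(np.mean(fb * (fab - fa)) / var)       # first-order (Saltelli)
    st.append(np.mean((fa - fab)**2) / (2 * var))   # total-order (Jansen)
```

The gap between the total and first-order index of the third input shows it acts only through interactions, which is the kind of structure the highlights above summarize for UNI-DEM.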

  7. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Full Text Available Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating the parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes of the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including the local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and the sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and that about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods can result in type I errors (i.e., insensitive parameters labeled as sensitive) or type II errors (i.e., sensitive parameters labeled as insensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
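The Morris elementary-effects screening compared above can be sketched compactly; the three-parameter toy function below is invented, with the third input deliberately near-inert so the screening has something to find:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(x):
    # Invented response: x0 and x1 influential, x2 nearly inert by construction
    return x[0]**2 + x[0] * x[1] + 0.01 * x[2]

def morris_mu_star(model, n_inputs, n_repeats=50, delta=0.1):
    # mu* = mean absolute elementary effect, one value per input; large
    # mu* flags a parameter as influential and worth keeping for tuning
    ee = np.zeros((n_repeats, n_inputs))
    for k in range(n_repeats):
        x = rng.uniform(0.0, 1.0, n_inputs)
        f0 = model(x)
        for i in range(n_inputs):
            xp = x.copy()
            xp[i] += delta
            ee[k, i] = (model(xp) - f0) / delta
    return np.abs(ee).mean(axis=0)

mu_star = morris_mu_star(toy_model, n_inputs=3)  # screening rank: x0 > x1 > x2
```

With 40 parameters and ~400 samples, as in the study, the same ranking logic applies; only the sampling design becomes more careful.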

  8. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  9. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    Science.gov (United States)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice Model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies, where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters, whether through observational studies or by improving existing parameterizations in the sea ice model.
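The surrogate-modelling step can be sketched in a few lines: fit a cheap emulator to a limited budget of "expensive" runs, then check it on held-out runs before trusting it for sensitivity sampling. Here the costly sea-ice model is replaced by an invented analytic stand-in and the emulator is a simple quadratic polynomial, not the generalized additive models of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(x):
    # Stand-in for a costly model run (invented analytic toy, 2 parameters)
    return np.sin(x[:, 0]) + 0.5 * x[:, 1]**2

def quad_features(x):
    # Quadratic polynomial basis: 1, x0, x1, x0^2, x1^2, x0*x1
    return np.column_stack([np.ones(len(x)), x, x**2, x[:, :1] * x[:, 1:]])

# "Train" the emulator on 400 runs, as in the study, then cross-validate
x_train = rng.uniform(-1.0, 1.0, (400, 2))
coef, *_ = np.linalg.lstsq(quad_features(x_train), expensive_model(x_train), rcond=None)

x_test = rng.uniform(-1.0, 1.0, (200, 2))
y_test, y_pred = expensive_model(x_test), quad_features(x_test) @ coef
r2 = 1.0 - ((y_test - y_pred)**2).sum() / ((y_test - y_test.mean())**2).sum()
```

Only once the held-out R² is acceptable would the emulator be sampled in place of the full model to compute the Sobol indices.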

  10. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    Science.gov (United States)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    The process-based model BIOME-BGC was run for a sensitivity analysis to assess the effect of ecophysiological parameters on the net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: canopy light extinction coefficient (k), canopy average specific leaf area (SLA), new stem C : new leaf C (SC:LC), maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), all-sided to projected leaf area ratio, and canopy water interception coefficient (Wint). These parameters therefore need particular precision and attention during estimation and observation in field studies.

  11. Sensitivity analysis in the WWTP modelling community – new opportunities and applications

    DEFF Research Database (Denmark)

    Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.

    2010-01-01

    A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity… design (BSM1 plant layout) using Standardized Regression Coefficients (SRC) and (ii) applying sensitivity analysis to help fine-tune a fuzzy controller for a BNPR plant using Morris Screening. The results obtained from each case study are then critically discussed in view of practical applications…

  12. An approach to measure parameter sensitivity in watershed hydrologic modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — Abstract Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...

  13. Particle transport model sensitivity on wave-induced processes

    Science.gov (United States)

    Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna

    2017-04-01

    Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state-dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. These particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. In the real ocean, however, the waves also act as a reservoir for momentum and energy, because varying amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity, and the resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The sensitivity analyses performed demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for drift-model performance, especially during extremes. Such effects cannot be neglected in search and rescue, oil-spill, transport of biological material, or larval drift modelling.
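For reference, the depth decay of the wave-induced (Stokes) drift discussed above follows a simple exponential in deep water; the wave amplitude and wavelength below are illustrative storm-wave values, not North Sea model output:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def stokes_drift(z, amplitude, wavelength):
    # Deep-water Stokes drift profile: U_s(z) = a^2 * k * omega * exp(2*k*z), z <= 0
    k = 2.0 * np.pi / wavelength
    omega = np.sqrt(G * k)  # deep-water dispersion relation
    return amplitude**2 * k * omega * np.exp(2.0 * k * z)

surface = stokes_drift(0.0, amplitude=1.5, wavelength=100.0)  # ~0.1 m/s
at_5m = stokes_drift(-5.0, amplitude=1.5, wavelength=100.0)   # roughly half of that
```

The strong vertical shear of this profile is one reason the choice of Stokes drift parameterization matters so much for particle-drift skill near the surface.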

  14. On sensitivity value of pair-matched observational studies

    OpenAIRE

    Zhao, Qingyuan

    2017-01-01

    An observational study may be biased for estimating causal effects by failing to control for unmeasured confounders. This paper proposes a new quantity called the "sensitivity value", which is defined as the minimum strength of unmeasured confounders needed to change the qualitative conclusions of a naive analysis assuming no unmeasured confounder. We establish the asymptotic normality of the sensitivity value in pair-matched observational studies. The theoretical results are then used to app...

  15. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  16. [Application of Fourier amplitude sensitivity test in Chinese healthy volunteer population pharmacokinetic model of tacrolimus].

    Science.gov (United States)

    Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei

    2010-07-01

    In this study, we used the Fourier amplitude sensitivity test (FAST) to evaluate the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers. In addition, we estimated the sensitivity index over the whole blood-sampling course, designed different sampling schedules, and evaluated the quality of the parameter estimates and the efficiency of prediction. We observed that, apart from CL1/F, the sensitivity indices of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model were relatively high and changed rapidly over time. As the variance of k(a) increased, its sensitivity index increased markedly, associated with a significant decrease in the sensitivity indices of the other parameters and an obvious shift in the peak time. According to NONMEM simulations and a comparison of the fitting results, the sampling time points designed according to FAST outperformed the other time points. This suggests that FAST can assess the sensitivities of model parameters effectively and can assist in the design of clinical sampling times and the construction of PopPK models.
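A self-contained sketch of the FAST idea: drive each input along a periodic search curve at its own frequency and attribute the output variance found at that frequency's harmonics to that input. The toy linear "model" below has known variance contributions in the ratio 16 : 4 : 1; it is not the tacrolimus PopPK model, and the frequencies are chosen so their harmonics up to order 4 do not overlap:

```python
import numpy as np

def fast_indices(model, omegas, n=1025, harmonics=4):
    # FAST: each input follows a triangular search curve at frequency
    # omega_i; the output variance at that frequency's harmonics is the
    # first-order contribution of that input
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi  # inputs in [0, 1]
    y = model(x)
    j = np.arange(1, n // 2)
    a = 2.0 * (y @ np.cos(np.outer(s, j))) / n
    b = 2.0 * (y @ np.sin(np.outer(s, j))) / n
    spectrum = (a**2 + b**2) / 2.0          # variance carried at frequency j
    total = spectrum.sum()
    return [spectrum[[p * w - 1 for p in range(1, harmonics + 1)]].sum() / total
            for w in omegas]

# Invented linear model with variance contributions 16 : 4 : 1
si = fast_indices(lambda x: 4.0 * x[0] + 2.0 * x[1] + x[2], omegas=[11, 21, 29])
```

Tracking how such indices evolve across the sampling course is what lets FAST point at informative blood-sampling times.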

  17. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
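One of the three measures used above, the sensitivity index, can be sketched as follows; the rate expression and parameter ranges are invented placeholders, since each of the nine reviewed models has its own functional form:

```python
def corrosion_rate(duration, resistivity, chloride):
    # Invented empirical rate expression for illustration only: rate rises
    # with chloride content and falls with resistivity and duration
    return 50.0 * chloride / (resistivity * duration**0.3)

def sensitivity_index(f, base, name, bounds):
    # SI = (y_max - y_min) / y_max, varying one input across its range
    lo, hi = dict(base), dict(base)
    lo[name], hi[name] = bounds
    y_lo, y_hi = sorted([f(**lo), f(**hi)])
    return (y_hi - y_lo) / y_hi

base = dict(duration=5.0, resistivity=100.0, chloride=1.0)  # placeholder values
ranges = dict(duration=(1.0, 20.0), resistivity=(50.0, 500.0), chloride=(0.2, 3.0))
si = {name: sensitivity_index(corrosion_rate, base, name, b)
      for name, b in ranges.items()}
```

An SI near 1 flags an input whose plausible range swings the prediction almost entirely, which is how duration, resistivity and chloride content emerge as dominant in the paper.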

  18. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    Science.gov (United States)

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)

  20. An Animal Model of Trichloroethylene-Induced Skin Sensitization in BALB/c Mice.

    Science.gov (United States)

    Wang, Hui; Zhang, Jia-xiang; Li, Shu-long; Wang, Feng; Zha, Wan-sheng; Shen, Tong; Wu, Changhao; Zhu, Qi-xing

    2015-01-01

    Trichloroethylene (TCE) is a major occupational hazard and environmental contaminant that can cause multisystem disorders in the form of occupational medicamentosa-like dermatitis. Development of dermatitis involves several proinflammatory cytokines, but their role in TCE-mediated dermatitis has not been examined in a well-defined experimental model. In addition, few animal models of TCE sensitization are available, and the current guinea pig model has apparent limitations. This study aimed to establish a model of TCE-induced skin sensitization in BALB/c mice and to examine the role of several key inflammatory cytokines in TCE sensitization. The sensitization rate of the dorsal-painted group was 38.3%. Skin edema and erythema occurred in the TCE-sensitized groups, as in the 2,4-dinitrochlorobenzene (DNCB) positive control. The TCE sensitization-positive (dermatitis [+]) group exhibited increased epidermal thickness, inflammatory cell infiltration, swelling, and necrosis in the dermis and around hair follicles, whereas the ear-painted group did not show these histological changes. The concentrations of serum proinflammatory cytokines, including tumor necrosis factor (TNF)-α, interferon (IFN)-γ, and interleukin (IL)-2, were significantly increased in the 24-, 48-, and 72-hour dermatitis [+] groups treated with TCE and peaked at 72 hours. Deposition of TNF-α, IFN-γ, and IL-2 in the skin tissue was also revealed by immunohistochemistry. We have established a new animal model of skin sensitization induced by repeated TCE stimulation, and we provide the first evidence that key proinflammatory cytokines including TNF-α, IFN-γ, and IL-2 play an important role in the process of TCE sensitization. © The Author(s) 2015.

  1. Sensitivity analysis of specific activity model parameters for environmental transport of 3H and dose assessment

    International Nuclear Information System (INIS)

    Rout, S.; Mishra, D.G.; Ravi, P.M.; Tripathi, R.M.

    2016-01-01

    Tritium is one of the radionuclides likely to be released to the environment from Pressurized Heavy Water Reactors. Environmental models are extensively used to quantify the complex environmental transport processes of radionuclides and to assess the impact on the environment. Model parameters exerting a significant influence on model results are identified through a sensitivity analysis (SA). SA is the study of how the variation (uncertainty) in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input parameters. This study was designed to identify the sensitive parameters of the specific activity model (TRS 1616, IAEA) for environmental transfer of 3H following release to air and then to vegetation and animal products. The model includes parameters such as the air-to-soil transfer factor (CRs), the tissue free-water 3H to organically bound 3H ratio (Rp), relative humidity (RH), WCP (fractional water content) and WEQp (water equivalent factor); any change in these parameters leads to a change in the 3H level in vegetation and animal products, and consequently in the ingestion dose. All these parameters are functions of climate and/or plant, which change with time, space and species. Estimating these parameters each time is time consuming and requires sophisticated instrumentation. It is therefore necessary to identify the sensitive parameters and freeze the values of the least sensitive ones at constant values, for more accurate estimation of the 3H dose in a short time for routine assessment.
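The bookkeeping behind a specific-activity model of this kind can be sketched as follows. The structure (tissue free-water 3H tracking air-moisture HTO via relative humidity, an organically bound fraction via WEQp and Rp, and an ingestion dose from intake) follows the parameters named in the abstract, but every numeric value below is illustrative, not a TRS 1616 default:

```python
# Illustrative parameter values (NOT TRS 1616 defaults)
RH = 0.7             # relative humidity (fraction)
C_AIR_WATER = 100.0  # HTO concentration in air moisture, Bq/L
WCP = 0.8            # fractional water content of the plant product
WEQP = 0.6           # water equivalent factor of dry matter, L/kg
RP = 0.7             # organically bound to tissue free-water 3H ratio
DCF_HTO = 1.8e-11    # ingestion dose coefficient, Sv/Bq
INTAKE = 200.0       # annual consumption, kg/yr

tfwt = RH * C_AIR_WATER                              # tissue free-water 3H, Bq/L
c_plant = WCP * tfwt + (1 - WCP) * WEQP * RP * tfwt  # Bq/kg fresh weight
dose = INTAKE * c_plant * DCF_HTO                    # ingestion dose, Sv/yr
```

Because c_plant is proportional to every factor in its two terms, the SA described above amounts to ranking how strongly each factor's real-world variability propagates into the dose.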

  2. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja

    2015-01-01

    Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  3. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used to evaluate the level of microbial pollution. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce microbial pollution in ecosystems and water sources, and thus help to predict the risk of food- and waterborne disease. In this study we performed a sensitivity analysis for the KINEROS/STWIR model, developed to predict FIO transport from manured fields to other fields and water bodies, in order to identify the input variables that control transport uncertainty. The distributions of model input parameters were set to encompass values found in three years of experiments at the USDA-ARS OPE3 experimental site in Beltsville and in publicly available information. Sobol' indices and complementary regression trees were used to perform the global sensitivity analysis of the model and to explore the interactions between model input parameters affecting the proportion of FIOs removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence on model behavior, whereas soil and manure properties ranked lower. The field length had only a moderate effect on the sensitivity of the model output to the model inputs. Among the manure-related properties, the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. This underscores the need to better characterize FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, the results indicate the opportunity of obtaining large-scale estimates of FIO

  4. Sensitivity and Interaction Analysis Based on Sobol’ Method and Its Application in a Distributed Flood Forecasting Model

    Directory of Open Access Journals (Sweden)

    Hui Wan

    2015-06-01

    Full Text Available Sensitivity analysis is a fundamental approach to identify the most significant and sensitive parameters, helping us to understand complex hydrological models, particularly time-consuming distributed flood forecasting models based on complicated theory with numerous parameters. Based on the Sobol’ method, this study compared the sensitivity and interactions of distributed flood forecasting model parameters with and without accounting for correlation. Four objective functions: (1) Nash–Sutcliffe efficiency (ENS); (2) water balance coefficient (WB); (3) peak discharge efficiency (EP); and (4) time-to-peak efficiency (ETP) were implemented in the Liuxihe model with hourly rainfall-runoff data collected in the Nanhua Creek catchment, Pearl River, China. Results for the sensitivity and interaction analysis were also contrasted among small, medium, and large flood magnitudes. The results demonstrated that the choice of objective function had no effect on the sensitivity classification, while it had great influence on the sensitivity ranking for both uncorrelated and correlated cases. The Liuxihe model behaved and responded uniquely to various flood conditions. The results also indicated that pairwise parameter interactions made a non-negligible contribution to the model output variance. Parameters with high first- or total-order sensitivity indices presented correspondingly high second-order sensitivity indices and correlation coefficients with other parameters. Without considering parameter correlations, the variance contributions of highly sensitive parameters might be underestimated and those of normally sensitive parameters might be overestimated. This research lays a foundation for improving the understanding of complex model behavior.
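The first of the four objective functions, the Nash–Sutcliffe efficiency, is easy to state concretely; the discharge series below is invented for illustration:

```python
import numpy as np

def nse(observed, simulated):
    # Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values below 0 mean
    # the simulation predicts worse than simply using the observed mean
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - ((obs - sim)**2).sum() / ((obs - obs.mean())**2).sum()

observed = [10.0, 30.0, 80.0, 45.0, 20.0]  # invented hourly discharges, m^3/s
good = nse(observed, [12.0, 28.0, 75.0, 48.0, 22.0])
```

Because the squared-error numerator weights large flows heavily, NSE and the peak-oriented objectives (EP, ETP) can rank the same parameter set differently, which is consistent with the ranking differences reported above.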

  5. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  6. A diagnostic model for the detection of sensitization to wheat allergens was developed and validated in bakery workers

    NARCIS (Netherlands)

    Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G.; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert

    Objectives: To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. Study Design and Setting: The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items (candidate

  7. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%), some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
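
    The one-at-a-time perturbation logic behind an index such as the LSI can be sketched in a few lines. Everything below (the toy force curve, parameter names and values) is an illustrative assumption, not the musculoskeletal model from the study:

```python
import numpy as np

# Toy stand-in for one muscle-tendon force curve over a normalized gait cycle;
# the indices in the abstract come from full musculoskeletal simulations.
def mt_force(t, f_max, l_opt, l_slack):
    return f_max * np.exp(-((t - l_opt) / l_slack) ** 2)

def local_sensitivity(param, base, delta=0.1):
    """Average percent change in force for a +/-10% one-at-a-time perturbation."""
    t = np.linspace(0.0, 1.0, 101)
    f_ref = mt_force(t, **base)
    changes = []
    for sign in (+1.0, -1.0):
        p = dict(base, **{param: base[param] * (1.0 + sign * delta)})
        changes.append(np.mean(np.abs(mt_force(t, **p) - f_ref))
                       / np.mean(np.abs(f_ref)) * 100.0)
    return float(np.mean(changes))

base = {"f_max": 1000.0, "l_opt": 0.45, "l_slack": 0.25}  # illustrative values
for name in base:
    print(name, round(local_sensitivity(name, base), 2))
```

    Because the toy force is linear in `f_max`, a 10% perturbation of that parameter yields an index of exactly 10%; the shape parameters give different values, mirroring how the study's indices depend on each parameter's role.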

  8. Sensitivity of using blunt and sharp crack models in elastic-plastic fracture mechanics

    International Nuclear Information System (INIS)

    Pan, Y.C.; Kennedy, J.M.; Marchertas, A.H.

    1985-01-01

    J-integral values are calculated for both the blunt (smeared) crack and the sharp (discrete) crack models in elastic-plastic fracture mechanics problems involving metallic materials. A sensitivity study is performed to show the relative strengths and weaknesses of the two cracking models. It is concluded that the blunt crack model is less dependent on the orientation of the mesh. For the mesh which is in line with the crack direction, however, the sharp crack model is less sensitive to the mesh size. Both models yield reasonable results for a properly discretized finite-element mesh. A subcycling technique is used in this study in the explicit integration scheme so that large time steps can be used for the coarse elements away from the crack tip. The savings of computation time by this technique are reported. 6 refs., 9 figs

  9. Computational modeling and sensitivity in uniform DT burn

    International Nuclear Information System (INIS)

    Hansen, Jon; Hryniw, Natalia; Kesler, Leigh A.; Li, Frank; Vold, Erik

    2010-01-01

    Understanding deuterium-tritium (DT) fusion is essential to achieving ignition in inertial confinement fusion. A burning DT plasma in a three temperature (3T) approximation and uniform in space is modeled as a system of five non-linear coupled ODEs. Special focus is given to the effects of Compton coupling, Planck opacity, and electron-ion coupling terms. Semi-implicit differencing is used to solve the system of equations. Time step size is varied to examine the stability and convergence of each solution. Data from NDI, SESAME, and TOPS databases are extracted to create analytic fits for the reaction rate parameter, the Planck opacity, and the coupling frequencies of the plasma temperatures. The impact of different high order fits to NDI data (the reaction rate parameter) and of using TOPS versus SESAME opacity data is explored, and the sensitivity to several physics parameters in the coupling terms is also examined. The base model recovers the accepted 3T results for the temperature and burn histories. The Compton coupling is found to have a significant impact on the results. Varying a coefficient of this term shows that the model results can give reasonably good agreement with the peak temperatures reported in multi-group results as well as the accepted 3T results. The base model assumes a molar density of 1 mol/cm³, as well as a 5 keV initial temperature for all three temperatures. Different initial conditions are explored as well. Initial temperatures are set to 1 and 3 keV, the ratio of D to T is varied (2 and 3 as opposed to 1 in the base model), and densities are set to 10 mol/cm³ and 100 mol/cm³. Again varying the Compton coefficient, the ion temperature results in the higher density case are in reasonable agreement with a recently published kinetic model.

  10. Computational modeling and sensitivity in uniform DT burn

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jon [Los Alamos National Laboratory; Hryniw, Natalia [Los Alamos National Laboratory; Kesler, Leigh A [Los Alamos National Laboratory; Li, Frank [Los Alamos National Laboratory; Vold, Erik [Los Alamos National Laboratory

    2010-01-01

    Understanding deuterium-tritium (DT) fusion is essential to achieving ignition in inertial confinement fusion. A burning DT plasma in a three temperature (3T) approximation and uniform in space is modeled as a system of five non-linear coupled ODEs. Special focus is given to the effects of Compton coupling, Planck opacity, and electron-ion coupling terms. Semi-implicit differencing is used to solve the system of equations. Time step size is varied to examine the stability and convergence of each solution. Data from NDI, SESAME, and TOPS databases are extracted to create analytic fits for the reaction rate parameter, the Planck opacity, and the coupling frequencies of the plasma temperatures. The impact of different high order fits to NDI data (the reaction rate parameter) and of using TOPS versus SESAME opacity data is explored, and the sensitivity to several physics parameters in the coupling terms is also examined. The base model recovers the accepted 3T results for the temperature and burn histories. The Compton coupling is found to have a significant impact on the results. Varying a coefficient of this term shows that the model results can give reasonably good agreement with the peak temperatures reported in multi-group results as well as the accepted 3T results. The base model assumes a molar density of 1 mol/cm³, as well as a 5 keV initial temperature for all three temperatures. Different initial conditions are explored as well. Initial temperatures are set to 1 and 3 keV, the ratio of D to T is varied (2 and 3 as opposed to 1 in the base model), and densities are set to 10 mol/cm³ and 100 mol/cm³. Again varying the Compton coefficient, the ion temperature results in the higher density case are in reasonable agreement with a recently published kinetic model.
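
    The semi-implicit differencing mentioned in these two records can be illustrated on a reduced two-temperature exchange problem (not the full five-equation 3T burn system): the temperature-dependent coupling rate is frozen at the old time level, while the linear exchange terms are taken at the new time level, which keeps larger time steps stable. The coupling law below is a hypothetical stand-in:

```python
import numpy as np

# Illustrative electron-ion relaxation: dTe/dt = nu*(Ti - Te), dTi/dt = nu*(Te - Ti).
# Semi-implicit step: evaluate the nonlinear rate nu at the old temperatures,
# then solve the resulting linear 2x2 system for the new temperatures.
def semi_implicit_step(te, ti, dt):
    nu = 5.0 * (0.5 * (te + ti)) ** -1.5          # hypothetical T^-3/2 coupling rate
    a = np.array([[1.0 + nu * dt, -nu * dt],
                  [-nu * dt, 1.0 + nu * dt]])     # (I - dt*J) with frozen nu
    return np.linalg.solve(a, np.array([te, ti]))

te, ti, dt = 5.0, 1.0, 0.05                       # keV; illustrative values
for _ in range(200):
    te, ti = semi_implicit_step(te, ti, dt)
print(round(te, 3), round(ti, 3))                 # both relax toward 3.0 keV
```

    Note that the implicit treatment of the exchange terms conserves te + ti exactly, step by step, which a fully explicit update would only do approximately for large dt.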

  11. Use of FFTBM by signal mirroring for sensitivity study

    International Nuclear Information System (INIS)

    Prošek, Andrej; Leskovar, Matjaž

    2015-01-01

    Highlights: • The fast Fourier transform based tool was applied for a sensitivity analysis. • The calculations of the BEMUSE programme LOFT L2-5 test were used in the study. • The most influential input parameters were identified and their influence quantified. • It was shown that FFTBM-SM is best suited for conducting quick sensitivity analyses. - Abstract: The state of the art best estimate safety analyses for nuclear reactors use best estimate thermal–hydraulic computer codes with an evaluation of the uncertainties to compare the results of calculations with acceptance criteria. The uncertainty quantification is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to the uncertainty is determined. The objective of the performed study is to demonstrate that the fast Fourier transform based method by signal mirroring (FFTBM-SM) can be used very efficiently for the sensitivity analysis when one parameter is varied at a time. The sensitivity study was conducted for the LOFT L2-5 test, which simulates the large break loss of coolant accident. The LOFT L2-5 test was analysed in the frame of the Organisation for Economic Co-operation and Development (OECD) Best Estimate Methods – Uncertainty and Sensitivity Evaluation (BEMUSE) programme, where each of the 14 participants performed a reference calculation and up to 15 sensitivity runs of the test. The results show that with the FFTBM-SM the analyst can get a time-dependent picture of the input parameter influence on the results. The results suggest that FFTBM-SM is especially appropriate for a sensitivity analysis in which several calculations need to be compared.
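
    The core of the FFTBM figure of merit, with signal mirroring to suppress the edge discontinuity, can be sketched as follows. The traces and the exact normalization are illustrative assumptions, not the tool used in the study:

```python
import numpy as np

def mirror(signal):
    # Append the time-reversed signal so the extended signal has no edge jump,
    # which is the "signal mirroring" idea behind FFTBM-SM.
    return np.concatenate([signal, signal[::-1]])

def average_amplitude(calc, exp):
    """FFTBM-style figure of merit: AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.
    Lower AA means better agreement between calculation and experiment."""
    err = np.fft.rfft(mirror(calc - exp))
    ref = np.fft.rfft(mirror(exp))
    return np.abs(err).sum() / np.abs(ref).sum()

t = np.linspace(0.0, 10.0, 512)
exp = np.exp(-0.3 * t)        # "experimental" trace (illustrative)
run_a = np.exp(-0.32 * t)     # sensitivity run close to the experiment
run_b = np.exp(-0.5 * t)      # run with a strongly varied input parameter
print(average_amplitude(run_a, exp) < average_amplitude(run_b, exp))  # True
```

    Ranking sensitivity runs by such an amplitude measure is what lets the analyst quantify, per time window, how much each varied input parameter degrades agreement with the reference.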

  12. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
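
    The first (screening) step of such a two-step approach can be illustrated with a minimal Morris elementary-effects sketch; the toy model and settings below are assumptions, not the vascular-access model from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the model under study: one dominant parameter,
# one moderate, one inactive (the screening should recover this ranking).
def model(x):
    return 5.0 * x[0] + np.sin(np.pi * x[1])

def morris_mu_star(f, d, r=50, levels=4):
    """Mean absolute elementary effect (mu*) per parameter (Morris screening)."""
    delta = levels / (2.0 * (levels - 1.0))           # standard Morris step
    starts = np.arange(levels // 2) / (levels - 1.0)  # grid points with room for +delta
    mu = np.zeros(d)
    for _ in range(r):
        x = rng.choice(starts, size=d)
        y0 = f(x)
        for i in rng.permutation(d):                  # one-at-a-time trajectory
            x[i] += delta
            y1 = f(x)
            mu[i] += abs(y1 - y0) / delta
            y0 = y1
    return mu / r

mu_star = morris_mu_star(model, d=3)
print(np.round(mu_star, 2))   # x0 dominant, x2 inactive
```

    Parameters with small mu* would be fixed at nominal values, and only the surviving subset passed on to the quantitative gPCE-based variance decomposition, which is what makes the two-step approach cheap for models with many parameters.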

  13. Intercultural Sensitivity through Short-Term Study Abroad

    Science.gov (United States)

    Bloom, Melanie; Miranda, Arturo

    2015-01-01

    One of the foremost-cited rationales for study abroad during college is the development of a global perspective and intercultural sensitivity. Although this argument is mentioned frequently in promotional materials for study abroad, it has not yet been backed by research based on the outcomes of students' study abroad experiences. As more…

  14. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

    The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow-rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  15. Model of urban water management towards water sensitive city: a literature review

    Science.gov (United States)

    Maftuhah, D. I.; Anityasari, M.; Sholihah, M.

    2018-04-01

    Nowadays, many cities face complex issues related to climate change and to social, economic, cultural, and environmental problems, especially urban water. In other words, the city must struggle with the challenge of ensuring its sustainability in all aspects. This research focuses on how to ensure a city's sustainability and resilience in urban water management. Much research has been conducted not only on urban water management but also on sustainability itself. Moreover, the focus of water sustainability has shifted from urban water management towards the water sensitive city. This transition requires attention to comprehensive aspects such as social and institutional dynamics, technical innovation, and local context. This study reviews the literature on models of urban water management and the transition towards water sensitivity, and discusses both the models and the transition towards the water sensitive city. Research findings suggest that many different models have been developed for urban water management, but they are not yet comprehensive, and only a few studies discuss the transition towards the water sensitive and resilient city. Identifying the drawbacks of previous research makes it possible to define and fill the gap addressed by this study. The paper therefore contributes a general framework for urban water management modelling studies.

  16. Ethical problems and moral sensitivity in physiotherapy: a descriptive study.

    Science.gov (United States)

    Kulju, Kati; Suhonen, Riitta; Leino-Kilpi, Helena

    2013-08-01

    This study identified and described ethical problems encountered by physiotherapists in their practice and physiotherapists' moral sensitivity in ethical situations. A questionnaire-based survey was constructed to identify ethical problems, and the Moral Sensitivity Questionnaire Revised version was used to measure moral sensitivity. Physiotherapists (n = 116) working in public health services responded to the questionnaire. Based on the results, most of the physiotherapists encounter ethical problems weekly. These problems mainly concern financial considerations, equality and justice, professionalism, unethical conduct of physiotherapists or other professions, and patients' self-determination. The dimension of moral strength was emphasised in physiotherapists' self-evaluations of their moral sensitivity. In conclusion, ethical problems occur not only at the individual level but also at the organisational and societal levels. Physiotherapists seem to have the moral strength to speak on behalf of the patient. Scarce resources make them feel insufficient, but much could still be done to provide quality care in co-operation with other health-care professionals.

  17. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

    The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates.

  18. Uncertainty and sensitivity analysis of biokinetic models for radiopharmaceuticals used in nuclear medicine

    International Nuclear Information System (INIS)

    Li, W. B.; Hoeschen, C.

    2010-01-01

    Mathematical models for kinetics of radiopharmaceuticals in humans were developed and are used to estimate the radiation absorbed dose for patients in nuclear medicine by the International Commission on Radiological Protection and the Medical Internal Radiation Dose (MIRD) Committee. However, due to the fact that the residence times used were derived from different subjects, partially even with different ethnic backgrounds, a large variation in the model parameters propagates to a high uncertainty of the dose estimation. In this work, a method was developed for analysing the uncertainty and sensitivity of biokinetic models that are used to calculate the residence times. The biokinetic model of 18F-FDG (FDG) developed by the MIRD Committee was analysed by this method. The sources of uncertainty of all model parameters were evaluated based on the experiments. The Latin hypercube sampling technique was used to sample the parameters for model input. Kinetic modelling of FDG in humans was performed. Sensitivity of model parameters was indicated by combining the model input and output, using regression and partial correlation analysis. The transfer rate parameter from plasma to the fast component of other tissue is the parameter with the greatest influence on the residence time of plasma. Optimisation of biokinetic data acquisition in clinical practice by exploitation of the sensitivity of model parameters obtained in this study is discussed. (authors)
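
    The Latin hypercube sampling and regression-based ranking described here can be sketched generically; the surrogate output and its coefficients below are illustrative assumptions, not the FDG biokinetic model:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """n stratified samples in [0,1]^d: each column visits every 1/n stratum once."""
    strata = np.tile(np.arange(n), (d, 1))      # shape (d, n), strata 0..n-1
    perms = rng.permuted(strata, axis=1).T      # shuffle strata per dimension
    return (perms + rng.random((n, d))) / n     # jitter within each stratum

n, d = 200, 3
x = latin_hypercube(n, d)

# Hypothetical surrogate for a biokinetic output (e.g. a residence time):
# the first transfer-rate parameter dominates, the third is nearly inactive.
y = 3.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2] + rng.normal(0.0, 0.05, n)

# Standardized regression coefficients as a simple sensitivity ranking,
# in the spirit of the regression/partial-correlation analysis above.
xs = (x - x.mean(0)) / x.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
print(np.round(src, 2))   # first parameter ranks highest
```

    Stratifying each input dimension lets a small design cover the parameter ranges evenly, which is why Latin hypercube sampling is preferred over plain random sampling for this kind of sensitivity screening.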

  19. A sensitive venous bleeding model in haemophilia A mice

    DEFF Research Database (Denmark)

    Pastoft, Anne Engedahl; Lykkesfeldt, Jens; Ezban, M.

    2012-01-01

    Haemostatic effect of compounds for treating haemophilia can be evaluated in various bleeding models in haemophilic mice. However, the doses of factor VIII (FVIII) for normalizing bleeding used in some of these models are reported to be relatively high. The aim of this study was to establish a se...

  20. Application of sensitivity analysis in nuclear power plant probabilistic risk assessment studies

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1986-01-01

    Nuclear power plant probabilistic risk assessment (PRA) studies utilise many models, simplifications and assumptions. Also subjective judgement is widely applied due to lack of actual data. This results in significant uncertainties. Three general types of uncertainties have been identified: (1) parameter uncertainties, (2) modelling uncertainties, and (3) completeness uncertainties. The significance of some of the modelling assumptions and simplifications cannot be investigated by assignment and propagation of parameter uncertainties. In such cases the impact of different options may (and should) be studied by performing sensitivity analyses, which concentrate on the most critical elements. This paper describes several items suitable for close examination by means of application of sensitivity analysis, when performing a level 1 PRA. Sensitivity analyses are performed with respect to: (1) boundary conditions (success criteria, credit for non-safety systems, degree of detail in modelling of support functions), (2) operator actions, (3) treatment of common cause failures (CCFs). The items of main interest are continuously identified in the course of performing a PRA study, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PRA study. The critical importance of modelling assumptions is also demonstrated by implementation of some modelling features from another level 1 PRA into the reference model. It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (author)

  1. Short ensembles: An Efficient Method for Discerning Climate-relevant Sensitivities in Atmospheric General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun

    2014-09-08

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.

  2. Revisiting the radionuclide atmospheric dispersion event of the Chernobyl disaster - modelling sensitivity and data assimilation

    Science.gov (United States)

    Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor

    2013-04-01

    A sensitivity study of the numerical model, as well as an inverse modelling approach applied to the atmospheric dispersion issues after the Chernobyl disaster, are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137 and caesium-134 were considered. The impact of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) was investigated for the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics comparing the modelled activity concentrations in air with the field measurements are clearly improved when using a reconstructed source term. As regards the ground-deposited concentrations, an improvement can only be seen in the case of a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the case for using a reconstructed source term instead of the usual estimated one.
A more detailed parameterization of the deposition process seems also to be

  3. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.; Navarro, Marí a; Merks, Roeland M. H.; Blom, Joke G.

    2015-01-01

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand

  4. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    Science.gov (United States)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.

  5. Application of Uncertainty and Sensitivity Analysis to a Kinetic Model for Enzymatic Biodiesel Production

    DEFF Research Database (Denmark)

    Price, Jason Anthony; Nordblad, Mathias; Woodley, John

    2014-01-01

    This paper demonstrates the added benefits of using uncertainty and sensitivity analysis in the kinetics of enzymatic biodiesel production. For this study, a kinetic model by Fedosov and co-workers is used. For the uncertainty analysis the Monte Carlo procedure was used to statistically quantify...

  6. Investigations of sensitivity and resolution of ECG and MCG in a realistically shaped thorax model

    International Nuclear Information System (INIS)

    Mäntynen, Ville; Konttila, Teijo; Stenroos, Matti

    2014-01-01

    Solving the inverse problem of electrocardiography (ECG) and magnetocardiography (MCG) is often referred to as cardiac source imaging. Spatial properties of ECG and MCG as imaging systems are, however, not well known. In this modelling study, we investigate the sensitivity and point-spread function (PSF) of ECG, MCG, and combined ECG+MCG as a function of source position and orientation, globally around the ventricles: signal topographies are modelled using a realistically-shaped volume conductor model, and the inverse problem is solved using a distributed source model and linear source estimation with minimal use of prior information. The results show that the sensitivity depends not only on the modality but also on the location and orientation of the source and that the sensitivity distribution is clearly reflected in the PSF. MCG can better characterize tangential anterior sources (with respect to the heart surface), while ECG excels with normally-oriented and posterior sources. Compared to either modality used alone, the sensitivity of combined ECG+MCG is less dependent on source orientation per source location, leading to better source estimates. Thus, for maximal sensitivity and optimal source estimation, the electric and magnetic measurements should be combined. (paper)

  7. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d×10³) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
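
    For contrast with the emulator approach, the reference Monte Carlo (Saltelli-style) estimator of first-order Sobol indices can be sketched on a toy function whose indices are known analytically. The function and sample size are illustrative assumptions, not the 1D vascular model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model with analytic first-order indices: Var = 16/12 + 1/12, so
# S1 = 16/17, S2 = 1/17, S3 = 0 (the third input is inactive).
def f(x):
    return 4.0 * x[:, 0] + x[:, 1]

def sobol_first_order(f, d, n=100_000):
    """Plain Monte Carlo (Saltelli-style) estimate of first-order Sobol indices."""
    a = rng.random((n, d))
    b = rng.random((n, d))
    fa, fb = f(a), f(b)
    var = np.var(np.concatenate([fa, fb]))
    s = np.empty(d)
    for i in range(d):
        ab = a.copy()
        ab[:, i] = b[:, i]                 # matrix A with column i taken from B
        s[i] = np.mean(fb * (f(ab) - fa)) / var
    return s

s1 = sobol_first_order(f, d=3)
print(np.round(s1, 2))   # close to the analytic values 16/17, 1/17, 0
```

    Each index costs n extra model runs here, which illustrates why replacing the mechanistic model with a trained Gaussian process emulator cuts the cost so dramatically.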

  8. Seismic hazard analysis. Application of methodology, results, and sensitivity studies

    International Nuclear Information System (INIS)

    Bernreuter, D.L.

    1981-10-01

    As part of the Site Specific Spectra Project, this report seeks to identify the sources of and minimize uncertainty in estimates of seismic hazards in the Eastern United States. Findings are being used by the Nuclear Regulatory Commission to develop a synthesis among various methods that can be used in evaluating seismic hazard at the various plants in the Eastern United States. In this volume, one of a five-volume series, we discuss the application of the probabilistic approach using expert opinion. The seismic hazard is developed at nine sites in the Central and Northeastern United States, and both individual experts' and synthesis results are obtained. We also discuss and evaluate the ground motion models used to develop the seismic hazard at the various sites, analyzing extensive sensitivity studies to determine the important parameters and the significance of uncertainty in them. Comparisons are made between probabilistic and real spectra for a number of Eastern earthquakes. The uncertainty in the real spectra is examined as a function of the key earthquake source parameters. In our opinion, the single most important conclusion of this study is that the use of expert opinion to supplement the sparse data available on Eastern United States earthquakes is a viable approach for determining estimated seismic hazard in this region of the country. (author)

  9. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  10. Short ensembles: an efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Directory of Open Access Journals (Sweden)

    H. Wan

    2014-09-01

    Full Text Available This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of
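The core idea of the short-ensemble strategy — replacing one long serial integration with many independent short runs — can be illustrated with a toy autocorrelated process. This is purely schematic: the AR(1) "model", the 0.5 mean shift, and all run lengths below are invented for illustration and have nothing to do with CAM5.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy AR(1) "climate variable" with autocorrelation phi; a parameter
# perturbation shifts its long-term mean by `signal` (all values hypothetical).
def run(n_steps, signal=0.0, phi=0.9):
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = phi * x[t - 1] + rng.normal() + (1 - phi) * signal
    return x

signal = 0.5

# Traditional approach: one long serial integration per configuration
long_diff = run(20000, signal).mean() - run(20000, 0.0).mean()

# Short-ensemble approach: many independent short runs, each of which
# could be integrated in parallel on a real machine
ens_diff = np.mean([run(200, signal).mean() - run(200, 0.0).mean()
                    for _ in range(50)])

print(long_diff, ens_diff)  # both close to the imposed 0.5 shift
```

The ensemble uses half as many total steps here (50 × 200 vs. 20000) and its members are embarrassingly parallel, which is the turnaround advantage the abstract emphasizes.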

  11. Reproducibility of the heat/capsaicin skin sensitization model in healthy volunteers

    Directory of Open Access Journals (Sweden)

    Cavallone LF

    2013-11-01

    Full Text Available Laura F Cavallone,1 Karen Frey,1 Michael C Montana,1 Jeremy Joyal,1 Karen J Regina,1 Karin L Petersen,2 Robert W Gereau IV1 1Department of Anesthesiology, Washington University in St Louis, School of Medicine, St Louis, MO, USA; 2California Pacific Medical Center Research Institute, San Francisco, CA, USA. Introduction: Heat/capsaicin skin sensitization is a well-characterized human experimental model to induce hyperalgesia and allodynia. Using this model, gabapentin, among other drugs, was shown to significantly reduce cutaneous hyperalgesia compared to placebo. Since the larger thermal probes used in the original studies to produce heat sensitization are now commercially unavailable, we decided to assess whether previous findings could be replicated with a currently available smaller probe (heated area 9 cm2 versus 12.5–15.7 cm2). Study design and methods: After Institutional Review Board approval, 15 adult healthy volunteers participated in two study sessions, scheduled 1 week apart (Part A). In both sessions, subjects were exposed to the heat/capsaicin cutaneous sensitization model. Areas of hypersensitivity to brush stroke and von Frey (VF) filament stimulation were measured at baseline and after rekindling of skin sensitization. Another group of 15 volunteers was exposed to an identical schedule and set of sensitization procedures, but, in each session, received either gabapentin or placebo (Part B). Results: Unlike previous reports, a similar reduction of areas of hyperalgesia was observed in all groups/sessions. Fading of areas of hyperalgesia over time was observed in Part A. In Part B, there was no difference in area reduction after gabapentin compared to placebo. Conclusion: When using smaller thermal probes than originally proposed, modifications of other parameters of sensitization and/or rekindling process may be needed to allow the heat/capsaicin sensitization protocol to be used as initially intended. Standardization and validation of

  12. Aspartame sensitivity? A double blind randomised crossover study.

    Directory of Open Access Journals (Sweden)

    Thozhukat Sathyapalan

    Full Text Available Aspartame is a commonly used intense artificial sweetener, being approximately 200 times sweeter than sucrose. There have been concerns over aspartame since approval in the 1980s, including a large anecdotal database reporting severe symptoms. The objective of this study was to compare the acute symptom effects of aspartame to a control preparation. This was a double-blind randomized crossover study conducted in a clinical research unit in the United Kingdom. Forty-eight individuals who had self-reported sensitivity to aspartame were compared to 48 age- and gender-matched aspartame non-sensitive individuals. They were given aspartame-containing (100 mg) or control snack bars randomly, at least 7 days apart. The main outcome measures were acute effects of aspartame measured using repeated ratings of 14 symptoms, biochemistry and metabonomics. Aspartame-sensitive and non-sensitive participants differed psychologically at baseline in handling feelings and perceived stress. Sensitive participants had higher triglycerides (2.05 ± 1.44 vs. 1.26 ± 0.84 mmol/L; p value 0.008) and lower HDL-C (1.16 ± 0.34 vs. 1.35 ± 0.54 mmol/L; p value 0.04), reflected in 1H NMR serum analysis that showed differences in the baseline lipid content between the two groups. Urine metabonomic studies showed no significant differences. None of the rated symptoms differed between aspartame and control bars, or between sensitive and control participants. However, aspartame-sensitive participants rated more symptoms, particularly in the first test session, whether this was placebo or control. Aspartame and control bars affected GLP-1, GIP, tyrosine and phenylalanine levels equally in both aspartame-sensitive and non-sensitive subjects. Using a comprehensive battery of psychological tests, biochemistry and state-of-the-art metabonomics, there was no evidence of any acute adverse responses to aspartame. This independent study gives reassurance to both regulatory bodies and the public that

  13. Aspartame sensitivity? A double blind randomised crossover study.

    Science.gov (United States)

    Sathyapalan, Thozhukat; Thatcher, Natalie J; Hammersley, Richard; Rigby, Alan S; Courts, Fraser L; Pechlivanis, Alexandros; Gooderham, Nigel J; Holmes, Elaine; le Roux, Carel W; Atkin, Stephen L

    2015-01-01

    Aspartame is a commonly used intense artificial sweetener, being approximately 200 times sweeter than sucrose. There have been concerns over aspartame since approval in the 1980s, including a large anecdotal database reporting severe symptoms. The objective of this study was to compare the acute symptom effects of aspartame to a control preparation. This was a double-blind randomized crossover study conducted in a clinical research unit in the United Kingdom. Forty-eight individuals who had self-reported sensitivity to aspartame were compared to 48 age- and gender-matched aspartame non-sensitive individuals. They were given aspartame-containing (100 mg) or control snack bars randomly, at least 7 days apart. The main outcome measures were acute effects of aspartame measured using repeated ratings of 14 symptoms, biochemistry and metabonomics. Aspartame-sensitive and non-sensitive participants differed psychologically at baseline in handling feelings and perceived stress. Sensitive participants had higher triglycerides (2.05 ± 1.44 vs. 1.26 ± 0.84 mmol/L; p value 0.008) and lower HDL-C (1.16 ± 0.34 vs. 1.35 ± 0.54 mmol/L; p value 0.04), reflected in 1H NMR serum analysis that showed differences in the baseline lipid content between the two groups. Urine metabonomic studies showed no significant differences. None of the rated symptoms differed between aspartame and control bars, or between sensitive and control participants. However, aspartame-sensitive participants rated more symptoms, particularly in the first test session, whether this was placebo or control. Aspartame and control bars affected GLP-1, GIP, tyrosine and phenylalanine levels equally in both aspartame-sensitive and non-sensitive subjects. Using a comprehensive battery of psychological tests, biochemistry and state-of-the-art metabonomics, there was no evidence of any acute adverse responses to aspartame. This independent study gives reassurance to both regulatory bodies and the public that acute ingestion of

  14. Global sensitivity analysis in wastewater treatment plant model applications: Prioritizing sources of uncertainty

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Neumann, Marc B.

    2011-01-01

    This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R2 ...), providing insight into devising useful ways for reducing uncertainties in the plant performance. This information can help engineers design robust WWTP plants.
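As a rough illustration of the sampling-based method, the sketch below computes standardized regression coefficients (SRCs) for a made-up three-input stand-in for the plant model; the squared SRCs approximate each input's share of the output variance, and the regression R2 indicates how well the linear decomposition holds. None of this is the Benchmark Simulation Model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the WWTP model: one effluent-quality output
# driven by three uncertain inputs plus noise (form and coefficients invented).
def plant(X):
    return 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2] \
        + rng.normal(0, 0.1, len(X))

# Monte Carlo sample of the uncertain inputs and corresponding outputs
N = 500
X = rng.normal(size=(N, 3))
y = plant(X)

# Standardized regression coefficients: regress standardized output on
# standardized inputs; SRC_i**2 approximates the variance share of input i,
# valid to the extent the model is linear (high R2).
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
r2 = 1 - np.sum((ys - Xs @ src) ** 2) / np.sum(ys ** 2)

print(src, r2)  # inputs ranked by |SRC|; R2 near 1 means nearly linear
```

Ranking inputs by |SRC| is exactly the prioritization step the abstract describes; a low R2 would signal that a variance-based method (e.g. Sobol indices) is needed instead.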

  15. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by

  16. Basic study on an energy conversion system using boiling two-phase flows of temperature-sensitive magnetic fluid. Theoretical analysis based on thermal nonequilibrium model and flow visualization using ultrasonic echo

    International Nuclear Information System (INIS)

    Ishimoto, Jun; Kamiyama, Shinichi; Okubo, Masaaki.

    1995-01-01

    Effects of magnetic field on the characteristics of boiling two-phase pipe flow of temperature-sensitive magnetic fluid are clarified in detail both theoretically and experimentally. Firstly, governing equations of two-phase magnetic fluid flow based on the thermal nonequilibrium two-fluid model are presented and numerically solved considering evaporation and condensation between gas- and liquid-phases. Next, behaviour of vapor bubbles is visualized with ultrasonic echo in the region of nonuniform magnetic field. This is recorded and processed with an image processor. As a result, the distributions of void fraction in the two-phase flow are obtained. Furthermore, detailed characteristics of the two-phase magnetic fluid flow are investigated using a small test loop of the new energy conversion system. From the numerical and experimental results, it is known that the precise control of the boiling two-phase flow and bubble generation is possible by using the nonuniform magnetic field effectively. These fundamental studies on the characteristics of two-phase magnetic fluid flow will contribute to the development of the new energy conversion system using a gas-liquid boiling two-phase flow of magnetic fluid. (author)

  17. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
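A minimal numpy sketch of the Anderson-Rubin idea on simulated data (the data-generating model and all coefficients are invented; this is not the authors' implementation): under H0 that the treatment effect equals beta0, the residual Y − beta0·D should be unrelated to a valid instrument Z, so the test simply regresses that residual on Z and checks whether the slope is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated data: unmeasured confounder U, a valid instrument Z
U = rng.normal(size=n)
Z = rng.normal(size=n)
D = 0.5 * Z + U + rng.normal(size=n)          # treatment, confounded by U
Y = 1.0 * D + 2.0 * U + rng.normal(size=n)    # true causal effect = 1.0

def anderson_rubin_stat(beta0):
    """t-statistic for the AR test of H0: effect = beta0.
    Regress Y - beta0*D on Z; under H0 (and a valid Z) the slope is zero,
    regardless of how weak the instrument is."""
    r = Y - beta0 * D
    zc = Z - Z.mean()
    slope = np.cov(Z, r, ddof=1)[0, 1] / np.var(Z, ddof=1)
    resid = r - r.mean() - slope * zc
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(zc ** 2))
    return slope / se

print(anderson_rubin_stat(1.0), anderson_rubin_stat(0.0))
```

The true value (beta0 = 1.0) should not be rejected while a wrong value (beta0 = 0.0) should be; the sensitivity analysis in the abstract generalizes this by allowing a bounded violation of the "slope is zero" assumption.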

  18. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice, the long-term variability of the model ice cover, and its sensitivity to some model parameters, as well as to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05 m to 0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness, with the clear sea ice and snow albedos decreased by 0.05 relative to the basic variant, shows an ice thickness reduction in the range from 0.2 m to 0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there was a reduction of ice thickness of up to 1 m; however, there is also an area of some increase of the ice layer, mostly in the range up to 0.2 m (Beaufort Sea). A 0.05 decrease of sea ice snow albedo leads to reduction of the average ice thickness by approximately 0.2 m, and this value depends only slightly on the season. In the following experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of sea ice by 2 W/sq. m in comparison with the base variant. The analysis demonstrates that the average ice thickness reduces in a range from 0.2 m to 0.35 m, with small seasonal changes of this value. The numerical experiment results show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters

  19. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    Science.gov (United States)

    Ely, D. Matthew

    2006-01-01

    routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  20. Sensitivity studies for the main r process: nuclear masses

    Directory of Open Access Journals (Sweden)

    A. Aprahamian

    2014-02-01

    Full Text Available The site of the rapid neutron capture process (r process is one of the open challenges in all of physics today. The r process is thought to be responsible for the creation of more than half of all elements beyond iron. The scientific challenges to understanding the origin of the heavy elements beyond iron lie in both the uncertainties associated with astrophysical conditions that are needed to allow an r process to occur and a vast lack of knowledge about the properties of nuclei far from stability. One way is to disentangle the nuclear and astrophysical components of the question. On the nuclear physics side, there is great global competition to access and measure the most exotic nuclei that existing facilities can reach, while simultaneously building new, more powerful accelerators to make even more exotic nuclei. On the astrophysics side, various astrophysical scenarios for the production of the heaviest elements have been proposed but open questions remain. This paper reports on a sensitivity study of the r process to determine the most crucial nuclear masses to measure using an r-process simulation code, several mass models (FRDM, Duflo-Zuker, and HFB-21, and three potential astrophysical scenarios.

  1. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibration efforts which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications which are in conformity with its purposes

  2. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
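The surrogate-based inversion loop can be sketched with stand-ins: a cheap polynomial fit plays the role of the BMARS emulator, and a basic Metropolis sampler replaces the full MCMC machinery. Everything below (the "high-fidelity" model, the prior, the noise level) is hypothetical and only illustrates the workflow of training once, then sampling against the surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "high-fidelity model": head at an observation well as a
# function of a log-transmissivity parameter theta (purely illustrative).
def highfid(theta):
    return 10.0 - 2.0 * theta + 0.3 * theta ** 2

# 1) Train a cheap surrogate on a handful of expensive runs
#    (a quadratic fit stands in for the BMARS emulator here).
t_train = np.linspace(-2, 2, 9)
surrogate = np.poly1d(np.polyfit(t_train, highfid(t_train), deg=2))

# 2) Metropolis MCMC against a noisy observed head, calling only the
#    surrogate inside the sampling loop (the expensive model is never run).
obs, sigma = highfid(0.7) + 0.1, 0.2
def logpost(th):
    # Gaussian likelihood around the surrogate prediction + N(0, 2) prior
    return -0.5 * ((obs - surrogate(th)) / sigma) ** 2 - 0.5 * (th / 2.0) ** 2

chain, th = [], 0.0
lp = logpost(th)
for _ in range(5000):
    prop = th + 0.3 * rng.normal()
    lp_prop = logpost(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        th, lp = prop, lp_prop
    chain.append(th)

print(np.mean(chain[1000:]))
```

The posterior mean lands near the value used to generate the observation; the 5000 posterior evaluations here cost 5000 surrogate calls instead of 5000 MODFLOW runs, which is the point of the abstract's framework.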

  3. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    Science.gov (United States)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

    Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF), to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty coupled with the differences in water level forecasts from varying bias correction methods are important for water management and long term planning in the Great Lakes region.

  4. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents the “contribution to sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have great potential to display sensitivity information graphically, given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatter-plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance, by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
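A minimal sketch of the contribution-to-sample-mean/variance idea, applied to the Ishigami function mentioned above (this is an illustration, not the paper's code, and the plot is reduced to printing a summary number): the runs are sorted by one input, and each run's share of the sample mean or variance is accumulated; departure of the curve from the diagonal flags an influential input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ishigami benchmark with its standard coefficients
def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

N = 4096
X = rng.uniform(-np.pi, np.pi, size=(N, 3))
y = ishigami(X)

def contribution_curves(xi, y):
    """Contribution to sample mean (CSM) and sample variance (CSV) curves:
    sort the runs by input xi and accumulate each run's fractional share."""
    order = np.argsort(xi)
    csm = np.cumsum(y[order]) / y.sum()
    csv = np.cumsum((y[order] - y.mean()) ** 2)
    return csm, csv / csv[-1]

diag = np.linspace(0, 1, N)
for i in range(3):
    csm, csv = contribution_curves(X[:, i], y)
    # max departure of the CSV curve from the diagonal, per input
    print(i, np.max(np.abs(csv - diag)))
```

Both curves reuse the same sample points as a scatter-plot, which is why the abstract notes they come at no extra computational cost.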

  5. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    International Nuclear Information System (INIS)

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo), in various environments that are expected in the vicinity of the waste package, by separate equations. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainties in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000 yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs

  6. Sensitivity study on hydraulic well testing inversion using simulated annealing

    International Nuclear Information System (INIS)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that the cluster size corresponding to about 20–40% of the practical range of the spatial correlation is optimal. Inversion results of the Raymond test site data are also presented and the practical range of spatial correlation is evaluated to be about 5–10 m from the optimal cluster size in the inversion.
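A toy version of the annealing inversion (not the CVA code itself; the lattice size, aperture list, forward model, and cooling schedule below are all invented for illustration): a discrete aperture field is perturbed one element at a time, and worse matches to the "observed" well responses are accepted with a temperature-dependent probability.

```python
import numpy as np

rng = np.random.default_rng(0)

apertures = np.array([0.1, 0.5, 1.0])        # discrete aperture choices
true = rng.choice(apertures, size=(8, 8))    # hidden "true" fracture field

def response(field):
    # Stand-in forward model: a cheap local smoothing of the aperture
    # field, sampled at four "observation wells" (purely schematic)
    s = field + np.roll(field, 1, 0) + np.roll(field, 1, 1)
    return s[::4, ::4].ravel()

obs = response(true)

# Simulated annealing over the discrete aperture field
field = rng.choice(apertures, size=(8, 8))
cost = np.sum((response(field) - obs) ** 2)
T = 1.0
for step in range(20000):
    trial = field.copy()
    i, j = rng.integers(0, 8, size=2)
    trial[i, j] = rng.choice(apertures)       # perturb one element
    c = np.sum((response(trial) - obs) ** 2)
    # Accept improvements always, worse moves with probability exp(-dc/T)
    if c < cost or rng.random() < np.exp((cost - c) / T):
        field, cost = trial, c
    T *= 0.9995                               # geometric cooling schedule

print(cost)  # final misfit
```

In the CVA method described above, whole clusters of elements are updated at once rather than single cells, which is exactly the cluster-size tuning knob the sensitivity study investigates.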

  7. Sensitivity study on hydraulic well testing inversion using simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi

    1997-11-01

    For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes an aperture of cluster of fracture elements, which are chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that the cluster size corresponding to about 20–40% of the practical range of the spatial correlation is optimal. Inversion results of the Raymond test site data are also presented and the practical range of spatial correlation is evaluated to be about 5–10 m from the optimal cluster size in the inversion.

  8. HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment

    Science.gov (United States)

    Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham

    2013-01-01

    Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.

  9. Sensitivity study of the monogroove with screen heat pipe design

    Science.gov (United States)

    Evans, Austin L.; Joyce, Martin

    1988-01-01

    This sensitivity study of design variable effects on the performance of a monogroove-with-screen heat pipe obtains performance curves of maximum heat-transfer rate vs. operating temperature by means of a computer code; performance projections for both 1-g and zero-g conditions are obtainable. The variables in question were the liquid and vapor channel design, the wall groove design, and the number of feed lines in the evaporator and condenser. The effect on performance of three different working fluids, namely ammonia, methanol, and water, was also determined. The greatest sensitivity was to changes in the liquid and vapor channel diameters.

  10. Design tradeoff studies and sensitivity analysis. Appendix B

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-25

    The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)

  11. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with a deviation of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results for the study area also proved that the sensitivity analysis is practicable for parameter adjustment, showed the model's adaptability to hydrology simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
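
The perturbation method used in this record amounts to a one-at-a-time relative sensitivity coefficient. A minimal sketch (generic, not the AnnAGNPS implementation; the model and parameter names below are placeholders):

```python
def relative_sensitivity(model, params, name, delta=0.1):
    """One-at-a-time perturbation sensitivity sketch.

    Perturb parameter `name` by a fraction +/- delta and return the
    relative (dimensionless) sensitivity index:
        S = (dY / Y0) / (dX / X0)
    computed with a central difference. `model` maps a parameter dict
    to a scalar output (e.g. runoff or sediment yield).
    """
    base = dict(params)
    y0 = model(base)
    up = dict(base)
    up[name] = base[name] * (1 + delta)
    dn = dict(base)
    dn[name] = base[name] * (1 - delta)
    # central difference of the relative output change over the
    # relative input change (which is 2*delta end to end)
    return ((model(up) - model(dn)) / y0) / (2 * delta)
```

For a power-law response Y ∝ X^k this index recovers the exponent k, which is a handy sanity check when ranking parameters.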

  12. Linear Array Ultrasonic Transducers: Sensitivity and Resolution Study

    International Nuclear Information System (INIS)

    Kramb, V.A.

    2005-01-01

    The University of Dayton Research Institute (UDRI) under contract by the US Air Force has designed and integrated a fully automated inspection system for the inspection of turbine engines that incorporates linear phased array ultrasonic transducers. Phased array transducers have been successfully implemented into weld and turbine blade root inspections where the defect types are well known and characterized. Embedded defects in aerospace turbine engine components are less well defined, however. In order to determine the applicability of linear arrays to aerospace inspections the sensitivity of array transducers to embedded defects in engine materials must be characterized. In addition, the implementation of array transducers into legacy inspection procedures must take into account any differences in sensitivity between the array transducer and that of the single element transducer currently used. This paper discusses preliminary results in a study that compares the sensitivity of linear array and conventional single element transducers to synthetic hard alpha defects in a titanium alloy

  13. Sensitivity study of voltages in distribution networks with respect to injected powers

    International Nuclear Information System (INIS)

    Tencio Alfaro, Ernie Fernando

    2013-01-01

    A voltage sensitivity study with respect to small changes in the active and reactive power of distributed generators (DG) is presented for an 11 kV radial system of 8 feeders with 75 buses, of which 22 buses have DG and 38 buses have loads. The sensitivities are obtained for 6 load models, 3 R/X ratios of the lines interconnecting the distributed system, 3 Thevenin equivalents, and conditions of high load with low generation and of low load with high DG output. The study determines which operating conditions of the system present the greatest voltage sensitivities. A description of the theory of load and motor modelling for electric power systems is developed, and, as the central theme, the several ways to obtain the voltage sensitivity matrix are explained. (author) [es

  14. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  15. Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet.

    Science.gov (United States)

    Camps-Bossacoma, Mariona; Pérez-Cano, Francisco J; Franch, Àngels; Castell, Margarida

    2017-01-01

    Increasing evidence suggests a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or a 10% cocoa diet. Faecal microbiota was analysed through a metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. Decreased proportions of bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.

  16. Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet

    Directory of Open Access Journals (Sweden)

    Mariona Camps-Bossacoma

    2017-01-01

    Full Text Available Increasing evidence suggests a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or a 10% cocoa diet. Faecal microbiota was analysed through a metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. Decreased proportions of bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.

  17. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that, in addition to vapour pressure, the model outputs had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes of degradation and uptake in the plant become more important. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation

    Directory of Open Access Journals (Sweden)

    Alexandre Bryan Heinemann

    2012-01-01

    Full Text Available Crop models are ideally suited to quantify existing climatic risks. However, they require historic climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs) data severely limits site-specific crop modelling. The objective of this study was to estimate Rs from air-temperature-based solar radiation models and to quantify the propagation of errors in the estimated radiation into several APSIM/ORYZA crop model seasonal outputs: yield, biomass, leaf area index (LAI) and total accumulated solar radiation (SRA) during the crop cycle. The accuracy of the five models for estimating daily solar radiation was similar, and it was not substantially different among sites. For water-limited environments (no irrigation), the crop model outputs yield, biomass and LAI were not sensitive to the uncertainties in the radiation models studied here.

  19. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    Directory of Open Access Journals (Sweden)

    Martin T Schultz

    Full Text Available The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: (1) collection of a filtered water sample from the source; (2) extraction of DNA from the filter and isolation in a purified elution; (3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; (4) PCR; and (5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix), assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration
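
The multi-step detection chain in this record can be illustrated with a simple hierarchical probability sketch. This is my own simplified parameterization (Poisson capture of marker copies plus independent-aliquot thinning), not the paper's model or its CAWS parameter values:

```python
import math

def survey_sensitivity(conc_per_l, volume_l, aliquot_frac, n_pcr, n_samples):
    """Sketch of eDNA survey sensitivity (hypothetical parameterization).

    conc_per_l:   ambient marker concentration (copies per litre)
    volume_l:     filtered water volume per sample (litres)
    aliquot_frac: fraction of the extraction elution placed in one PCR aliquot
    n_pcr:        PCR replicates per sample
    n_samples:    water samples collected in the survey

    Marker copies captured per sample ~ Poisson(conc * volume); by Poisson
    thinning, one aliquot is empty with probability exp(-lam * aliquot_frac).
    A replicate is positive when it contains >= 1 copy (perfect PCR assumed).
    """
    lam = conc_per_l * volume_l                  # expected copies per sample
    p_rep_neg = math.exp(-lam * aliquot_frac)    # one aliquot has zero copies
    p_sample_neg = p_rep_neg ** n_pcr            # all replicates negative
    return 1.0 - p_sample_neg ** n_samples       # >= 1 positive in survey
```

The independence assumption across aliquots is an approximation: the aliquots actually partition one elution, which is part of why splitting the extraction reduces sensitivity, as the record notes. The sketch still reproduces the qualitative findings: sensitivity rises with sample volume, sample number, and PCR replicates, and false negatives dominate at low concentrations.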

  20. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and we then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
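
For context, the classic Monte Carlo estimators that this record builds on can be sketched as follows. This is the standard Saltelli first-order and Jansen total-effect scheme, not the paper's improved estimator, and it assumes independent U(0,1) inputs:

```python
import numpy as np

def sobol_indices(model, d, n=4096, seed=0):
    """Monte Carlo estimator of first-order (S1) and total (ST) Sobol'
    indices via the two-matrix A/B scheme (a generic sketch).

    model: vectorized function mapping an (n, d) array of inputs in
           [0, 1]^d to an (n,) array of outputs.
    Cost: n * (d + 2) model evaluations, the expense the paper targets.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # A with column i taken from B
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # Saltelli first-order
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total-effect
    return S1, ST
```

For an additive test function such as f(x) = x0 + 2*x1 the first-order and total indices coincide (0.2 and 0.8), which makes a convenient correctness check before applying the estimator to an expensive tree growth model.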

  1. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    Science.gov (United States)

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  2. A global sensitivity analysis approach for morphogenesis models

    NARCIS (Netherlands)

    S.E.M. Boas (Sonja); M.I. Navarro Jimenez (Maria); R.M.H. Merks (Roeland); J.G. Blom (Joke)

    2015-01-01

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the

  3. Multivariate Models for Prediction of Skin Sensitization Hazard in Humans

    Science.gov (United States)

    One of ICCVAM’s highest priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary for a substance to elicit a skin sensitization reaction suggests that no single alternative me...

  4. Sensitivity analysis of an individual-based model for simulation of influenza epidemics.

    Directory of Open Access Journals (Sweden)

    Elaine O Nsoesie

    Full Text Available Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic. In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions), which have demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across the social networks investigated in this study; and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty

  5. Adjoint sensitivity studies of loop current and eddy shedding in the Gulf of Mexico

    KAUST Repository

    Gopalakrishnan, Ganesh; Cornuelle, Bruce D.; Hoteit, Ibrahim

    2013-01-01

    Adjoint model sensitivity analyses were applied for the loop current (LC) and its eddy shedding in the Gulf of Mexico (GoM) using the MIT general circulation model (MITgcm). The circulation in the GoM is mainly driven by the energetic LC and subsequent LC eddy separation. In order to understand which ocean regions and features control the evolution of the LC, including anticyclonic warm-core eddy shedding in the GoM, forward and adjoint sensitivities with respect to previous model state and atmospheric forcing were computed using the MITgcm and its adjoint. Since the validity of the adjoint model sensitivities depends on the capability of the forward model to simulate the real LC system and the eddy shedding processes, a 5 year (2004–2008) forward model simulation was performed for the GoM using realistic atmospheric forcing, initial, and boundary conditions. This forward model simulation was compared to satellite measurements of sea-surface height (SSH) and sea-surface temperature (SST), and observed transport variability. Despite realistic mean state, standard deviations, and LC eddy shedding period, the simulated LC extension shows less variability and more regularity than the observations. However, the model is suitable for studying the LC system and can be utilized for examining the ocean influences leading to a simple, and hopefully generic LC eddy separation in the GoM. The adjoint sensitivities of the LC show influences from the Yucatan Channel (YC) flow and Loop Current Frontal Eddy (LCFE) on both LC extension and eddy separation, as suggested by earlier work. Some of the processes that control LC extension after eddy separation differ from those controlling eddy shedding, but include YC through-flow. The sensitivity remains stable for more than 30 days and moves generally upstream, entering the Caribbean Sea. The sensitivities of the LC for SST generally remain closer to the surface and move at speeds consistent with advection by the high-speed core of

  6. Adjoint sensitivity studies of loop current and eddy shedding in the Gulf of Mexico

    KAUST Repository

    Gopalakrishnan, Ganesh

    2013-07-01

    Adjoint model sensitivity analyses were applied for the loop current (LC) and its eddy shedding in the Gulf of Mexico (GoM) using the MIT general circulation model (MITgcm). The circulation in the GoM is mainly driven by the energetic LC and subsequent LC eddy separation. In order to understand which ocean regions and features control the evolution of the LC, including anticyclonic warm-core eddy shedding in the GoM, forward and adjoint sensitivities with respect to previous model state and atmospheric forcing were computed using the MITgcm and its adjoint. Since the validity of the adjoint model sensitivities depends on the capability of the forward model to simulate the real LC system and the eddy shedding processes, a 5 year (2004–2008) forward model simulation was performed for the GoM using realistic atmospheric forcing, initial, and boundary conditions. This forward model simulation was compared to satellite measurements of sea-surface height (SSH) and sea-surface temperature (SST), and observed transport variability. Despite realistic mean state, standard deviations, and LC eddy shedding period, the simulated LC extension shows less variability and more regularity than the observations. However, the model is suitable for studying the LC system and can be utilized for examining the ocean influences leading to a simple, and hopefully generic LC eddy separation in the GoM. The adjoint sensitivities of the LC show influences from the Yucatan Channel (YC) flow and Loop Current Frontal Eddy (LCFE) on both LC extension and eddy separation, as suggested by earlier work. Some of the processes that control LC extension after eddy separation differ from those controlling eddy shedding, but include YC through-flow. The sensitivity remains stable for more than 30 days and moves generally upstream, entering the Caribbean Sea. The sensitivities of the LC for SST generally remain closer to the surface and move at speeds consistent with advection by the high-speed core of

  7. Radiation sensitization studies by silymarin on HCT-15 cells

    International Nuclear Information System (INIS)

    Lal, Mitu; Gupta, Damodar; Arora, R.

    2014-01-01

    Radiotherapy has been widely used for the treatment of human cancers. However, cancer cells develop radioresistant phenotypes following multiple exposures to the treatment agent, which decreases the efficacy of radiotherapy. Here, the radiation-sensitizing effects of silymarin were investigated in colon cancer. The aim of this study was to investigate the mechanisms involved in the radiation-sensitizing, growth-inhibitory effect of silymarin in combination with radiation in human colon carcinoma (HCT-15) cells. The SRB assay was performed to study the anti-proliferative effect of silymarin in combination with gamma radiation; the appropriate radiation dose (2 Gy) was optimized and confirmed by clonogenic assay. Microscopic analysis was done by staining with Hoechst-33342, DAPI and propidium iodide to confirm the presence of apoptosis. Nitric oxide production, changes in lipid peroxidation and cell cycle analysis were carried out, and mitochondrial membrane potential was measured by uptake of the cationic dye JC-1 using a flow cytometer. Silymarin in combination with radiation (2 Gy) inhibited the growth of the HCT-15 cell population by 70% ± 5% in a time- and dose-dependent manner. Pre-treatment of cells with silymarin for 30 min before radiation was found to be most effective for radiation sensitization. There was a 25% increase in the level of nitric oxide and a 2.5-fold change in lipid peroxidation with respect to control. IR-induced apoptosis in the HCT-15 cell line was significantly enhanced by silymarin, as reflected by viability, DNA fragmentation, and mitochondrial dysfunction. Thus, silymarin in combination with IR is effective in the sensitization of HCT-15 cells. In vivo studies on tumour development and sensitization aspects need to be done in the future. (author)

  8. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  9. Natural Ocean Carbon Cycle Sensitivity to Parameterizations of the Recycling in a Climate Model

    Science.gov (United States)

    Romanou, A.; Romanski, J.; Gregg, W. W.

    2014-01-01

    Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air-sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (the Russell ocean model and the Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models, such as ventilation, mixing, eddy stirring and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55% (10%) in the Southern Ocean and overestimate it by about 17% (underestimate it by 2%) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12-34%, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator, and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing, vertical advection and mesoscale eddy transport. At depth, GISSER, which has a significant warm bias, remineralizes nutrients and carbon faster, thereby producing more nutrients and carbon at depth, which

  10. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
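
The basis set expansion step in this record, reducing each simulated displacement time series to a few principal-component scores, can be sketched with an SVD. This is a generic sketch of the idea, not the author's code:

```python
import numpy as np

def expand_outputs(Y, n_components=3):
    """Basis-set expansion sketch for functional model outputs.

    Y: (n_runs, n_times) array, one simulated time series per model run
       (e.g. landslide displacements). Center the ensemble, take the SVD,
       and return the per-run scores on the leading temporal modes.
    """
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # run-wise coordinates
    modes = Vt[:n_components]                         # dominant temporal modes
    explained = s[:n_components] ** 2 / (s ** 2).sum()  # variance fractions
    return scores, modes, explained
```

Each column of `scores` is then a scalar output: a meta-model can be fitted to it and Sobol' indices estimated per mode, which is how a few tens of long-running simulations suffice for a dynamic sensitivity analysis.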

  11. The identification of model effective dimensions using global sensitivity analysis

    International Nuclear Information System (INIS)

    Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang

    2011-01-01

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
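    A minimal sketch of estimating the first-order and total Sobol' indices from which the effective dimensions are read off, using the standard pick-and-freeze (Saltelli/Jansen) estimators. The additive test function and sample size are illustrative, not from the cited work; for an additive function the first-order indices sum to ~1 and match the total indices, indicating effective dimension 1 in the superposition sense:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Additive test function: no interactions between inputs.
    return x[:, 0] + x[:, 1] ** 2 + x[:, 2] ** 3

d, N = 3, 100_000
A = rng.uniform(0, 1, (N, d))
B = rng.uniform(0, 1, (N, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

S, ST = np.empty(d), np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # "freeze" all columns of A except column i
    fABi = f(ABi)
    S[i] = np.mean(fB * (fABi - fA)) / var          # first-order index
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # total index (Jansen)

print("first-order:", S.round(3), "total:", ST.round(3))
```

The truncation-sense effective dimension would instead be probed with indices of growing subsets of the leading variables.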

  12. The identification of model effective dimensions using global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, Sergei, E-mail: s.kucherenko@ic.ac.u [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Feil, Balazs [Department of Process Engineering, University of Pannonia, Veszprem (Hungary); Shah, Nilay [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Mauntz, Wolfgang [Lehrstuhl fuer Anlagensteuerungstechnik, Fachbereich Chemietechnik, Universitaet Dortmund (Germany)

    2011-04-15

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.

  13. Influence of selecting secondary settling tank sub-models on the calibration of WWTP models – A global sensitivity analysis using BSM2

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores Alsina, Xavier; Sin, Gürkan

    2014-01-01

    This study investigates the sensitivity of wastewater treatment plant (WWTP) model performance to the selection of one-dimensional secondary settling tanks (1-D SST) models with first-order and second-order mathematical structures. We performed a global sensitivity analysis (GSA) on the benchmark...... simulation model No.2 with the input uncertainty associated to the biokinetic parameters in the activated sludge model No. 1 (ASM1), a fractionation parameter in the primary clarifier, and the settling parameters in the SST model. Based on the parameter sensitivity rankings obtained in this study......, the settling parameters were found to be as influential as the biokinetic parameters on the uncertainty of WWTP model predictions, particularly for biogas production and treated water quality. However, the sensitivity measures were found to be dependent on the 1-D SST models selected. Accordingly, we suggest...

  14. STUDY OF PATHOGENESIS AND ITS SENSITIVITY PATTERN IN UTI

    Directory of Open Access Journals (Sweden)

    Rajendra Prasad Kathula

    2016-06-01

    Full Text Available BACKGROUND Urinary tract infections are common causes of both community-acquired and nosocomial infections in adult patients admitted to hospitals. A urinary tract infection can be defined as the presence of pathogenic bacteria in significant colony counts in the bladder or upper urinary tract, with its associated consequences. Asymptomatic bacteriuria is a term used to designate urinary tract infection in the absence of symptoms, with the growth of bacterial colonies often crossing 100,000/mL in a freshly voided midstream urine sample. Urethritis and cystitis are characterised by inflammation of the urethra and bladder, with symptoms of dysuria, frequency and lower pubic pain, and may be associated with fever. Acute pyelonephritis is a bacterial infection of the renal parenchyma and is characterised by fever with rigors, flank pain, vomiting, and costovertebral tenderness, with or without symptoms of cystitis. It may be associated with pus formation. Prostatitis is quite common and involves infective inflammation of the prostate associated with dysuria, urgency, frequency and pain in the lower abdomen, perineum, or base of the penis. A sincere effort has been made towards this study on pathogenesis and its sensitivity pattern in UTI. METHODS One hundred patients who visited the Department of Surgery, Government Medical College, Nizamabad formed the sample of the study. The signs and symptoms observed were noted, and a midstream urine sample was collected and sent to the Department of Microbiology for the pathogens to be identified. The sensitivity pattern was also studied and reported. The study was done from October 2012 to November 2013. RESULT The most common pathogen was E. coli, and the commonest pathogen (E. coli) was found to be most sensitive to nitrofurantoin. CONCLUSION In this study, the most common pathogens which cause UTI and their sensitivity pattern have been reported. The study is

  15. Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2

    International Nuclear Information System (INIS)

    Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.

    1985-01-01

    This paper summarizes a demonstration uncertainty/sensitivity analysis performed on the reactor accident consequence model CRAC2. The study was performed with uncertainty/sensitivity analysis techniques compiled as part of the MELCOR program. The principal objectives of the study were: 1) to demonstrate the use of the uncertainty/sensitivity analysis techniques on a health and economic consequence model, 2) to test the computer models which implement the techniques, 3) to identify possible difficulties in performing such an analysis, and 4) to explore alternative means of analyzing, displaying, and describing the results. Demonstration of the applicability of the techniques was the motivation for performing this study; thus, the results should not be taken as a definitive uncertainty analysis of health and economic consequences. Nevertheless, significant insights on health and economic consequence analysis can be drawn from the results of this type of study. Latin hypercube sampling (LHS), a modified Monte Carlo technique, was used in this study. LHS generates a multivariate input structure in which all the variables of interest are varied simultaneously and desired correlations between variables are preserved. LHS has been shown to produce estimates of output distribution functions that are comparable with results of larger random samples
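    The Latin hypercube sampling technique described in this record can be sketched in a few lines. The stratification property (each of the n equal-probability bins of each variable receives exactly one point) is what distinguishes LHS from plain random sampling; this is a generic illustration, not the CRAC2 input structure:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Return n points in [0, 1)^d such that each of the n equal-width
    bins of every dimension contains exactly one point."""
    u = rng.uniform(size=(n, d))                         # jitter within bins
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n

rng = np.random.default_rng(42)
X = latin_hypercube(10, 2, rng)

# Check stratification: every bin index 0..9 appears once per column.
bins = np.floor(X * 10).astype(int)
print(sorted(bins[:, 0]), sorted(bins[:, 1]))
```

Correlation control between variables (as used in the study) would additionally permute columns to match a target rank-correlation structure, e.g. the Iman-Conover method.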

  16. IATA-Bayesian Network Model for Skin Sensitization Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Since the publication of the Adverse Outcome Pathway (AOP) for skin sensitization, there have been many efforts to develop systematic approaches to integrate the...

  17. Multivariate Models for Prediction of Human Skin Sensitization Hazard.

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensiti...

  18. Identifying sensitive areas on intercultural contacts: An exploratory study

    Directory of Open Access Journals (Sweden)

    Ignacio Ramos-Vidal

    2011-06-01

    Full Text Available This paper analyzes the negative influence that cultural friction areas can exert on intercultural contacts. First, we present the critical incident method as a cross-cultural training model (Arthur, 2001). Then we show the negative effects that sensitive cultural zones can exert on the formation of prejudices and stereotypes about culturally diverse groups, analyzing 77 critical incidents collected in two different formative contexts. The main cultural shock areas detected are (a) intercultural communication barriers, (b) gender roles, and (c) the statement of cultural expressions. Strategies to improve the validity of the method are proposed.

  19. Sensitivity studies of a PCV under earthquake loading conditions

    International Nuclear Information System (INIS)

    Maraslioglu, B.; Shamshiri, I.

    1987-01-01

    The results point out the special sensitivity of the modeling and of time history method analyses. Due to the lack of more precise general statements in the literature and regulations which could help to find an optimal design, the analyst is challenged to make considerable efforts to find results that are realistic and conservative but not overestimated. To obtain more precise data from the temporal combination in the time history method, the analyst has to abandon the security supplied by enveloped, smoothed and broadened spectra in response spectrum method analysis (RSMA). It is therefore advisable to prefer RSMA or to perform several calculations with variations of frequency, ground shear modulus and earthquake loading condition. (orig./HP)

  20. Studies on radiation-sensitive nonsilver halide materials, (2)

    International Nuclear Information System (INIS)

    Komizu, Hideo; Honda, Koichi; Yabe, Akira; Kawasaki, Masami; Yamanaka, Takeshi.

    1980-01-01

    Dye-precursors made from furfural and some aniline derivatives become red-colored upon irradiation with ionizing radiations in a PVC matrix, forming Stenhouse salts with the HCl evolved from the matrix. The coloration of the precursor from N-methylaniline, which has the most excellent potential for dosimetry among the precursors, was studied for irradiation with an electron beam (60 kV) and X-rays (50-240 kVp). The following conclusions were obtained for the electron beam bombardment. (1) The response range is 10⁻⁸-10⁻⁶ C/cm² or 10³-10⁵ rad, where good linearity between coloration and charge density exists. (2) The highest sensitivity is obtained when the concentration of the precursor is ≥5 wt% relative to the amount of PVC and the film thickness is ≥32 μm. (3) Addition of 25 wt% of DOP enhances not only the sensitivity, by ca. 20%, but also the stability of the color, from several days to several months. (4) The sensitivity is increased by ca. 15% when a conductive base is used for the film. (5) The G value for the formation of the dye, i.e., that for the formation of HCl, is 13.5-15.5. The following were obtained for the X-ray irradiation. (1) The response range is 10³-10⁵ R or rad. (2) The sensitivity to the absorbed dose is independent of the energy but is approximately proportional to the film thickness. (3) The sensitivity is higher than that for electrons by a factor of ca. 1.6, reflecting the higher G(HCl) value, 23-25. (author)

  1. A shorter and more specific oral sensitization-based experimental model of food allergy in mice.

    Science.gov (United States)

    Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica

    2012-07-31

    Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks, followed by a challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score as the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. In silico modeling predicts drug sensitivity of patient-derived cancer cells.

    Science.gov (United States)

    Pingle, Sandeep C; Sultana, Zeba; Pastorino, Sandra; Jiang, Pengfei; Mukthavaram, Rajesh; Chao, Ying; Bharati, Ila Sri; Nomura, Natsuko; Makale, Milan; Abbasi, Taher; Kapoor, Shweta; Kumar, Ansu; Usmani, Shahabuddin; Agrawal, Ashish; Vali, Shireen; Kesari, Santosh

    2014-05-21

    Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling ("omics") data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.

  3. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    Science.gov (United States)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil and complex groundwater chemistry, exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems.

  4. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model.

    Science.gov (United States)

    Bolster, Carl H; Vadas, Peter A

    2013-07-01

    Models are often used to predict phosphorus (P) loss from agricultural fields. Although it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study we assessed the effect of model input error on predictions of annual P loss by the Annual P Loss Estimator (APLE) model. Our objectives were (i) to conduct a sensitivity analyses for all APLE input variables to determine which variables the model is most sensitive to, (ii) to determine whether the relatively easy-to-implement first-order approximation (FOA) method provides accurate estimates of model prediction uncertainties by comparing results with the more accurate Monte Carlo simulation (MCS) method, and (iii) to evaluate the performance of the APLE model against measured P loss data when uncertainties in model predictions and measured data are included. Our results showed that for low to moderate uncertainties in APLE input variables, the FOA method yields reasonable estimates of model prediction uncertainties, although for cases where manure solid content is between 14 and 17%, the FOA method may not be as accurate as the MCS method due to a discontinuity in the manure P loss component of APLE at a manure solid content of 15%. The estimated uncertainties in APLE predictions based on assumed errors in the input variables ranged from ±2 to 64% of the predicted value. Results from this study highlight the importance of including reasonable estimates of model uncertainty when using models to predict P loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
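    The FOA-versus-MCS comparison described above can be illustrated on a hypothetical one-input nonlinear response (not the APLE equations): the first-order approximation propagates input uncertainty through the local derivative, while Monte Carlo propagates samples through the model directly, and the two agree when input uncertainty is small relative to the model's curvature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear response, standing in for a P-loss component.
def f(x):
    return x ** 2 + 3.0 * x

x0, sd = 2.0, 0.1          # nominal input value and its uncertainty

# First-order approximation: Var[f(x)] ~= (f'(x0))^2 * Var[x]
dfdx = 2 * x0 + 3.0        # analytic derivative at x0
foa_sd = abs(dfdx) * sd

# Monte Carlo simulation: push input samples through the model.
mcs_sd = np.std(f(rng.normal(x0, sd, 200_000)))

print(f"FOA sd = {foa_sd:.3f}, MCS sd = {mcs_sd:.3f}")
```

The discontinuity noted in the abstract (at 15% manure solid content) is exactly the situation where the derivative-based FOA breaks down and MCS remains valid.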

  5. Modelling Nd-isotopes with a coarse resolution ocean circulation model: Sensitivities to model parameters and source/sink distributions

    International Nuclear Information System (INIS)

    Rempfer, Johannes; Stocker, Thomas F.; Joos, Fortunat; Dutay, Jean-Claude; Siddall, Mark

    2011-01-01

    The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleo-oceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes ¹⁴³Nd and ¹⁴⁴Nd using the Bern3D model, a low resolution ocean model. The high computational efficiency of the Bern3D model in conjunction with our comprehensive approach allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations and εNd in good agreement with observations. εNd co-varies with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain magnitude, it affects εNd only to a small extent. On the other hand, the parametrisation of the reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and their isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux

  6. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    Science.gov (United States)

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of the L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interactions between parameters in the BIOME-BGC model. The most influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of the other parameter interactions.
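    A minimal sketch of the Morris elementary-effects screening method mentioned above, applied to a hypothetical three-parameter function rather than BIOME-BGC. The mean absolute elementary effect (mu*) ranks overall influence, while the spread of the effects (sigma) flags nonlinearity or interactions:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical response: x0 strong and linear, x1 nonlinear, x2 inactive.
    return 5.0 * x[0] + 4.0 * x[1] ** 2 + 0.0 * x[2]

d, r, delta = 3, 50, 0.25          # dimensions, trajectories, step size
effects = np.zeros((r, d))
for k in range(r):
    x = rng.uniform(0, 1 - delta, d)
    y0 = model(x)
    for i in range(d):             # one-at-a-time perturbation of input i
        xp = x.copy()
        xp[i] += delta
        effects[k, i] = (model(xp) - y0) / delta   # elementary effect

mu_star = np.abs(effects).mean(axis=0)   # Morris mu*: overall influence
sigma = effects.std(axis=0)              # nonlinearity/interaction signal
print("mu* =", mu_star.round(2), " sigma =", sigma.round(2))
```

EFAST would then be applied to the retained parameters; it decomposes output variance by driving each input at a distinct frequency, which the simple screening loop above does not attempt.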

  7. Diagnosis and Quantification of Climatic Sensitivity of Carbon Fluxes in Ensemble Global Ecosystem Models

    Science.gov (United States)

    Wang, W.; Hashimoto, H.; Milesi, C.; Nemani, R. R.; Myneni, R.

    2011-12-01

    Terrestrial ecosystem models are primary scientific tools to extrapolate our understanding of ecosystem functioning from point observations to global scales, as well as from past climatic conditions into the future. However, no model is nearly perfect, and considerable structural uncertainties often exist between different models. Ensemble model experiments have thus become a mainstream approach in evaluating the current status of the global carbon cycle and predicting its future changes. A key task in such applications is to quantify the sensitivity of the simulated carbon fluxes to climate variations and changes. Here we develop a systematic framework to address this question solely by analyzing the inputs and the outputs of the models. The principle of our approach is to take the long-term (~30 years) average of the inputs/outputs as a quasi-equilibrium of the climate-vegetation system while treating the anomalies of carbon fluxes as responses to climatic disturbances. In this way, the corresponding relationships can be largely linearized and analyzed using conventional time-series techniques. This method is used to characterize three major aspects of the vegetation models that are most important to the global carbon cycle, namely the primary production, the biomass dynamics, and the ecosystem respiration. We apply this analytical framework to quantify the climatic sensitivity of an ensemble of models including CASA, Biome-BGC, LPJ as well as several other DGVMs from previous studies, all driven by the CRU-NCEP climate dataset. The detailed analysis results are reported in this study.

  8. Sensitivity analysis of complex models: Coping with dynamic and static inputs

    International Nuclear Information System (INIS)

    Anstett-Collin, F.; Goffart, J.; Mara, T.; Denis-Vidal, L.

    2015-01-01

    In this paper, we address the issue of conducting a sensitivity analysis of complex models with both static and dynamic uncertain inputs. While several approaches have been proposed to compute the sensitivity indices of the static inputs (i.e. parameters), that of the dynamic inputs (i.e. stochastic fields) has rarely been addressed. For this purpose, we first treat each dynamic input as a Gaussian process. Then, the truncated Karhunen–Loève expansion of each dynamic input is performed. Such an expansion makes it possible to generate independent Gaussian processes from a finite number of independent random variables. Given that a dynamic input is represented by a finite number of random variables, its variance-based sensitivity index is defined by the sensitivity index of this group of variables. In addition, an efficient sampling-based strategy is described to estimate the first-order indices of all the input factors using only two input samples. The approach is applied to a building energy model, in order to assess the impact of the uncertainties of the material properties (static inputs) and the weather data (dynamic inputs) on the energy performance of a real low energy consumption house. - Highlights: • Sensitivity analysis of models with uncertain static and dynamic inputs is performed. • Karhunen–Loève (KL) decomposition of the spatio/temporal inputs is performed. • The influence of the dynamic inputs is studied through the modes of the KL expansion. • The proposed approach is applied to a building energy model. • Impact of weather data and material properties on performance of real house is given
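    The truncated Karhunen–Loève step described above can be sketched as an eigendecomposition of the covariance matrix of the discretized Gaussian process: a field realization is then generated from only a finite number of independent standard normals, which become the variables entering the sensitivity analysis. The grid, squared-exponential kernel and 99% variance threshold below are illustrative assumptions, not the weather-data model of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid and squared-exponential covariance for a dynamic input (e.g. weather).
t = np.linspace(0.0, 1.0, 100)
C = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1 ** 2))

# Karhunen-Loeve: eigendecomposition of the discretized covariance operator.
w, V = np.linalg.eigh(C)
w, V = w[::-1].clip(min=0.0), V[:, ::-1]        # sort descending, clip noise
cumfrac = np.cumsum(w) / w.sum()
m = int(np.searchsorted(cumfrac, 0.99)) + 1     # modes for 99% of variance

# One GP realization built from only m independent standard normals xi_k.
xi = rng.standard_normal(m)
sample = V[:, :m] @ (np.sqrt(w[:m]) * xi)
print(f"{m} KL modes capture 99% of the variance on a 100-point grid")
```

The variance-based index of the dynamic input is then defined as the Sobol' index of the group (xi_1, ..., xi_m), as stated in the abstract.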

  9. Sensitivity of subject-specific models to errors in musculo-skeletal geometry.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2012-09-21

    Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in musculo-skeletal geometry on subject-specific model results. We performed an extensive sensitivity analysis to quantify the effect of the perturbation of origin, insertion and via points of each of the 56 musculo-tendon parts contained in the model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by only the perturbed musculo-tendon parts and by all the remaining musculo-tendon parts, respectively, during a simulated gait cycle. Results indicated that, for each musculo-tendon part, only two points show a significant sensitivity: its origin, or pseudo-origin, point and its insertion, or pseudo-insertion, point. The most sensitive points belong to those musculo-tendon parts that act as prime movers in the walking movement (insertion point of the Achilles Tendon: LSI=15.56%, OSI=7.17%; origin points of the Rectus Femoris: LSI=13.89%, OSI=2.44%) and as hip stabilizers (insertion points of the Gluteus Medius Anterior: LSI=17.92%, OSI=2.79%; insertion point of the Gluteus Minimus: LSI=21.71%, OSI=2.41%). The proposed priority list provides quantitative information to improve the predictive accuracy of subject-specific musculo-skeletal models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan

    Science.gov (United States)

    Milando, Chad W.; Batterman, Stuart A.

    2018-05-01

    The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.

  11. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    Uncertainty and sensitivity analysis are evaluated for their usefulness as part of model-building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input… compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which… promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes. © 2009 American Institute…

  12. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase, multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between simulated and field-observed values. Four sensitivity analysis (SA) approaches are investigated: analysis of variance based on the generalized linear model, generalized cross-validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on a support vector machine. Results suggest that these approaches give consistent measures of the impacts of the major hydrologic parameters on the response variables, but differ in the relative contributions, particularly for the secondary parameters. The convergence behavior of the SA with respect to the number of sampling points is also examined with different combinations of input parameter sets, output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for calibrating the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes by which parameter values should be adjusted during parametric model optimization.
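
    Of the four SA approaches compared, the standardized-regression-coefficient (SRC) approach is the simplest to sketch. The toy runoff response, parameter names and ranges below are illustrative assumptions, not CLM parameters:

```python
import random

# SRC sensitivity sketch: sample parameters, evaluate a toy runoff
# response, and compute each parameter's standardized coefficient.
random.seed(42)

def runoff(porosity, conductivity, slope):
    # Hypothetical linear response used only to demonstrate the method.
    return 3.0 * porosity + 0.5 * conductivity + 0.1 * slope

samples = [(random.uniform(0.3, 0.6),    # porosity
            random.uniform(1.0, 5.0),    # hydraulic conductivity
            random.uniform(0.0, 10.0))   # slope factor
           for _ in range(500)]
y = [runoff(*s) for s in samples]

def mean(v):
    return sum(v) / len(v)

def std(v):
    m = mean(v)
    return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5

def src(i):
    """For independent inputs the SRC, beta_i * std(x_i) / std(y),
    reduces to the input-output correlation coefficient."""
    x = [s[i] for s in samples]
    cov = mean([(a - mean(x)) * (b - mean(y)) for a, b in zip(x, y)])
    return cov / (std(x) * std(y))

srcs = [src(i) for i in range(3)]   # conductivity dominates this toy model
```

For a linear model with independent inputs the squared SRCs sum to roughly 1, which is a quick check that the linear-regression assumption behind the method holds.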

  13. Sensitivity of tsunami evacuation modeling to direction and land cover assumptions

    Science.gov (United States)

    Schmidtlein, Mathew C.; Wood, Nathan J.

    2015-01-01

    Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, insufficient attention has been paid to understanding the model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations, using the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of the modeling to the direction of movement by comparing standard safety-to-hazard evacuation times with hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times; nevertheless, the strong relationship between the two, the slightly conservative bias, and the shorter processing times make safety-to-hazard the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance, using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs, with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations whose evacuation times are lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.
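
    The LCD approach accumulates per-cell crossing costs along least-cost paths. A minimal sketch using Dijkstra's algorithm on a toy grid follows; the SCV values, cell size and walking speed are hypothetical, and real applications add slope-dependent anisotropic costs:

```python
import heapq

# Least-cost-distance evacuation-time sketch on a toy grid. Cell values
# are hypothetical speed conservation values (SCVs): the fraction of base
# walking speed retained on that land cover (1.0 = road, 0.1 = wetland).
SCV = [[1.0, 0.5, 1.0],
       [1.0, 0.1, 1.0],
       [1.0, 1.0, 1.0]]
CELL_SIZE = 10.0   # metres
BASE_SPEED = 1.0   # metres per second

def evacuation_times(safe_cell):
    """Dijkstra from the safe cell; the cost of a step is the time to
    cross the destination cell (the safety-to-hazard direction)."""
    rows, cols = len(SCV), len(SCV[0])
    times = {safe_cell: 0.0}
    heap = [(0.0, safe_cell)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + CELL_SIZE / (BASE_SPEED * SCV[nr][nc])
                if nt < times.get((nr, nc), float("inf")):
                    times[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times

times = evacuation_times((0, 0))   # safe zone in the corner
```

Note how the least-cost path detours around the low-SCV wetland cell, exactly the behaviour whose sensitivity the Monte Carlo SCV experiments probe.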

  14. QSAR Study of Skin Sensitization Using Local Lymph Node Assay Data

    Directory of Open Access Journals (Sweden)

    Eugene Demchuk

    2004-01-01

    Allergic Contact Dermatitis (ACD) is a common work-related skin disease that often develops as a result of repetitive skin exposures to a sensitizing chemical agent. A variety of experimental tests have been suggested to assess skin sensitization potential. We applied a Quantitative Structure-Activity Relationship (QSAR) method to relate measured and calculated physical-chemical properties of chemical compounds to their sensitization potential. Using statistical methods, each of these properties, called molecular descriptors, was tested for its propensity to predict sensitization potential. A few of the most informative descriptors were subsequently selected to build a model of skin sensitization. In this work, sensitization data for the murine Local Lymph Node Assay (LLNA) were used. In principle, the LLNA provides a standardized continuous scale suitable for quantitative assessment of skin sensitization. However, at present many LLNA results are still reported on a dichotomous scale, which is consistent with the scale of the guinea pig tests that were widely used in past years. Therefore, in this study only a dichotomous version of the LLNA data was used. To the statistical end, we relied on the logistic regression approach. This approach provides a statistical tool for investigating and predicting skin sensitization that is expressed only in categorical terms of activity and nonactivity. Based on the data of the compounds used in this study, our results suggest a QSAR model of ACD based on the following descriptors: nDB (number of double bonds), C-003 (number of CHR3 molecular subfragments), GATS6M (autocorrelation coefficient) and HATS6m (GETAWAY descriptor), although the relevance of the identified descriptors to a continuous ACD QSAR has yet to be shown. The proposed QSAR model gives a percentage of positively predicted responses of 83% on the training set of compounds, and in cross-validation it correctly identifies 79% of
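
    A minimal sketch of the dichotomous logistic-regression setup described above, with fabricated descriptor values (these are not LLNA data); only the descriptor name nDB is taken from the abstract, and the second descriptor is a generic stand-in:

```python
import math

# Toy logistic-regression QSAR: dichotomous sensitizer/non-sensitizer
# outcome modelled from two molecular descriptors. All data fabricated.
data = [  # (nDB, second descriptor, sensitizer? 1/0)
    (0, 0.8, 0), (1, 0.9, 0), (0, 1.1, 0), (1, 1.0, 0),
    (3, 1.8, 1), (4, 2.1, 1), (3, 2.0, 1), (5, 1.9, 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0, 0.0]          # bias, weight for nDB, weight for descriptor 2
lr = 0.5
for _ in range(2000):        # batch gradient descent on the log-loss
    grad = [0.0, 0.0, 0.0]
    for x1, x2, y in data:
        err = sigmoid(w[0] + w[1] * x1 + w[2] * x2) - y
        grad[0] += err
        grad[1] += err * x1
        grad[2] += err * x2
    w = [wi - lr * g / len(data) for wi, g in zip(w, grad)]

def predict(x1, x2):
    """Classify as sensitizer when the modelled probability is >= 0.5."""
    return int(sigmoid(w[0] + w[1] * x1 + w[2] * x2) >= 0.5)

accuracy = sum(predict(x1, x2) == y for x1, x2, y in data) / len(data)
```

In practice the fitted coefficients would be estimated with a statistics package and judged by cross-validation, as the study does, rather than by training accuracy alone.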

  15. Garcinia mangostana Linn displays antidepressant-like and pro-cognitive effects in a genetic animal model of depression: a bio-behavioral study in the Flinders Sensitive Line rat.

    Science.gov (United States)

    Oberholzer, Inge; Möller, Marisa; Holland, Brendan; Dean, Olivia M; Berk, Michael; Harvey, Brian H

    2018-04-01

    There is abundant evidence for both disorganized redox balance and cognitive deficits in major depressive disorder (MDD). Garcinia mangostana Linn (GM) has anti-oxidant activity. We studied the antidepressant-like and pro-cognitive effects of raw GM rind in Flinders Sensitive Line (FSL) rats, a genetic model of depression, following acute and chronic treatment compared to a reference antidepressant, imipramine (IMI). The chemical composition of the GM extract was analysed for levels of α- and γ-mangostin. The acute dose-dependent effects of GM (50, 150 and 200 mg/kg po), IMI (20 mg/kg po) and vehicle were determined in the forced swim test (FST) in FSL rats, versus Flinders Resistant Line (FRL) control rats. Locomotor testing was conducted using the open field test (OFT). Using the most effective dose above coupled with behavioral testing in the FST and cognitive assessment in the novel object recognition test (nORT), a fixed dose 14-day treatment study of GM was performed and compared to IMI- (20 mg/kg/day) and vehicle-treated animals. Chronic treated animals were also assessed with respect to frontal cortex and hippocampal monoamine levels and accumulation of malondialdehyde. FSL rats showed significant cognitive deficits and depressive-like behavior, with disordered cortico-hippocampal 5-hydroxyindole acetic acid (5-HIAA) and noradrenaline (NA), as well as elevated hippocampal lipid peroxidation. Acute and chronic IMI treatment evoked pronounced antidepressant-like effects. Raw GM extract contained 117 mg/g and 11 mg/g α- and γ-mangostin, respectively, with acute GM demonstrating antidepressant-like effects at 50 mg/kg/day. Chronic GM (50 mg/kg/d) displayed significant antidepressant- and pro-cognitive effects, while demonstrating parity with IMI. Both behavioral and monoamine assessments suggest a more prominent serotonergic action for GM as opposed to a noradrenergic action for IMI, while both IMI and GM reversed hippocampal lipid peroxidation in

  16. Process analysis and sensitivity study of regional ozone formation over the Pearl River Delta, China, during the PRIDE-PRD2004 campaign using the Community Multiscale Air Quality modeling system

    Directory of Open Access Journals (Sweden)

    X. Wang

    2010-05-01

    In this study, the Community Multiscale Air Quality (CMAQ) modeling system is used to simulate the ozone (O3) episodes during the Program of Regional Integrated Experiments of Air Quality over the Pearl River Delta, China, in October 2004 (PRIDE-PRD2004). The simulation suggests that O3 pollution is a regional phenomenon in the Pearl River Delta (PRD). Elevated O3 levels often occurred in the southwestern inland PRD, the Pearl River estuary (PRE), and southern coastal areas during the 1-month field campaign. Three evolution patterns of simulated surface O3 are summarized based on different near-ground flow conditions. More than 75% of days featured interactions between weak synoptic forcing and local sea-land circulation. Integrated process rate (IPR) analysis shows that photochemical production is a dominant contributor to O3 enhancement from 09:00 to 15:00 local standard time in the atmospheric boundary layer over most areas with elevated O3 occurrence in the mid-afternoon. The simulated ozone production efficiency is 2–8 O3 molecules per NOx molecule oxidized in areas with high O3 chemical production. Precursors of O3 originating from different source regions in the central PRD are mixed during transport to downwind rural areas during nighttime and early morning, where they then contribute to daytime O3 photochemical production. The sea-land circulation plays an important role in the regional O3 formation and distribution over the PRD. Sensitivity studies suggest that O3 formation is volatile-organic-compound-limited in the central inland PRD, PRE, and surrounding coastal areas with less chemical aging (NOx/NOy > 0.6), but is NOx-limited in the rural southwestern PRD with aged air (NOx/NOy < 0.3).

  17. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k_eff sensitivity made it possible to establish a priority list of nuclides for which the uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation were substituted by the corresponding data from other evaluations. (authors)

  18. Isoprene emissions modelling for West Africa: MEGAN model evaluation and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Ferreira

    2010-09-01

    Isoprene emissions are the largest source of reactive carbon to the atmosphere, with the tropics being a major source region. These natural emissions are expected to change with changing climate and human impact on land use. As part of the African Monsoon Multidisciplinary Analyses (AMMA) project, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) has been used to estimate the spatial and temporal distribution of isoprene emissions over the West African region. During the AMMA field campaign, carried out in July and August 2006, isoprene mixing ratios were measured on board the FAAM BAe-146 aircraft. These data have been used to make a qualitative evaluation of the model performance.

    MEGAN was first applied to a large area covering much of West Africa, from the Gulf of Guinea in the south to the desert in the north, and was able to capture the large-scale spatial distribution of isoprene emissions as inferred from the observed isoprene mixing ratios. In particular, the model captures the transition from the forested area in the south to the bare soils in the north, but some discrepancies have been identified over the bare soil, mainly due to the emission factors used. Sensitivity analyses were performed to assess the model response to changes in driving parameters, namely leaf area index (LAI), emission factors (EF), temperature and solar radiation.

    A high resolution simulation was made of a limited area south of Niamey, Niger, where the higher concentrations of isoprene were observed. This is used to evaluate the model's ability to simulate smaller scale spatial features and to examine the influence of the driving parameters on an hourly basis through a case study of a flight on 17 August 2006.

    This study highlights the complex interactions between land surface processes and the meteorological dynamics and chemical composition of the PBL. This has implications for quantifying the impact of biogenic emissions

  19. Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    2017-11-01

    The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified to include micropollutant assessment (namely, sulfamethoxazole, SMX). The model also takes into account the interactions between the three components of the system: the sewer system (SS), the wastewater treatment plant (WWTP) and the receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by a different uncertainty combination of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in modelling results. The approach adopted here can be used to fix (block) some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS were found to be the most relevant factors affecting SMX modeling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
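
    The Extended-FAST method estimates variance-based first-order indices, S_i = Var(E[Y|X_i]) / Var(Y). The sketch below estimates such an index by brute-force conditional sampling on a toy two-factor model; it is not the FAST frequency-sweep algorithm itself, nor the drainage model:

```python
import random

# Brute-force estimate of variance-based first-order sensitivity indices
# on a hypothetical two-factor model with inputs uniform on (0, 1).
random.seed(1)

def model(x1, x2):
    return 4.0 * x1 + x2        # x1 contributes 16x the variance of x2

def first_order_index(which, n_outer=200, n_inner=200):
    """S_i = Var(E[Y | X_i]) / Var(Y), by nested Monte Carlo."""
    cond_means = []
    for _ in range(n_outer):                 # fix X_which at a sampled value
        fixed = random.random()
        total = 0.0
        for _ in range(n_inner):             # average over the other factor
            other = random.random()
            total += model(fixed, other) if which == 0 else model(other, fixed)
        cond_means.append(total / n_inner)
    m = sum(cond_means) / len(cond_means)
    var_cond = sum((c - m) ** 2 for c in cond_means) / len(cond_means)
    var_y = (4.0 ** 2 + 1.0) / 12.0          # analytic Var(Y) for this model
    return var_cond / var_y

s1, s2 = first_order_index(0), first_order_index(1)   # ~16/17 and ~1/17
```

FAST reaches the same indices far more cheaply by sampling the factors along interfering sinusoidal paths, which is why it suits expensive integrated models like the one in the study.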

  20. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other, although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  1. Performance Modeling of Mimosa pudica Extract as a Sensitizer for Solar Energy Conversion

    Directory of Open Access Journals (Sweden)

    M. B. Shitta

    2016-01-01

    An organic material is proposed as a sustainable sensitizer and a replacement for the synthetic sensitizer in dye-sensitized solar cell technology. Using the liquid extract from the leaf of a plant called Mimosa pudica (M. pudica) as a sensitizer, the performance characteristics of the extract of M. pudica are investigated. The photo-anode of each solar cell sample is passivated with a self-assembled monolayer (SAM) from a set of four materials: alumina, formic acid, gelatine, and oxidized starch. Three sets of five samples of an M. pudica–based solar cell are produced, with the fifth sample used as the control experiment. Each of the solar cell samples has an active area of 0.3848 cm2. A two-dimensional finite volume method (FVM) is used to model the transport of ions within the monolayer of the solar cell. The performance of the experimentally fabricated solar cells compares qualitatively with those obtained from the literature and the simulated solar cells. The highest efficiency obtained from the use of the extract as a sensitizer is 3%. It is anticipated that the comparison of the performance characteristics, together with further research on the concentration of M. pudica extract, will enhance the development of a reliable and competitive organic solar cell. It is also recommended that further research be carried out on the concentration of the extract and the electrolyte used in this study for a possible improved performance of the cell.

  2. Gamma ray induced sensitization in CaSO4:Dy and competing trap model

    International Nuclear Information System (INIS)

    Nagpal, J.S.; Kher, R.K.; Gangadharan, P.

    1979-01-01

    Gamma-ray-induced sensitization in CaSO4:Dy has been compared (by measurement of TL glow curves) for different temperatures during irradiation (25, 120 and 250 °C). Enhanced sensitization at elevated temperatures seems to support the competing trap model for supralinearity and sensitization in CaSO4:Dy. (author)

  3. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the source soils, with fingerprint properties selected by alternative procedures (the Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or a correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of
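
    The core un-mixing step can be sketched as a constrained least-squares search: find source proportions, summing to one, that best reproduce the tracer concentrations of a laboratory mixture. All tracer values below are fabricated, and a brute-force grid search over the proportion simplex stands in for the study's multivariate mixing model:

```python
# Sediment un-mixing sketch: recover known laboratory mixing proportions
# from tracer concentrations by minimising squared residuals over the
# simplex of source proportions. Tracer values are fabricated.
sources = {                     # tracer concentrations per source soil
    "A": [120.0, 80.0, 3.1],
    "B": [300.0, 40.0, 1.2],
    "C": [ 60.0, 15.0, 5.0],
}
true_props = (0.5, 0.3, 0.2)    # quantified laboratory mixing proportions
mixture = [sum(p * sources[s][k] for p, s in zip(true_props, sources))
           for k in range(3)]

def unmix(mixture, step=0.01):
    """Exhaustive search over (p_A, p_B, p_C) with p_A + p_B + p_C = 1."""
    best, best_err = None, float("inf")
    names = list(sources)
    n = round(1 / step)
    for i in range(n + 1):                 # p_A
        for j in range(n + 1 - i):         # p_B; p_C is the remainder
            props = (i * step, j * step, 1 - (i + j) * step)
            err = sum((sum(p * sources[s][k] for p, s in zip(props, names))
                       - mixture[k]) ** 2 for k in range(3))
            if err < best_err:
                best, best_err = props, err
    return best

estimated = unmix(mixture)
```

Operational fingerprinting models use constrained optimisers and goodness-of-fit (GOF) statistics instead of a grid, but the sum-to-one constraint and residual objective are the same.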

  4. Bayesian randomized item response modeling for sensitive measurements

    NARCIS (Netherlands)

    Avetisyan, Marianna

    2012-01-01

    In behavioral, health, and social sciences, any endeavor involving measurement is directed at accurate representation of the latent concept with the manifest observation. However, when sensitive topics, such as substance abuse, tax evasion, or felony, are inquired, substantial distortion of reported

  5. Computational Study of pH-sensitive Hydrogel-based Microfluidic Flow Controllers

    Science.gov (United States)

    Kurnia, Jundika C.; Birgersson, Erik; Mujumdar, Arun S.

    2011-01-01

    This computational study investigates the sensing and actuating behavior of a pH-sensitive hydrogel-based microfluidic flow controller. This hydrogel-based flow controller has an inherent advantage in its unique stimuli-sensitive properties, removing the need for an external power supply. The predicted swelling behavior of the hydrogel is validated against steady-state and transient experiments. We then demonstrate how the model is implemented to study the sensing and actuating behavior of hydrogels for different microfluidic flow channel/hydrogel configurations, e.g., for flow in a T-junction with single and multiple hydrogels. In short, the results suggest that the response of the hydrogel-based flow controller is slow. Therefore, two strategies to improve the response rate of the hydrogels are proposed and demonstrated. Finally, we highlight that the model can be extended to include other stimuli-responsive hydrogels such as thermo-, electric-, and glucose-sensitive hydrogels. PMID:24956303

  6. Aggressive Behavior between Siblings and the Development of Externalizing Problems: Evidence from a Genetically Sensitive Study

    Science.gov (United States)

    Natsuaki, Misaki N.; Ge, Xiaojia; Reiss, David; Neiderhiser, Jenae M.

    2009-01-01

    This study investigated the prospective links between sibling aggression and the development of externalizing problems using a multilevel modeling approach with a genetically sensitive design. The sample consisted of 780 adolescents (390 sibling pairs) who participated in 2 waves of the Nonshared Environment in Adolescent Development project.…

  7. Global sensitivity analysis of a dynamic model for gene expression in Drosophila embryos

    Science.gov (United States)

    McCarthy, Gregory D.; Drewell, Robert A.

    2015-01-01

    It is well known that gene regulation is a tightly controlled process in early organismal development. However, the roles of key processes involved in this regulation, such as transcription and translation, are less well understood, and mathematical modeling approaches in this field are still in their infancy. In recent studies, biologists have taken precise measurements of protein and mRNA abundance to determine the relative contributions of key factors involved in regulating protein levels in mammalian cells. We now approach this question from a mathematical modeling perspective. In this study, we use a simple dynamic mathematical model that incorporates terms representing transcription, translation, mRNA and protein decay, and diffusion in an early Drosophila embryo. We perform global sensitivity analyses on this model using various different initial conditions and spatial and temporal outputs. Our results indicate that transcription and translation are often the key parameters determining protein abundance. This observation is in close agreement with the experimental results from mammalian cells for various initial conditions at particular time points, suggesting that a simple dynamic model can capture the qualitative behavior of a gene. Additionally, we find that parameter sensitivities are temporally dynamic, illustrating the importance of conducting a thorough global sensitivity analysis across multiple time points when analyzing mathematical models of gene regulation. PMID:26157608
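
    The transcription–translation core of such a dynamic model can be sketched as a pair of ODEs (diffusion omitted; all rate constants are illustrative, not fitted Drosophila values):

```python
# Minimal transcription-translation ODE sketch:
#   dm/dt = alpha - delta_m * m      transcription and mRNA decay
#   dp/dt = beta * m - delta_p * p   translation and protein decay
alpha, delta_m = 2.0, 0.5    # transcription rate, mRNA decay rate
beta, delta_p = 1.0, 0.1     # translation rate, protein decay rate

def steady_protein(a, b, t_end=200.0, dt=0.01):
    """Forward-Euler integration to (near) steady state from zero."""
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        m, p = (m + (a - delta_m * m) * dt,
                p + (b * m - delta_p * p) * dt)
    return p

p_base = steady_protein(alpha, beta)   # analytic value: a*b/(dm*dp) = 40

# One-at-a-time local sensitivity of steady-state protein to the
# transcription and translation rates (a simple probe, not the global
# analysis used in the study); both equal 1 for this linear cascade.
eps = 0.01
s_transcription = (steady_protein(alpha * (1 + eps), beta) - p_base) / (p_base * eps)
s_translation = (steady_protein(alpha, beta * (1 + eps)) - p_base) / (p_base * eps)
```

The equal sensitivities mirror the study's finding that transcription and translation jointly dominate protein abundance; the global analysis additionally varies all parameters at once and tracks the indices over time.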

  8. Parameter sensitivity and uncertainty of the forest carbon flux model FORUG : a Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Verbeeck, H.; Samson, R.; Lemeur, R. [Ghent Univ., Ghent (Belgium). Laboratory of Plant Ecology; Verdonck, F. [Ghent Univ., Ghent (Belgium). Dept. of Applied Mathematics, Biometrics and Process Control

    2006-06-15

    The FORUG model is a multi-layer process-based model that simulates carbon dioxide (CO2) and water exchange between forest stands and the atmosphere. The main model outputs are net ecosystem exchange (NEE), total ecosystem respiration (TER), gross primary production (GPP) and evapotranspiration. This study used a sensitivity analysis to identify the parameters contributing to NEE uncertainty in the FORUG model. The aim was to determine whether it is necessary to estimate the uncertainty of all parameters of a model to determine overall output uncertainty. Data used in the study were the meteorological and flux data of beech trees in Hesse. The Monte Carlo method was used to rank sensitivity and uncertainty parameters in combination with a multiple linear regression. Simulations were run in which parameters were assigned probability distributions and the effect of variance in the parameters on the output distribution was assessed. The uncertainty of the output for NEE was estimated. Based on the arbitrary uncertainty of 10 key parameters, a standard deviation of 0.88 Mg C per year for NEE was found, equal to 24 per cent of the mean value of NEE. The sensitivity analysis showed that the overall output uncertainty of the FORUG model could be determined by accounting for only a few key parameters, which were identified as corresponding to critical parameters in the literature. The 10 most important parameters determined more than 90 per cent of the output uncertainty. High-ranking parameters included soil respiration, photosynthesis, and crown architecture. It was concluded that the Monte Carlo technique is a useful tool for ranking the uncertainty of parameters of process-based forest flux models. 48 refs., 2 tabs., 2 figs.
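
    The Monte Carlo step, assigning probability distributions to parameters and examining the induced output distribution, can be sketched with a toy NEE function; the expression and all parameter ranges below are fabricated and are not the FORUG parameterization:

```python
import math
import random

# Monte Carlo uncertainty propagation through a toy NEE function
# (negative NEE = net carbon uptake). All numbers are illustrative.
random.seed(7)

def nee(r_soil, p_max, lai):
    gpp = p_max * (1.0 - math.exp(-0.5 * lai))   # saturating canopy uptake
    return r_soil - gpp

runs = []
for _ in range(5000):
    runs.append(nee(random.uniform(8.0, 12.0),    # soil respiration
                    random.uniform(12.0, 18.0),   # max photosynthesis
                    random.uniform(4.0, 6.0)))    # leaf area index

mean_nee = sum(runs) / len(runs)
std_nee = (sum((x - mean_nee) ** 2 for x in runs) / len(runs)) ** 0.5
spread_pct = 100.0 * std_nee / abs(mean_nee)      # output spread vs mean
```

Regressing the sampled parameters against the output, as the study does with multiple linear regression, would then apportion this spread among the individual parameters.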

  9. Sensitivity analysis of Repast computational ecology models with R/Repast.

    Science.gov (United States)

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on simulation output, and it should be a compulsory part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.

  10. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and yield accurate results, but with the drawback of needing substantial computational power and time for subsequent analysis. On the other hand, equation-based models can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored.

  11. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  12. Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction

    Science.gov (United States)

    Yurkovich, S.; Bugajski, D.; Sain, M.

    1985-01-01

    The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamical systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and the effect of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of the remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
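
The representation this record builds on — all monomials in the states and inputs up to a given degree — is straightforward to enumerate; pruning then amounts to dropping terms whose removal barely changes the residual. The sketch below shows only the enumeration step, with illustrative names.

```python
from itertools import combinations_with_replacement

def monomials(n_vars, max_degree):
    """Exponent tuples for all monomials in n_vars variables up to max_degree.

    (0, 0) is the constant term; (1, 0, 2) means x1 * x3**2.
    """
    terms = []
    for deg in range(max_degree + 1):
        for combo in combinations_with_replacement(range(n_vars), deg):
            expo = [0] * n_vars
            for v in combo:
                expo[v] += 1
            terms.append(tuple(expo))
    return terms

# Two variables up to degree 2: 1, x1, x2, x1^2, x1*x2, x2^2 -> 6 terms.
terms = monomials(2, 2)
```

    The count grows combinatorially (binomial(n + d, d) terms for n variables and degree d), which is exactly why residual-sensitivity pruning of superfluous monomials pays off.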

  13. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) in which different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound …

  14. Encoding Context-Sensitivity in Reo into Non-Context-Sensitive Semantic Models (Technical Report)

    NARCIS (Netherlands)

    S.-S.T.Q. Jongmans (Sung-Shik); , C. (born Köhler, , C.) Krause (Christian); F. Arbab (Farhad)

    2011-01-01

    Reo is a coordination language which can be used to model the interactions among a set of components or services in a compositional manner using connectors. The language concepts of Reo include synchronization, mutual exclusion, data manipulation, memory and context-dependency.

  15. Sensitivity of surface temperature to radiative forcing by contrail cirrus in a radiative-mixing model

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2017-11-01

    Earth's surface temperature sensitivity to radiative forcing (RF) by contrail cirrus and the related RF efficacy relative to CO2 are investigated in a one-dimensional idealized model of the atmosphere. The model includes energy transport by shortwave (SW) and longwave (LW) radiation and by mixing in an otherwise fixed reference atmosphere (no other feedbacks). Mixing includes convective adjustment and turbulent diffusion, where the latter is related to the vertical component of mixing by large-scale eddies. The conceptual study shows that the surface temperature sensitivity to a given contrail RF depends strongly on the timescales of energy transport by mixing and radiation. The timescales are derived for steady layered heating (ghost forcing) and for a transient contrail cirrus case. The radiative timescales are shortest at the surface and shorter in the troposphere than in the mid-stratosphere. Without mixing, a large part of the energy introduced into the upper troposphere by radiation due to contrails or similar disturbances is lost to space before it can contribute to surface warming. Because of the different radiative forcing at the surface and at the top of the atmosphere (TOA), and the different radiative heating rate profiles in the troposphere, the local surface temperature sensitivity to stratosphere-adjusted RF is larger for SW than for LW contrail forcing. Without mixing, the surface energy budget is more important for surface warming than the TOA budget. Hence, surface warming by contrails is smaller than suggested by the net RF at TOA. For zero mixing, cooling by contrails cannot be excluded. This may in part explain the low efficacy values for contrails found in previous global circulation model studies. Possible implications of this study are discussed. Since the results of this study are model dependent, they should be tested with a comprehensive climate model in the future.

  16. A sensitivity study for soil-structure interaction

    International Nuclear Information System (INIS)

    Kunar, R.R.; White, D.C.; Ashdown, M.J.; Waker, C.H.; Daintith, D.

    1981-01-01

    This paper presents the results of a study in which the sensitivity of a containment building typical of one type of construction used in the nuclear reprocessing industry is examined for variations in soil data and seismic input. A number of dynamic soil-structure interaction analyses are performed on the structure and its foundations using parametric variations of the depth of the soil layer, soil material properties, bedrock flexibility, seismic input location, and the time and phase characteristics of the earthquake excitation. Previous experience is combined with the results obtained to generalise conclusions regarding the conditions under which each of the uncertainties will be significant enough to merit proper statistical treatment. (orig.)

  17. Ambient pressure sensitivity of microbubbles investigated through a parameter study

    DEFF Research Database (Denmark)

    Andersen, Klaus Scheldrup; Jensen, Jørgen Arendt

    2009-01-01

    Measurements on microbubbles clearly indicate a relation between the ambient pressure and the acoustic behavior of the bubble. The purpose of this study was to optimize the sensitivity of ambient pressure measurements, using the subharmonic component, through microbubble response simulations. The behavior of two microbubbles corresponding to two different contrast agents was investigated as a function of driving pulse and ambient overpressure, pov. Simulations of Levovist using a rectangular driving pulse show an almost linear reduction in the subharmonic component as pov is increased. For a 20 … found, although the reduction is not completely linear as a function of the ambient pressure.

  18. Parameter Sensitivity Study for Typical Expander-Based Transcritical CO2 Refrigeration Cycles

    Directory of Open Access Journals (Sweden)

    Bo Zhang

    2018-05-01

    A sensitivity study was conducted for three typical expander-based transcritical CO2 cycles with the developed simulation model, and the sensitivities of the maximum coefficient of performance (COP) to the key operating parameters, including the inlet pressure of the gas cooler, the temperatures at the evaporator inlet and gas cooler outlet, the inter-stage pressure, and the isentropic efficiency of the expander, were obtained. The results showed that the sensitivity to the gas cooler inlet pressure differs greatly before and after the optimal gas cooler inlet pressure. The sensitivity to the intercooler outlet temperature in the two-stage cycles increases sharply to near zero and then remains almost constant at intercooler outlet temperatures above 45 °C. However, the sensitivity stabilizes near zero when the evaporator inlet temperature is as low as −26.1 °C. In the two-stage compression cycle with an intercooler and an expander assisting in driving the first-stage compressor (TEADFC cycle), an abrupt change in the sensitivity of the maximum COP to the inter-stage pressure was observed, but it disappears once the intercooler outlet temperature exceeds 50 °C. The sensitivity of the maximum COP to the expander isentropic efficiency increases almost linearly with the expander isentropic efficiency.
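
A sensitivity of the kind reported here — of COP to an operating parameter — is in practice a partial derivative estimated by finite differences around an operating point. The quadratic stand-in for the cycle model below is purely illustrative; a real study would call a full thermodynamic cycle simulation instead, and the optimum pressure of 9.0 MPa is an assumed value.

```python
def cop(p_gc):
    """Hypothetical stand-in for a cycle model: COP vs. gas cooler inlet
    pressure (MPa), with its maximum placed at p = 9.0 for illustration."""
    return 4.0 - 0.05 * (p_gc - 9.0) ** 2

def sensitivity(f, x, h=1e-4):
    # Central finite difference: df/dx at x.
    return (f(x + h) - f(x - h)) / (2.0 * h)

s_below = sensitivity(cop, 8.0)   # positive: COP still rising with pressure
s_above = sensitivity(cop, 10.0)  # negative: past the optimal pressure
```

    The sign change of the sensitivity across the optimum is exactly the "differs greatly before and after the optimal gas cooler inlet pressure" behaviour the abstract describes.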

  19. Uncertainty and sensitivity studies supporting the interpretation of the results of TVO I/II PRA

    International Nuclear Information System (INIS)

    Holmberg, J.

    1992-01-01

    A comprehensive Level 1 probabilistic risk assessment (PRA) has been performed for the TVO I/II nuclear power units. As part of the PRA project, uncertainties in the risk models and methods were systematically studied in order to describe them and to demonstrate their impact on the results. The uncertainty study was divided into two phases: a qualitative and a quantitative study. The qualitative study comprised the identification of uncertainties and qualitative assessments of their importance. The PRA was introduced, and the assumptions and uncertainties identified behind the models were documented. The most significant uncertainties were selected by importance measures or other judgements for further quantitative study. The quantitative study included sensitivity studies and propagation of uncertainty ranges. In the sensitivity studies, uncertain assumptions or parameters were varied in order to illustrate the sensitivity of the models. The propagation of the uncertainty ranges demonstrated the impact of the statistical uncertainties in the parameter values; the Monte Carlo method was used as the propagation method. The most significant uncertainties were those involved in modelling human interactions, dependences and common cause failures (CCFs), loss of coolant accident (LOCA) frequencies, and pressure suppression. The qualitative mapping of the uncertainty factors proved useful in planning the quantitative studies, and it also served as an internal review of the assumptions made in the PRA. The sensitivity studies were perhaps the most advantageous part of the quantitative study because they allowed individual analyses of the significance of the uncertainty sources identified. The uncertainty study proved a reasonable way of systematically and critically assessing uncertainties in a risk analysis. The usefulness of this study depends on the decision maker (power company), since uncertainty studies are primarily carried out to support decision making when uncertainties are …
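
The Monte Carlo propagation step mentioned above can be sketched generically: draw parameter values from their uncertainty distributions, evaluate the risk model for each draw, and read percentiles off the resulting sample. The two-parameter "risk model" and both distributions here are made-up placeholders, not TVO data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain inputs: a lognormal failure rate and a beta-distributed
# human error probability (illustrative distributions only).
failure_rate = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
human_error = rng.beta(a=2.0, b=50.0, size=n)

# Placeholder risk model: an accident frequency per year.
freq_per_year = failure_rate * human_error

p05, p50, p95 = np.percentile(freq_per_year, [5, 50, 95])
# The 5th-95th percentile spread summarises the propagated uncertainty.
```

    In a full PRA the "model" line is the minimal-cut-set equation of the fault/event trees, but the sampling-and-percentile pattern is the same.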

  20. Two-Dimensional Modeling of Heat and Moisture Dynamics in Swedish Roads: Model Set up and Parameter Sensitivity

    Science.gov (United States)

    Rasul, H.; Wu, M.; Olofsson, B.

    2017-12-01

    Modelling moisture and heat changes in road layers is very important for understanding road hydrology and for the construction and maintenance of roads in a sustainable manner. In cold regions, the freezing/thawing process in the partially saturated road material makes the modelling task more complicated than a simple model of flow through porous media that neglects freezing/thawing in the pores. This study presents a 2-D model simulation of a highway section that considers freezing/thawing and vapour changes. Partial differential equations (PDEs) are used to formulate the model. Parameters are optimized from modelling results based on measured data from a test station on the E18 highway near Stockholm. The impact of considering phase change in the modelling is assessed by comparing the modelled soil moisture with TDR-measured data. The results show that the model can be used to predict water and ice content in different layers of the road and in different seasons. Parameter sensitivities are analyzed by implementing a calibration strategy. In addition, the phase-change treatment is evaluated by comparing the PDE model with a model that neglects freezing/thawing in roads. The PDE model shows high potential for understanding the moisture dynamics in the road system.

  1. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.

    1992-01-01

    The work done on this project was focused mainly on LAMPF experiment E969, known as the MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio BR = Γ(μ → eγ)/Γ(μ → eν_μν_e) ∼ 10⁻¹³, is over two orders of magnitude better than previously reported values. The work done on MEGA during this period was divided between that done at Valparaiso University and that done at LAMPF. In addition, some contributions were made to a proposal to the LAMPF PAC to perform a precision measurement of the Michel ρ parameter, described below.

  2. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    1994-01-01

    The work done on this project focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio BR = Γ(μ → eγ)/Γ(μ → eν_μν_e) ∼ 10⁻¹³, will be over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μν_e to test the predictions of the V−A theory of weak interactions. In this experiment the uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the presently reported value. The detectors are operational, and data taking has begun.

  3. High sensitivity tests of the standard model for electroweak interactions

    International Nuclear Information System (INIS)

    Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.

    1993-01-01

    The work done on this project was focused on two LAMPF experiments. The MEGA experiment is a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio BR = Γ(μ → eγ)/Γ(μ → eν_μν_e) ∼ 10⁻¹³, is over two orders of magnitude better than previously reported values. The second is a precision measurement of the Michel ρ parameter from the positron energy spectrum of μ → eν_μν_e to test the V−A theory of weak interactions. The uncertainty in the measurement of the Michel ρ parameter is expected to be a factor of three lower than the presently reported value.

  4. Sensitivity analysis of the noise-induced oscillatory multistability in Higgins model of glycolysis

    Science.gov (United States)

    Ryashko, Lev

    2018-03-01

    A phenomenon of the noise-induced oscillatory multistability in glycolysis is studied. As a basic deterministic skeleton, we consider the two-dimensional Higgins model. The noise-induced generation of mixed-mode stochastic oscillations is studied in various parametric zones. Probabilistic mechanisms of the stochastic excitability of equilibria and noise-induced splitting of randomly forced cycles are analysed by the stochastic sensitivity function technique. A parametric zone of supersensitive Canard-type cycles is localized and studied in detail. It is shown that the generation of mixed-mode stochastic oscillations is accompanied by the noise-induced transitions from order to chaos.
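
Noise-induced oscillations of the kind analysed here can be reproduced qualitatively with an Euler–Maruyama integration of a two-variable glycolytic oscillator under additive noise. The Selkov-type equations and parameter values below are a generic stand-in for illustration, not necessarily the exact Higgins formulation or parameter zones used in the paper.

```python
import math
import random

def euler_maruyama(a=0.1, b=0.5, eps=0.01, dt=1e-3, steps=100_000, seed=1):
    """Selkov-type glycolytic oscillator with additive noise of intensity eps:
        dx = (-x + a*y + x^2*y) dt + eps dW1
        dy = ( b  - a*y - x^2*y) dt + eps dW2
    Returns the x-trajectory.
    """
    rng = random.Random(seed)
    x, y = 1.0, 1.0
    xs = []
    sq = math.sqrt(dt)
    for _ in range(steps):
        fx = -x + a * y + x * x * y
        fy = b - a * y - x * x * y
        x += fx * dt + eps * sq * rng.gauss(0.0, 1.0)
        y += fy * dt + eps * sq * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = euler_maruyama()
# For these parameters the stochastic trajectory stays bounded; varying eps
# moves the system between small fluctuations and large-amplitude excursions.
```

    Sweeping the noise intensity eps while recording the amplitude distribution of xs is the numerical counterpart of the stochastic sensitivity function analysis described above.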

  5. Application of perturbation theory to a two-channel model for sensitivity calculations in PWR cores

    International Nuclear Information System (INIS)

    Oliveira, A.C.J.G. de; Andrade Lima, F.R. de

    1989-01-01

    The present work is an application of perturbation theory (matrix formalism) to a simplified two-channel model for sensitivity calculations in PWR cores. Expressions for several sensitivity coefficients of thermohydraulic interest were developed from the proposed model. The code CASNUR.FOR was written in FORTRAN to evaluate these sensitivity coefficients. The comparison between results obtained from the matrix formalism of perturbation theory and those obtained directly from the two-channel model makes evident the efficiency and potential of this perturbation method for sensitivity calculations in nuclear reactor cores. (author)

  6. Sensitivity studies for a space-based methane lidar mission

    Directory of Open Access Journals (Sweden)

    C. Kiemle

    2011-10-01

    Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major obstacle to quantifying the emissions at the Earth's surface, in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks, is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has the potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in polar orbit was proposed by DLR and CNES in the framework of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and a 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurements along the sub-satellite track with a precision of about 1% over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. In contrast to current passive remote sensors, measurements in polar regions will be possible, and biases due to aerosol …

  7. Studies on radiation-sensitive nonsilver halide materials, (1)

    International Nuclear Information System (INIS)

    Komizu, Hideo; Honda, Koichi; Yabe, Akira; Kawasaki, Masami; Fujii, Etsuo

    1978-01-01

    In order to discover new radiation-sensitive nonsilver halide materials, the coloration based on the formation of Stenhouse salts was studied in the following three systems: (a) furfural-amine/HCl aq/methanol solution, (b) furfural-amine/polyhalogenide/PMMA matrix, (c) furfural-amine/PVC matrix. First, forty-five aromatic amines were surveyed to find the amines suitable as color precursors (the reactant of furfural and amine) in system (a). As a result, five amines, which gave the precursors in good yields by reaction with furfural, were selected: m-nitroaniline, N-methylaniline, m-methyl-N-methylaniline, aniline, and o-methoxyaniline. Second, the coloration induced by electron beam bombardment was studied in systems (b) and (c) containing the color precursors (the reactants of these amines with furfural). Although the PMMA films containing the color precursors and polyhalogenides were sensitive to the electron beam, they were not stable when left under daylight at room temperature. The PVC films containing the color precursors were very stable and colored to reddish yellow (λmax 498–545 nm) on electron beam bombardment. The PVC film containing N-methylaniline-furfural was the most sensitive, and the increase in absorbance at 498 nm was 0.78 for electron beam bombardment of 60 kV, 7.5 × 10⁻⁷ C/cm². A good linear relationship existed between the degree of coloration and the electron beam dose in the range from 0 to 10⁻⁶ C/cm². (author)

  8. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). The inputs of ANFIS are the Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration subset (55%), a training subset (27%), and a test subset (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid; it shows that ANFIS remains stable under error propagation, with a higher sensitivity to soil elevation.
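
The "error cell" criterion used for model selection here is simple to state: on the interpolation grid, flag cells where the interpolated hydraulic head exceeds the soil elevation, and rank candidate models by that count. A minimal numpy sketch with made-up grids:

```python
import numpy as np

# Made-up 4x4 grids (metres): interpolated head and soil elevation.
head = np.array([[10.0, 11.0, 12.0, 13.0],
                 [10.5, 11.5, 12.5, 13.5],
                 [11.0, 12.0, 13.0, 14.0],
                 [11.5, 12.5, 13.5, 14.5]])
soil = np.full((4, 4), 12.0)

error_cells = head > soil          # head above the ground surface is physically suspect
n_errors = int(error_cells.sum())  # count used to rank candidate models
```

    Running this count once per candidate interpolator and keeping the model with the smallest n_errors reproduces the selection step described above.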

  9. An Investigation on the Sensitivity of the Parameters of Urban Flood Model

    Science.gov (United States)

    M, A. B.; Lohani, B.; Jain, A.

    2015-12-01

    Global climatic change has triggered weather patterns which lead to heavy and sudden rainfall in different parts of the world. The impact of heavy rainfall is especially severe on urban areas, in the form of urban flooding. In order to understand the effect of flooding induced by heavy rainfall, it is necessary to model the entire flooding scenario more accurately, which is now becoming possible with the availability of high-resolution airborne LiDAR data and other real-time observations. However, there is not much understanding of the optimal use of these data or of the effect of other parameters on the performance of the flood model. This study aims at developing understanding of these issues. In view of the above discussion, the aims of this study are to (i) understand how the use of high-resolution LiDAR data improves the performance of an urban flood model, and (ii) understand the sensitivity of various hydrological parameters in urban flood modelling. In this study, modelling of flooding in urban areas due to heavy rainfall is carried out considering the Indian Institute of Technology (IIT) Kanpur, India as the study site. The existing model MIKE FLOOD, which is accepted by the Federal Emergency Management Agency (FEMA), is used along with the high-resolution airborne LiDAR data. Once the model is set up, it is run while varying parameters such as the resolution of the Digital Surface Model (DSM), Manning's roughness, initial losses, catchment description, concentration time, and runoff reduction factor. To this end, the results obtained from the model are compared with field observations. The parametric study carried out in this work demonstrates that the choice of catchment description plays a very important role in urban flood modelling. Results also show the significant impact of DSM resolution, initial losses and concentration time on the urban flood model. This study will help in understanding the effect of various parameters that should be part of a …

  10. Sensitivity analysis and calibration of a dynamic physically based slope stability model