WorldWideScience

Sample records for model sensitivity analyses

  1. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy, and the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
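
    A minimal sketch of the propagation scheme described above: each stage's uncertainty is represented by resampling that stage's empirical residuals while the model chain is evaluated. The two-stage chain, residual arrays, and coefficients below are hypothetical stand-ins for the report's irradiance and power models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical residuals, as if collected when fitting each model to measurements.
poa_residuals = rng.normal(0.0, 20.0, size=500)  # W/m^2, POA-model errors
ac_residuals = rng.normal(0.0, 1.5, size=500)    # kW, power-model errors

def poa_model(ghi):
    """Toy plane-of-array irradiance model (stand-in for a transposition model)."""
    return 1.1 * ghi

def ac_power_model(poa):
    """Toy AC power model: efficiency x array area (m^2) x irradiance."""
    return 0.15 * 1600.0 * poa / 1000.0  # kW

ghi = np.array([200.0, 450.0, 700.0, 850.0])  # measured GHI samples, W/m^2

n_draws = 10_000
outputs = np.empty((n_draws, ghi.size))
for i in range(n_draws):
    # Add a resampled residual after each stage to propagate model uncertainty.
    poa = poa_model(ghi) + rng.choice(poa_residuals, size=ghi.size)
    outputs[i] = ac_power_model(poa) + rng.choice(ac_residuals, size=ghi.size)

# The spread of the empirical output distribution reflects combined uncertainty.
print("mean output (kW):", outputs.mean(axis=0).round(1))
print("output std (kW):", outputs.std(axis=0).round(2))
```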

  2. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  3. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.

  4. Three-dimensional lake water quality modeling: sensitivity and uncertainty analyses.

    Science.gov (United States)

    Missaghi, Shahram; Hondzo, Miki; Melching, Charles

    2013-11-01

    Two sensitivity and uncertainty analysis methods are applied to a three-dimensional coupled hydrodynamic-ecological model (ELCOM-CAEDYM) of a morphologically complex lake. The primary goals of the analyses are to increase confidence in the model predictions, identify influential model parameters, quantify the uncertainty of model prediction, and explore the spatial and temporal variabilities of model predictions. The influence of model parameters on four model-predicted variables (model output) and the contributions of each of the model-predicted variables to the total variations in model output are presented. Predicted water temperature, dissolved oxygen, total phosphorus, and algal biomass contributed 3, 13, 26, and 58% of the total model output variance, respectively. The fraction of variance resulting from model parameter uncertainty was calculated by two methods and used for evaluation and ranking of the most influential model parameters. Nine out of the top 10 parameters identified by each method agreed, but their ranks were different. Spatial and temporal changes of model uncertainty were investigated and visualized. Model uncertainty appeared to be concentrated around specific water depths and dates that corresponded to significant storm events. The results suggest that spatial and temporal variations in the predicted water quality variables are sensitive to the hydrodynamics of physical perturbations such as those caused by stream inflows generated by storm events. The sensitivity and uncertainty analyses identified the mineralization of dissolved organic carbon, sediment phosphorus release rate, algal metabolic loss rate, internal phosphorus concentration, and phosphorus uptake rate as the most influential model parameters.
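
    The abstract does not name the two variance-fraction methods it compares; one common choice is the variance-based first-order Sobol index, sketched below with the pick-freeze estimator of Saltelli and colleagues on a toy three-parameter function. All model details here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for a lake-model output: nonlinear in three 'parameters'."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n = 100_000
a = rng.uniform(0.0, 1.0, size=(n, 3))
b = rng.uniform(0.0, 1.0, size=(n, 3))
f_a, f_b = model(a), model(b)
var_total = np.var(np.concatenate([f_a, f_b]))

# Pick-freeze estimator: S_i = V(E[Y|X_i]) / V(Y), using paired sample matrices.
for i in range(3):
    ab_i = a.copy()
    ab_i[:, i] = b[:, i]          # replace only the i-th factor with matrix B's
    f_ab = model(ab_i)
    s_i = np.mean(f_b * (f_ab - f_a)) / var_total
    print(f"first-order index S_{i} = {s_i:.3f}")
```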

  5. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  6. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data was available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m−2) were found when excluding snow insulation or phase change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  7. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.

  8. Sampling and sensitivity analyses tools (SaSAT) for computational modelling.

    Science.gov (United States)

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-02-27

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab, a numerical mathematical software package, and utilises algorithms contained in the Matlab Statistics Toolbox. However, Matlab is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
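
    As a sketch of two of the capabilities listed above, the snippet below draws a Latin hypercube sample over a three-parameter space and computes partial rank correlation coefficients against a toy response. This is not SaSAT code (SaSAT is a Matlab toolbox); it is an illustration of the same ideas in Python, with all parameter ranges invented.

```python
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(1)

# Latin hypercube sample of a 3-parameter space (one of SaSAT's sampling schemes).
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=200)
x = qmc.scale(unit, l_bounds=[0.1, 0.0, 1.0], u_bounds=[0.5, 2.0, 10.0])

# Toy epidemic-style response standing in for a model run at each sample point.
y = x[:, 0] * x[:, 2] / (1.0 + x[:, 1]) + rng.normal(0.0, 0.05, size=200)

def prcc(x, y):
    """Partial rank correlation coefficient of each column of x with y."""
    rx = np.column_stack([rankdata(col) for col in x.T])
    ry = rankdata(y)
    out = []
    for i in range(rx.shape[1]):
        others = np.delete(rx, i, axis=1)
        a = np.column_stack([np.ones(len(ry)), others])
        # Residualise x_i and y on the remaining ranked inputs, then correlate.
        res_xi = rx[:, i] - a @ np.linalg.lstsq(a, rx[:, i], rcond=None)[0]
        res_y = ry - a @ np.linalg.lstsq(a, ry, rcond=None)[0]
        out.append(np.corrcoef(res_xi, res_y)[0, 1])
    return out

print([f"{c:.2f}" for c in prcc(x, y)])
```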

  9. Uncertainty and Sensitivity Analyses Plan

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  10. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing, partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary effects
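
    A compact illustration of the screening step mentioned at the end of the abstract, Morris' elementary effects method: one-factor-at-a-time steps along random trajectories yield distributions of elementary effects, whose mean magnitude (mu*) and spread (sigma) rank the factors. The toy three-parameter "transport" function below is hypothetical, not the HYDRUS module.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(p):
    """Toy stand-in for a colloid-transport output (e.g., breakthrough mass)."""
    attach, detach, strain = p
    return attach / (detach + 0.1) + 5.0 * strain ** 2

k, r, delta = 3, 20, 0.25          # factors, trajectories, OAT step (unit cube)
effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=k)
    y0 = model(x)
    for i in rng.permutation(k):   # perturb one factor at a time
        x_new = x.copy()
        x_new[i] += delta
        y1 = model(x_new)
        effects[i].append((y1 - y0) / delta)
        x, y0 = x_new, y1          # continue the trajectory from the moved point

for i in range(k):
    ee = np.array(effects[i])
    # mu* flags overall influence; sigma flags nonlinearity or interactions.
    print(f"factor {i}: mu* = {np.abs(ee).mean():.2f}, sigma = {ee.std():.2f}")
```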

  11. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
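
    A small numerical sketch of the core idea above: maximize the probability that the management outcome exceeds an acceptability threshold when per-action effects are normally distributed. The three actions, their means, and variances are invented; the paper derives analytic optima, whereas this sketch optimizes numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical setup: population benefit is linear in funds allocated to three
# actions, with normally distributed (uncertain) per-dollar effects b ~ N(mu, Sigma).
mu = np.array([0.8, 1.0, 0.6])
sigma = np.diag([0.1, 0.4, 0.05])            # action 2 is effective but uncertain
budget, threshold = 1.0, 0.7

def neg_prob(x):
    mean = mu @ x
    sd = np.sqrt(x @ sigma @ x)
    return -norm.cdf((mean - threshold) / sd)  # minimize -P(outcome >= threshold)

cons = [{"type": "eq", "fun": lambda x: x.sum() - budget}]
res = minimize(neg_prob, x0=np.full(3, budget / 3), bounds=[(0, budget)] * 3,
               constraints=cons, method="SLSQP")
print("allocation:", res.x.round(3), " P(success):", round(-res.fun, 3))
```

    Lowering `threshold` drives the optimum toward a diversified allocation, while raising it concentrates funds on the action with the highest expected effect, mirroring the low- versus high-aspiration result described above.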

  12. A new non-randomized model for analysing sensitive questions with binary outcomes.

    Science.gov (United States)

    Tian, Guo-Liang; Yu, Jun-Wu; Tang, Man-Lai; Geng, Zhi

    2007-10-15

    We propose a new non-randomized model for assessing the association of two sensitive questions with binary outcomes. Under the new model, respondents only need to answer a non-sensitive question instead of the original two sensitive questions. As a result, it can protect a respondent's privacy, avoid the usage of any randomizing device, and be applied to both the face-to-face interview and mail questionnaire. We derive the constrained maximum likelihood estimates of the cell probabilities and the odds ratio for two binary variables associated with the sensitive questions via the EM algorithm. The corresponding standard error estimates are then obtained by a bootstrap approach. A likelihood ratio test and a chi-squared test are developed for testing association between the two binary variables. We discuss the loss of information due to the introduction of the non-sensitive question, and the design of the co-operative parameters. Simulations are performed to evaluate the empirical type I error rates and powers for the two tests. In addition, a simulation is conducted to study the relationship between the probability of obtaining valid estimates and the sample size for any given cell probability vector. A real data set from an AIDS study is used to illustrate the proposed methodologies.

  13. Crowd-structure interaction in footbridges: Modelling, application to a real case-study and sensitivity analyses

    Science.gov (United States)

    Bruno, Luca; Venuti, Fiammetta

    2009-06-01

    A mathematical and computational model used to simulate crowd-structure interaction in lively footbridges is presented in this work. The model is based on the mathematical and numerical decomposition of the coupled multiphysical nonlinear system into two interacting subsystems. The model was conceived to simulate the synchronous lateral excitation phenomenon caused by pedestrians walking on footbridges. The model was first applied to simulate a crowd event on an actual footbridge, the T-bridge in Japan. Three sensitivity analyses were then performed on the same benchmark to evaluate the properties of the model. The simulation results show good agreement with the experimental data found in the literature, and the model could be considered a useful tool for designers and engineers in the different phases of footbridge design.

  14. Tests of methods and software for set-valued model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Janssen PHM; Sanders R; CWM

    1995-01-01

    Tests are discussed that were performed on methods and software for calibration by means of 'rotated-random-scanning', and for sensitivity analysis based on 'dominant direction analysis' and 'generalized sensitivity analysis'. These techniques were recently

  15. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
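
    The "pinching" idea lends itself to a quick Monte Carlo sketch: compare output uncertainty with an epistemic input spanning its full range against the same input pinched to a point value. The dike-style margin model and all numbers below are invented, and the epistemic interval is crudely sampled rather than treated as a true probability box.

```python
import numpy as np

rng = np.random.default_rng(3)

def margin(load, strength):
    """Toy reliability margin standing in for the dike assessment in the report."""
    return strength - load

n = 50_000
load = rng.normal(5.0, 1.0, n)                  # aleatory input
strength_lo, strength_hi = 6.0, 9.0             # epistemic interval for strength

# Baseline: epistemic uncertainty represented crudely by sampling the interval.
strength = rng.uniform(strength_lo, strength_hi, n)
base_width = np.ptp(np.percentile(margin(load, strength), [2.5, 97.5]))

# "Pinch" the epistemic input to a point value and re-measure output uncertainty.
pinched = margin(load, np.full(n, 7.5))
pinch_width = np.ptp(np.percentile(pinched, [2.5, 97.5]))

print(f"uncertainty reduction from pinching strength: "
      f"{100 * (1 - pinch_width / base_width):.1f}%")
```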

  16. Experiments and sensitivity analyses for heat transfer in a meter-scale regularly fractured granite model with water flow

    Institute of Scientific and Technical Information of China (English)

    Wei LU; Yan-yong XIANG

    2012-01-01

    Experiments on saturated water flow and heat transfer were conducted for a meter-scale model of regularly fractured granite. The fractured rock model (height 1502.5 mm, width 904 mm, and thickness 300 mm), embedded with two vertical and two horizontal fractures of pre-set apertures, was constructed using 18 pieces of intact granite. The granite was taken from a site currently being investigated for a high-level nuclear waste repository in China. The experiments involved different heat source temperatures and vertical water fluxes, with the embedded fractures either open or filled with sand. A finite difference scheme and computer code for calculation of water flow and heat transfer in regularly fractured rocks was developed, verified against both the experimental data and calculations from the TOUGH2 code, and employed for parametric sensitivity analyses. The experiments revealed that, among other things, the temperature distribution was influenced by water flow in the fractures, especially the water flow in the vertical fracture adjacent to the heat source, and that the heat conduction between the neighboring rock blocks in the model with sand-filled fractures was enhanced by the sand, with a larger range of influence of the heat source and a longer time for approaching asymptotic steady state than those of the model with open fractures. The temperatures from the experiments were in general slightly lower than those from the numerical calculations, probably because a certain amount of outward heat transfer at the model perimeter was unavoidable in the experiments. The parametric sensitivity analyses indicated that the temperature distribution was highly sensitive to water flow in the fractures, and the water temperature in the vertical fracture adjacent to the heat source was rather insensitive to water flow in other fractures.
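
    A minimal one-dimensional explicit finite-difference conduction sketch in the spirit of the scheme described above (the actual code also handles advective heat transport by fracture flow). Geometry and material properties here are illustrative only.

```python
import numpy as np

# 1-D explicit finite-difference heat conduction with a fixed-temperature source.
L, nx = 1.5, 151                  # m, grid points (roughly the model height)
dx = L / (nx - 1)
alpha = 1.2e-6                    # m^2/s, typical thermal diffusivity of granite
dt = 0.4 * dx * dx / alpha        # satisfies stability limit dt <= dx^2/(2*alpha)

T = np.full(nx, 20.0)             # initial temperature, deg C
T[0] = 80.0                       # heat-source boundary

for _ in range(20_000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                 # insulated far boundary

print("temperature 10 cm from source: %.1f degC" % T[int(0.1 / dx)])
```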

  17. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat

  18. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along

  19. Sensitivity to model geometry in finite element analyses of reconstructed skeletal structures: experience with a juvenile pelvis.

    Science.gov (United States)

    Watson, Peter J; Fagan, Michael J; Dobson, Catherine A

    2015-01-01

    Biomechanical analysis of juvenile pelvic growth can be used in the evaluation of medical devices and investigation of hip joint disorders. This requires access to scan data of healthy juveniles, which are not always freely available. This article analyses the application of a geometric morphometric technique, which facilitates the reconstruction of the articulated juvenile pelvis from cadaveric remains, in biomechanical modelling. The sensitivity of variation in reconstructed morphologies upon predicted stress/strain distributions is of particular interest. A series of finite element analyses of a 9-year-old hemi-pelvis were performed to examine differences in predicted strain distributions between a reconstructed model and the originally fully articulated specimen. Only minor differences in the minimum principal strain distributions were observed between two varying hemi-pelvic morphologies and that of the original articulation. A Wilcoxon rank-sum test determined there was no statistical significance between the nodal strains recorded at 60 locations throughout the hemi-pelvic structures. This example suggests that finite element models created by this geometric morphometric reconstruction technique can be used with confidence, and as observed with this hemi-pelvis model, even a visual morphological difference does not significantly affect the predicted results. The validated use of this geometric morphometric reconstruction technique in biomechanical modelling reduces the dependency on clinical scan data.
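
    The statistical comparison described above is straightforward to reproduce. The sketch below applies the Wilcoxon rank-sum test to hypothetical nodal strains at 60 matched locations on two model variants; the strain values are invented, not the study's data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)

# Hypothetical nodal strains sampled at 60 matched locations on two FE models:
# the originally articulated pelvis and a geometric-morphometric reconstruction.
strain_original = rng.lognormal(mean=-7.0, sigma=0.4, size=60)
strain_reconstructed = strain_original * rng.normal(1.0, 0.05, size=60)

stat, p = ranksums(strain_original, strain_reconstructed)
print(f"Wilcoxon rank-sum: statistic = {stat:.2f}, p = {p:.3f}")
# p > 0.05 -> no evidence the reconstruction shifts the strain distribution.
```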

  20. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  1. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  2. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents to maintain traditional irrigation policies and uphold cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
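
    One of the calibration metrics named above, Theil's inequality statistics, can be sketched in a few lines: the inequality coefficient U plus its decomposition into bias (UM), unequal-variance (US), and covariance (UC) proportions, which sum to one. The observed and predicted series below are invented.

```python
import numpy as np

def theil_decomposition(pred, obs):
    """Theil inequality coefficient U and its bias/variance/covariance shares."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    u = np.sqrt(mse) / (np.sqrt(np.mean(pred**2)) + np.sqrt(np.mean(obs**2)))
    sp, so = pred.std(), obs.std()
    r = np.corrcoef(pred, obs)[0, 1]
    um = (pred.mean() - obs.mean()) ** 2 / mse         # bias proportion
    us = (sp - so) ** 2 / mse                          # unequal-variance proportion
    uc = 2.0 * (1.0 - r) * sp * so / mse               # covariance proportion
    return u, um, us, uc

# Hypothetical observed vs. simulated irrigated acreage over ten years.
obs = np.array([120, 118, 115, 117, 110, 108, 109, 105, 104, 100], float)
pred = np.array([122, 117, 116, 114, 112, 107, 108, 106, 102, 101], float)
print("U=%.3f  UM=%.2f  US=%.2f  UC=%.2f" % theil_decomposition(pred, obs))
```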

  3. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface Quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere; thus radionuclide transport in this system has been extensively investigated in recent years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that, in the case of a release of radioactivity, the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the Quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of ¹³⁵Cs, ⁵⁹Ni, ²³⁰Th and ²²⁶Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
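
    The decay-chain implementation mentioned in the title reduces, for each chain segment, to Bateman-type ingrowth and decay terms. A minimal sketch for the ²³⁰Th to ²²⁶Ra pair follows; half-lives are standard values, while transport and sorption are omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal decay-chain sketch (Th-230 -> Ra-226), illustrating the ingrowth term
# added to the transport equations; the transport itself is not modeled here.
half_life = {"Th230": 7.54e4, "Ra226": 1.60e3}          # years
lam = {k: np.log(2.0) / v for k, v in half_life.items()}

def bateman(t, n):
    n_th, n_ra = n
    return [-lam["Th230"] * n_th,
            lam["Th230"] * n_th - lam["Ra226"] * n_ra]   # ingrowth minus decay

sol = solve_ivp(bateman, (0.0, 1.0e4), [1.0, 0.0], dense_output=True)
t = 1.0e4
n_th, n_ra = sol.sol(t)
print(f"after {t:.0f} y: Th-230 = {n_th:.3f}, Ra-226 ingrown = {n_ra:.4f}")
```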

  4. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential, and agriculture), and the contribution of one emission sector to PM2.5 represents the contributions of all species in that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential to PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx), driven by meteorological predictions of the Weather Research and Forecasting (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  5. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    Directory of Open Access Journals (Sweden)

    A. Nickless

    2014-05-01

    This is the second part of a two-part paper considering network design based on a Lagrangian stochastic particle dispersion model (LPDM), aimed at reducing the uncertainty of the flux estimates achievable for the region of interest by the continuous observation of atmospheric CO2 concentrations at fixed monitoring stations. The LPDM, which can be used to derive the sensitivity matrix used in an inversion, was run for each potential site for the months of July (representative of the Southern Hemisphere winter) and January (summer). The magnitude of the boundary contributions to each potential observation site was tested to determine its inclusion in the network design, but was found to be minimal. Through the use of the Bayesian inverse modelling technique, the sensitivity matrix, together with the prior estimates for the covariance matrices of the observations and surface fluxes, was used to calculate the posterior covariance matrix of the estimated fluxes, which in turn was used to calculate the cost function of the optimisation procedure. The optimisation procedure was carried out for South Africa under a standard set of conditions, similar to those applied to the Australian test case in Part 1, for both months and for the combined two months. The conditions were subtly changed, one at a time, the optimisation routine was re-run under each set of modified conditions, and the result was compared to the original optimal network design. The results showed that changing the height of the surface grid cells, including an uncertainty estimate for the oceans, or increasing the night-time observational uncertainty did not result in any major changes in the positioning of the stations relative to the basic design, but changing the covariance matrix or increasing the spatial resolution did. The genetic algorithm was able to find a slightly better solution than the incremental optimisation procedure, but did not drastically alter the solution compared to the standard case.
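
    The quantities described above combine in the standard Bayesian inversion formula: the posterior flux covariance is A = (Hᵀ R⁻¹ H + B⁻¹)⁻¹, and a candidate network can be scored by, for example, its trace. The sketch below runs a greedy (incremental) selection over hypothetical candidate sites; all matrices are random stand-ins for LPDM output, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)

n_flux, n_candidates, n_obs = 10, 6, 24

# Hypothetical sensitivity (Jacobian) blocks from an LPDM run at each candidate
# site: H_k maps surface fluxes to the CO2 concentrations observed at site k.
H_sites = [rng.normal(0.0, 1.0, size=(n_obs, n_flux)) for _ in range(n_candidates)]
B = np.eye(n_flux) * 1.0                      # prior flux error covariance
R = np.eye(n_obs) * 0.25                      # observation error covariance

def posterior_trace(selected):
    """Trace of the posterior flux covariance for a set of selected stations."""
    precision = np.linalg.inv(B)
    for k in selected:
        precision += H_sites[k].T @ np.linalg.inv(R) @ H_sites[k]
    return np.trace(np.linalg.inv(precision))

# Incremental (greedy) optimisation: add the station that most reduces the cost.
chosen = []
for _ in range(3):
    best = min((k for k in range(n_candidates) if k not in chosen),
               key=lambda k: posterior_trace(chosen + [k]))
    chosen.append(best)
print("greedy network:", chosen, " cost:", round(posterior_trace(chosen), 3))
```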

  6. Global sensitivity analysis of thermomechanical models in numerical simulation of welding

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2008-07-01

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty in input data on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis is a step toward implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation, reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  7. Global sensitivity analysis of thermo-mechanical models in numerical weld modelling

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2007-10-15

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty in input data on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis is a step toward implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation, reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  8. Sensitivity analysis for Saltstone Disposal Unit column degradation analyses

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  9. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

    ... et al., 1999; Rödenbeck et al., 2003; Chevallier et al., 2010). This method relies on precision measurements of atmospheric CO2 to refine the prior estimates of the fluxes. Using this theory, an optimal network of new measurement sites... of the South African network design, these variables are produced by the CSIRO Conformal-Cubic Atmospheric Model (CCAM), a global circulation model. CCAM is a two time-level semi-implicit hydrostatic primitive-equation model developed by McGregor (1987) and later...

  10. Experimental data, thermodynamic modeling and sensitivity analyses for the purification steps of ethyl biodiesel from fodder radish oil production

    Directory of Open Access Journals (Sweden)

    R. C. Basso

    The goals of this work were to present original liquid-liquid equilibrium data for the system containing glycerol + ethanol + ethyl biodiesel from fodder radish oil, including the individual distribution of each ethyl ester; to adjust binary parameters of the NRTL model; to compare NRTL and UNIFAC-Dortmund in the LLE representation of the system containing glycerol; and to simulate different mixer/settler flowsheets for biodiesel purification, evaluating the water/biodiesel ratio used. In the thermodynamic modeling, the deviations between experimental data and calculated values were 0.97% and 3.6%, respectively, using NRTL and UNIFAC-Dortmund. After transesterification with 3 moles of excess ethanol, removal of this component down to a content of 0.08 before an ideal settling step allows a glycerol content lower than 0.02% in the ester-rich phase. Removal of ethanol, glycerol and water from biodiesel can be performed with a countercurrent mixer/settler, using 0.27% of water in relation to the ester amount in the feed stream.
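
    For reference, the binary form of the NRTL activity-coefficient model fitted in work like this can be written in a few lines. The interaction parameters below are invented for illustration only; the paper regresses its own tau values from ternary LLE data.

```python
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients from the standard binary NRTL equations."""
    x2 = 1.0 - x1
    g12, g21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

# Illustrative parameters only, standing in for a fitted ethanol/glycerol pair.
gamma1, gamma2 = nrtl_binary(x1=0.3, tau12=1.8, tau21=0.9)
print(f"gamma1 = {gamma1:.3f}, gamma2 = {gamma2:.3f}")
```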

  11. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    OpenAIRE

    Carvalho, Monica; Lozano, Miguel A.; Ramos, José; Serra, Luis M.

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provid...

  12. A mechanistic model of H₂¹⁸O and C¹⁸OO fluxes between ecosystems and the atmosphere: Model description and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Riley, W.J.; Still, C.J.; Torn, M.S.; Berry, J.A.

    2002-01-01

    The concentration of ¹⁸O in atmospheric CO2 and H2O is a potentially powerful tracer of ecosystem carbon and water fluxes. In this paper we describe the development of an isotope model (ISOLSM) that simulates the ¹⁸O content of canopy water vapor, leaf water, and vertically resolved soil water; leaf photosynthetic ¹⁸OC¹⁶O (hereafter C¹⁸OO) fluxes; CO2 oxygen isotope exchanges with soil and leaf water; soil CO2 and C¹⁸OO diffusive fluxes (including abiotic soil exchange); and ecosystem exchange of H₂¹⁸O and C¹⁸OO with the atmosphere. The isotope model is integrated into the land surface model LSM, but coupling with other models should be straightforward. We describe ISOLSM and apply it to evaluate (a) simplified methods of predicting the C¹⁸OO soil-surface flux; (b) the impacts on the C¹⁸OO soil-surface flux of the soil-gas diffusion coefficient formulation, soil CO2 source distribution, and rooting distribution; (c) the impacts on the C¹⁸OO fluxes of carbonic anhydrase (CA) activity in soil and leaves; and (d) the sensitivity of model predictions to the δ¹⁸O value of atmospheric water vapor and CO2. Previously published simplified models are unable to capture the seasonal and diurnal variations in the C¹⁸OO soil-surface fluxes simulated by ISOLSM. Differences in the assumed soil CO2 production and rooting depth profiles, carbonic anhydrase activity in soil and leaves, and the δ¹⁸O value of atmospheric water vapor have substantial impacts on the ecosystem CO2 flux isotopic composition. We conclude that accurate prediction of C¹⁸OO ecosystem fluxes requires careful representation of H₂¹⁸O and C¹⁸OO exchanges and transport in soils and plants.

  13. Singular vector decomposition for sensitivity analyses of tropospheric chemical scenarios

    Science.gov (United States)

    Goris, N.; Elbern, H.

    2011-06-01

    Observations of the chemical state of the atmosphere typically provide only sparse snapshots of the state of the system due to their insufficient temporal and spatial density. Therefore the measurement configurations need to be optimised to obtain the best possible state estimate. One possibility for optimising the state estimate is observation targeting of sensitive system states, to identify measurement configurations of best value for forecast improvements. In recent years, numerical weather prediction adapted singular vector analysis with respect to initial values as a novel method to identify sensitive states. In the present work, this technique is transferred from meteorological to chemical forecasting. Besides initial values, emissions are investigated as controlling variables. More precisely, uncertainties in the amplitude of the diurnal profile of emissions are analysed, yielding emission factors as target variables. Singular vector analysis is extended to allow for projected target variables not only at final time but also at initial time. Further, special operators are introduced which consider the combined influence of groups of chemical species. As a preparation for targeted observation calculations, the concept of adaptive observations is studied with a chemistry box model. For a set of six different scenarios, the VOC versus NOx limitation of ozone formation is investigated. Results reveal that the singular vectors are strongly dependent on the start time and length of the simulation. As expected, singular vectors with initial values as target variables tend to be more sensitive to initial values, while emission factors as target variables are more sensitive to simulation length. Further, the particular importance of chemical compounds differs strongly between absolute and relative error growth.
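
    The singular-vector idea can be sketched numerically: build the tangent-linear propagator of a box model by finite differences and take its SVD; the leading right singular vector is the initial perturbation with maximal error growth over the forecast window. The two-species "chemistry" below is a hypothetical stand-in for a full tropospheric mechanism.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 2-species box model; the real study uses a full chemical mechanism and
# also treats emission factors as additional control variables.
def rhs(t, c):
    o3, nox = c
    return [0.5 * nox - 0.1 * o3, -0.3 * nox + 0.05 * o3]

def propagate(c0, t0=0.0, t1=5.0):
    return solve_ivp(rhs, (t0, t1), c0, rtol=1e-8).y[:, -1]

c0 = np.array([40.0, 10.0])
eps = 1e-4
base = propagate(c0)

# Finite-difference tangent-linear propagator M: dc(t1) = M @ dc(t0).
M = np.column_stack([(propagate(c0 + eps * e) - base) / eps for e in np.eye(2)])

# Leading singular vector = initial perturbation with maximal growth.
U, s, Vt = np.linalg.svd(M)
print("growth factors:", s.round(3))
print("most sensitive initial perturbation:", Vt[0].round(3))
```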

  14. Singular vector decomposition for sensitivity analyses of tropospheric chemical scenarios

    Directory of Open Access Journals (Sweden)

    N. Goris

    2011-06-01

    Observations of the chemical state of the atmosphere typically provide only sparse snapshots of the state of the system due to their insufficient temporal and spatial density. Therefore the measurement configurations need to be optimised to obtain the best possible state estimate. One possibility for optimising the state estimate is observation targeting of sensitive system states, to identify measurement configurations of best value for forecast improvements. In recent years, numerical weather prediction adapted singular vector analysis with respect to initial values as a novel method to identify sensitive states. In the present work, this technique is transferred from meteorological to chemical forecasting. Besides initial values, emissions are investigated as controlling variables. More precisely, uncertainties in the amplitude of the diurnal profile of emissions are analysed, yielding emission factors as target variables. Singular vector analysis is extended to allow for projected target variables not only at final time but also at initial time. Further, special operators are introduced which consider the combined influence of groups of chemical species.

    As a preparation for targeted observation calculations, the concept of adaptive observations is studied with a chemistry box model. For a set of six different scenarios, the VOC versus NOx limitation of ozone formation is investigated. Results reveal that the singular vectors are strongly dependent on the start time and length of the simulation. As expected, singular vectors with initial values as target variables tend to be more sensitive to initial values, while emission factors as target variables are more sensitive to simulation length. Further, the particular importance of chemical compounds differs strongly between absolute and relative error growth.

  15. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...... allows for easy development of analyses for the abstracted systems. We briefly present one application of our approach, namely the analysis of systems for potential insider threats....

  16. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    Directory of Open Access Journals (Sweden)

    Monica Carvalho

    2013-01-01

    Full Text Available This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.

  17. Synthesis of trigeneration systems: sensitivity analyses and resilience.

    Science.gov (United States)

    Carvalho, Monica; Lozano, Miguel A; Ramos, José; Serra, Luis M

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.
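
    The synthesis step described above is a cost-minimizing integer program. The sketch below replaces the paper's integer linear programming model with a toy exhaustive search over equipment counts, just to make the structure of the objective concrete; all demands, capacities and prices are invented:

      import itertools

      # Toy synthesis problem: choose integer numbers of cogeneration
      # engines and boilers to cover the heat demand at minimum annual
      # cost = amortized investment + fuel + purchased electricity.
      DEMAND_HEAT, DEMAND_POWER = 800.0, 500.0          # kW, one period
      ENGINE = dict(heat=300.0, power=250.0, invest=20000.0, fuel=15000.0)
      BOILER = dict(heat=400.0, power=0.0,   invest=5000.0,  fuel=9000.0)
      GRID_PRICE = 40.0                                 # cost per kW bought

      def annual_cost(n_eng, n_boil):
          heat = n_eng * ENGINE["heat"] + n_boil * BOILER["heat"]
          power = n_eng * ENGINE["power"]
          if heat < DEMAND_HEAT:                # infeasible configuration
              return None
          bought = max(0.0, DEMAND_POWER - power)
          invest = n_eng * ENGINE["invest"] + n_boil * BOILER["invest"]
          fuel = n_eng * ENGINE["fuel"] + n_boil * BOILER["fuel"]
          return invest + fuel + GRID_PRICE * bought

      best = min(((annual_cost(e, b), e, b)
                  for e, b in itertools.product(range(4), range(4))
                  if annual_cost(e, b) is not None),
                 key=lambda t: t[0])
      print("min annual cost %.0f with %d engines, %d boilers" % best)

    Rerunning such a search with perturbed prices or demands reproduces, in miniature, the kind of sensitivity and resilience testing the record describes.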

  18. Sensitivity of surface meteorological analyses to observation networks

    Science.gov (United States)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
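
    The scheme condenses to the standard variational update x_a = x_b + K(y - Hx_b) with gain K = BH^T(HBH^T + R)^-1, and the magnitude of the columns of K ranks the influence of each observation, which is the adjoint-based sensitivity idea in the record. A self-contained sketch in which the grid, error statistics and observation values are all invented:

      import numpy as np

      # Minimal 1-D surface analysis: n grid points, p observations,
      # Gaussian background-error correlations (illustrative numbers only).
      n, p = 50, 5
      grid = np.linspace(0.0, 100.0, n)
      xb = np.full(n, 288.0)                       # background temperature (K)
      obs_loc = np.array([5, 15, 25, 35, 45])      # grid indices of the obs
      y = xb[obs_loc] + np.array([1.0, -0.5, 0.8, 0.2, -1.2])

      H = np.zeros((p, n)); H[np.arange(p), obs_loc] = 1.0
      Lc = 10.0                                    # correlation length
      B = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / Lc) ** 2)
      R = 0.5 ** 2 * np.eye(p)

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # analysis gain
      xa = xb + K @ (y - H @ xb)

      # Column sums of |K| rank each observation's influence on the
      # analysis, analogous to the record's sensitivity/impact percentiles.
      impact = np.abs(K).sum(axis=0)
      print("analysis at obs points:", np.round(xa[obs_loc], 2))
      print("influence ranking (most to least):", np.argsort(-impact))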

  19. Sensitivity Analyses for Robust Causal Inference from Mendelian Randomization Analyses with Multiple Genetic Variants

    Science.gov (United States)

    Bowden, Jack; Fall, Tove; Ingelsson, Erik; Thompson, Simon G.

    2017-01-01

    Mendelian randomization investigations are becoming more powerful and simpler to perform, due to the increasing size and coverage of genome-wide association studies and the increasing availability of summarized data on genetic associations with risk factors and disease outcomes. However, when using multiple genetic variants from different gene regions in a Mendelian randomization analysis, it is highly implausible that all the genetic variants satisfy the instrumental variable assumptions. This means that a simple instrumental variable analysis alone should not be relied on to give a causal conclusion. In this article, we discuss a range of sensitivity analyses that will either support or question the validity of causal inference from a Mendelian randomization analysis with multiple genetic variants. We focus on sensitivity analyses of greatest practical relevance for ensuring robust causal inferences, and those that can be undertaken using summarized data. Aside from cases in which the justification of the instrumental variable assumptions is supported by strong biological understanding, a Mendelian randomization analysis in which no assessment of the robustness of the findings to violations of the instrumental variable assumptions has been made should be viewed as speculative and incomplete. In particular, Mendelian randomization investigations with large numbers of genetic variants without such sensitivity analyses should be treated with skepticism. PMID:27749700
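
    One sensitivity analysis that needs only the summarized data mentioned above is leave-one-out re-estimation around the inverse-variance weighted (IVW) estimate: a single invalid instrument reveals itself as a large shift when excluded. A sketch with invented per-variant summary statistics (not data from the article):

      import numpy as np

      # Per-variant associations with the risk factor (bx) and the
      # outcome (by, with standard error se_by); numbers are invented.
      bx = np.array([0.12, 0.08, 0.15, 0.10, 0.05])
      by = np.array([0.024, 0.018, 0.060, 0.019, 0.011])
      se_by = np.array([0.01, 0.01, 0.02, 0.01, 0.01])

      ratio = by / bx                      # per-variant causal estimates
      w = (bx / se_by) ** 2                # first-order IVW weights

      ivw = np.sum(w * ratio) / np.sum(w)
      print("IVW estimate: %.3f" % ivw)

      # Leave-one-out sensitivity analysis.
      for i in range(len(bx)):
          keep = np.arange(len(bx)) != i
          loo = np.sum(w[keep] * ratio[keep]) / np.sum(w[keep])
          print("without variant %d: %.3f" % (i, loo))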

  20. Non-independence and sensitivity analyses in ecological and evolutionary meta-analyses.

    Science.gov (United States)

    Noble, Daniel W A; Lagisz, Malgorzata; O'Dea, Rose E; Nakagawa, Shinichi

    2017-01-30

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to non-independence. Non-independence can affect two major interrelated components of a meta-analysis: 1) the calculation of effect size statistics and 2) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to non-independence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with non-independent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g., inclusion of different quality data, choice of effect size) and statistical assumptions (e.g., assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of non-independence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g., impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of non-independence. To encourage better practice for dealing with non-independence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring non-independence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with non-independent study designs, and for analyzing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of non-independence in meta-analyses, leading to greater transparency and more robust conclusions.
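
    A concrete illustration of the non-independence point: pooling several effect sizes from the same study as if they were independent understates the standard error. The sketch below contrasts the naive pooled estimate with study-level aggregation, one of the pragmatic solutions the record mentions; the effect sizes and the assumed within-study correlation r are invented:

      import numpy as np

      study = np.array([1, 1, 1, 2, 2, 3])                # study membership
      yi = np.array([0.40, 0.35, 0.45, 0.10, 0.15, 0.20]) # effect sizes
      vi = np.full(6, 0.02)                               # sampling variances

      def pooled(y, v):
          w = 1.0 / v
          return np.sum(w * y) / np.sum(w), np.sqrt(1.0 / np.sum(w))

      # (a) Ignoring non-independence: every effect is its own "study".
      print("naive     : %.3f (SE %.3f)" % pooled(yi, vi))

      # (b) Sensitivity analysis: aggregate to one effect per study, using
      # the variance of a mean of correlated effects,
      # v * (1 + (k - 1) * r) / k, under an assumed correlation r.
      r = 0.5
      ids = np.unique(study)
      y_agg = np.array([yi[study == s].mean() for s in ids])
      v_agg = np.array([vi[study == s].mean()
                        * (1 + ((study == s).sum() - 1) * r)
                        / (study == s).sum() for s in ids])
      print("aggregated: %.3f (SE %.3f)" % pooled(y_agg, v_agg))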

  1. Probabilistic and Nonprobabilistic Sensitivity Analyses of Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Sheng-En Fang

    2014-01-01

    Full Text Available Parameter sensitivity analyses have been widely applied to industrial problems for evaluating parameter significance, effects on responses, uncertainty influence, and so forth. In the interest of simple implementation and computational efficiency, this study has developed two sensitivity analysis methods corresponding to the situations with or without sufficient probability information. The probabilistic method is established with the aid of the stochastic response surface, and the mathematical derivation proves that the coefficients of the first-order terms embody the parameter main effects on the response. Simultaneously, a nonprobabilistic method based on interval analysis is put forward for the circumstance in which the parameter probability distributions are unknown. The two methods have been verified against a numerical beam example, with their accuracy compared to that of a traditional variance-based method. The analysis results have demonstrated the reliability and accuracy of the developed methods, and their suitability for different situations has also been discussed.
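
    A rough sketch of the two branches; the beam-like response function and all numbers are invented stand-ins for the paper's example. The probabilistic branch fits a first-order response surface whose linear coefficients act as main-effect measures; the nonprobabilistic branch evaluates the response over interval vertices when only parameter bounds are known:

      import numpy as np

      def model(p):
          E, b, h = p                      # modulus, width, height
          return 1.0e4 / (E * b * h ** 3)  # hypothetical deflection-like response

      rng = np.random.default_rng(0)
      mean = np.array([210e3, 0.1, 0.2])

      # Probabilistic: least-squares fit of a first-order response surface.
      samples = mean * (1 + 0.05 * rng.standard_normal((200, 3)))
      X = np.column_stack([np.ones(len(samples)), samples])
      coef, *_ = np.linalg.lstsq(X, np.apply_along_axis(model, 1, samples),
                                 rcond=None)
      print("first-order main effects:", coef[1:])

      # Nonprobabilistic: interval (vertex) analysis from bounds alone;
      # valid here because the response is monotonic in each parameter.
      lo, hi = mean * 0.95, mean * 1.05
      corners = np.array(np.meshgrid(*zip(lo, hi))).T.reshape(-1, 3)
      resp = np.apply_along_axis(model, 1, corners)
      print("response interval: [%.4g, %.4g]" % (resp.min(), resp.max()))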

  2. Model Independent Direct Detection Analyses

    CERN Document Server

    Fitzpatrick, A Liam; Katz, Emanuel; Lubbers, Nicholas; Xu, Yiming

    2012-01-01

    Following the construction of the general effective theory for dark matter direct detection in 1203.3542, we perform an analysis of the experimental constraints on the full parameter space of elastically scattering dark matter. We review the prescription for calculating event rates in the general effective theory and discuss the sensitivity of various experiments to additional nuclear responses beyond the spin-independent (SI) and spin-dependent (SD) couplings: an angular-momentum-dependent (LD) and spin-and-angular-momentum-dependent (LSD) response, as well as a distinction between transverse and longitudinal spin-dependent responses. We consider the effect of interference between different operators and in particular look at directions in parameter space where such cancellations lead to holes in the sensitivity of individual experiments. We explore the complementarity of different experiments by looking at the improvement of bounds when experiments are combined. Finally, our scan through parameter space sho...

  3. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...... of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....

  4. Sensitivity of Holocene atmospheric CO2 and the modern carbon budget to early human land use: analyses with a process-based model

    Directory of Open Access Journals (Sweden)

    F. Joos

    2011-01-01

    Full Text Available A Dynamic Global Vegetation model coupled to a simplified Earth system model is used to simulate the impact of anthropogenic land cover changes (ALCC) on Holocene atmospheric CO2 and the contemporary carbon cycle. The model results suggest that early agricultural activities cannot explain the mid to late Holocene CO2 rise of 20 ppm measured on ice cores and that proposed upward revisions of Holocene ALCC imply a smaller contemporary terrestrial carbon sink. A set of illustrative scenarios is applied to test the robustness of these conclusions and to address the large discrepancies between published ALCC reconstructions. Simulated changes in atmospheric CO2 due to ALCC are less than 1 ppm before 1000 AD and 30 ppm at 2004 AD when the HYDE 3.1 ALCC reconstruction is prescribed for the past 12 000 years. Cumulative emissions of 69 GtC at 1850 and 233 GtC at 2004 AD are comparable to earlier estimates. CO2 changes due to ALCC exceed the simulated natural interannual variability only after 1000 AD. To consider evidence that land area used per person was higher before than during early industrialisation, agricultural areas from HYDE 3.1 were increased by a factor of two prior to 1700 AD (scenario H2). For the H2 scenario, the contemporary terrestrial carbon sink required to close the atmospheric CO2 budget is reduced by 0.5 GtC yr−1. Simulated CO2 changes remain small even in scenarios where average land use per person is increased beyond the range of published estimates. Even extreme assumptions for preindustrial land conversion and high per-capita land use do not result in simulated CO2 emissions that are sufficient to explain the magnitude and the timing of the late Holocene CO2 increase.

  5. On conditions and parameters important to model sensitivity for unsaturated flow through layered, fractured tuff; Results of analyses for HYDROCOIN [Hydrologic Code Intercomparison Project] Level 3 Case 2: Yucca Mountain Project

    Energy Technology Data Exchange (ETDEWEB)

    Prindle, R.W.; Hopkins, P.L.

    1990-10-01

    The Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive-waste repositories. This report describes the results of a study for HYDROCOIN of model sensitivity for isothermal, unsaturated flow through layered, fractured tuffs. We investigated both the types of flow behavior that dominate the performance measures and the conditions and model parameters that control flow behavior. We also examined the effect of different conceptual models and modeling approaches on our understanding of system behavior. The analyses included single- and multiple-parameter variations about base cases in one-dimensional steady and transient flow and in two-dimensional steady flow. The flow behavior is complex even for the highly simplified and constrained system modeled here. The response of the performance measures is both nonlinear and nonmonotonic. System behavior is dominated by abrupt transitions from matrix to fracture flow and by lateral diversion of flow. The observed behaviors are strongly influenced by the imposed boundary conditions and model constraints. Applied flux plays a critical role in determining the flow type but interacts strongly with the composite-conductivity curves of individual hydrologic units and with the stratigraphy. One-dimensional modeling yields conservative estimates of distributions of groundwater travel time only under very limited conditions. This study demonstrates that it is wrong to equate the shortest possible water-travel path with the fastest path from the repository to the water table. 20 refs., 234 figs., 10 tabs.

  6. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...

  7. Peer review of HEDR uncertainty and sensitivity analyses plan

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F.O.

    1993-06-01

    This report consists of a detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington to evaluate your draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters through the use of alternative time histories and spatial patterns deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.

  8. Uncertainty and Sensitivity Analyses Plan. Draft for Peer Review: Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  9. Sensitivity analyses of biodiesel thermo-physical properties under diesel engine conditions

    DEFF Research Database (Denmark)

    Cheng, Xinwei; Ng, Hoon Kiat; Gan, Suyin

    2016-01-01

    This reported work investigates the sensitivities of spray and soot developments to the change of thermo-physical properties for coconut and soybean methyl esters, using two-dimensional computational fluid dynamics fuel spray modelling. The choice of test fuels made was due to their contrasting...... saturation-unsaturation compositions. The sensitivity analyses for non-reacting and reacting sprays were carried out against a total of 12 thermo-physical properties, at an ambient temperature of 900 K and density of 22.8 kg/m3. For the sensitivity analyses, all the thermo-physical properties were set...... as the baseline case and each property was individually replaced by that of diesel. The significance of individual thermo-physical property was determined based on the deviations found in predictions such as liquid penetration, ignition delay period and peak soot concentration when compared to those of baseline...
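
    The replacement scheme is easy to state in code. In the sketch below the CFD spray simulation is reduced to a placeholder function and all property values are invented; only the one-at-a-time substitution logic reflects the record:

      BASELINE = {"density": 870.0, "viscosity": 4.5, "surface_tension": 0.028,
                  "vapour_pressure": 0.4, "heat_capacity": 2.1}
      DIESEL   = {"density": 830.0, "viscosity": 2.7, "surface_tension": 0.025,
                  "vapour_pressure": 1.1, "heat_capacity": 1.9}

      def spray_model(props):
          # Placeholder for the spray simulation: returns a mock liquid
          # penetration depending (arbitrarily) on a few properties.
          return (10.0 * props["density"] / 870.0
                  + 2.0 * props["viscosity"]
                  - 5.0 * props["vapour_pressure"])

      base = spray_model(BASELINE)
      for name in BASELINE:
          perturbed = dict(BASELINE, **{name: DIESEL[name]})  # swap one property
          dev = spray_model(perturbed) - base
          print("%-16s deviation in penetration: %+.3f" % (name, dev))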

  10. Challenges and Opportunities in Analysing Students Modelling

    Science.gov (United States)

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-01-01

    Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…

  11. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    Energy Technology Data Exchange (ETDEWEB)

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and the results obtained is provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  12. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  13. Sensitivity Analyses for Cross-Coupled Parameters in Automotive Powertrain Optimization

    Directory of Open Access Journals (Sweden)

    Pongpun Othaganont

    2014-06-01

    Full Text Available When vehicle manufacturers are developing new hybrid and electric vehicles, modeling and simulation are frequently used to predict the performance of the new vehicles from an early stage in the product lifecycle. Typically, models are used to predict the range, performance and energy consumption of their future planned production vehicle; they also allow the designer to optimize a vehicle's configuration. Another use for the models is in performing sensitivity analysis, which helps us understand which parameters have the most influence on model predictions and real-world behaviors. There are various techniques for sensitivity analysis, some are numerical, but the greatest insights are obtained analytically with sensitivity defined in terms of partial derivatives. Existing methods in the literature give us a useful, quantified measure of parameter sensitivity, a first-order effect, but they do not consider second-order effects. Second-order effects could give us additional insights: for example, a first-order analysis might tell us that a limiting factor is the efficiency of the vehicle's prime-mover; our new second-order analysis will tell us how quickly the efficiency of the powertrain will become of greater significance. In this paper, we develop a method based on formal optimization mathematics for rapid second-order sensitivity analyses and illustrate these through a case study on a C-segment electric vehicle.
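
    Where analytic partial derivatives are unavailable, the same first- and second-order information can be approximated with central finite differences: the gradient gives the first-order sensitivities, and the Hessian shows how quickly those sensitivities change, which is the second-order insight the record argues for. A sketch with an invented electric-vehicle range model (not the paper's):

      import numpy as np

      def range_km(p):
          cap, eta, mass = p               # battery kWh, efficiency, mass kg
          return 1000.0 * cap * eta / (0.05 * mass + 20.0)

      p0 = np.array([50.0, 0.9, 1500.0])
      h = p0 * 1e-4                        # relative step sizes

      def fd_grad_hess(f, p, h):
          n = len(p)
          g, H = np.zeros(n), np.zeros((n, n))
          for i in range(n):
              ei = np.zeros(n); ei[i] = h[i]
              g[i] = (f(p + ei) - f(p - ei)) / (2 * h[i])
              for j in range(n):
                  ej = np.zeros(n); ej[j] = h[j]
                  H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                             - f(p - ei + ej) + f(p - ei - ej)) / (4 * h[i] * h[j])
          return g, H

      g, H = fd_grad_hess(range_km, p0, h)
      print("first-order sensitivities :", g)
      print("second-order terms:\n", H)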

  14. Sensitivity analyses of cables to suspen-dome structural system

    Institute of Scientific and Technical Information of China (English)

    高博青; 翁恩豪

    2004-01-01

    The construction of the cables is a key step for erecting suspen-dome structures. In practical engineering, it is difficult to ensure that the designed pre-stresses of cables have been exactly introduced into the structures in the site; so it is necessary to evaluate the influence of the variation of the pre-stresses on the structural behavior. In the present work, an orthogonal design method was employed to investigate the pre-stressed cables' sensitivity to the suspen-dome system. The investigation was concentrated on a Kiewitt suspen-dome. Parametric studies were carried out to study the sensitivity of the structure's static behavior, dynamic behavior, and buckling loads when the pre-stresses in the cables varied. The investigation indicated that suspen-dome structures are sensitive to the pre-stresses in all cables; and that the sensitivity depended on the location of the cables and the kind of structural behavior. Useful suggestions are given at the end of the paper.

  15. Sensitivity analyses of cables to suspen-dome structural system

    Institute of Scientific and Technical Information of China (English)

    高博青; 翁恩豪

    2004-01-01

    The construction of the cables is a key step for erecting suspen-dome structures. In practical engineering, it is difficult to ensure that the designed pre-stresses of cables have been exactly introduced into the structures in the site; so it is necessary to evaluate the influence of the variation of the pre-stresses on the structural behavior. In the present work, an orthogonal design method was employed to investigate the pre-stressed cables' sensitivity to the suspen-dome system. The investigation was concentrated on a Kiewitt suspen-dome. Parametric studies were carried out to study the sensitivity of the structure's static behavior, dynamic behavior, and buckling loads when the pre-stresses in the cables varied. The investigation indicated that suspen-dome structures are sensitive to the pre-stresses in all cables; and that the sensitivity depended on the location of the cables and the kind of structural behavior. Useful suggestions are given at the end of the paper.

  16. Structural Glycomic Analyses at High Sensitivity: A Decade of Progress

    Science.gov (United States)

    Alley, William R.; Novotny, Milos V.

    2013-06-01

    The field of glycomics has recently advanced in response to the urgent need for structural characterization and quantification of complex carbohydrates in biologically and medically important applications. The recent success of analytical glycobiology at high sensitivity reflects numerous advances in biomolecular mass spectrometry and its instrumentation, capillary and microchip separation techniques, and microchemical manipulations of carbohydrate reactivity. The multimethodological approach appears to be necessary to gain an in-depth understanding of very complex glycomes in different biological systems.

  17. On accuracy problems for semi-analytical sensitivity analyses

    DEFF Research Database (Denmark)

    Pedersen, P.; Cheng, G.; Rasmussen, John

    1989-01-01

    The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried ou...... pseudo loads in order to obtain general load equilibrium with rigid body motions. Such a method would be readily applicable for any element type, whether analytical expressions for the element stiffnesses are available or not. This topic is postponed for a future study....

  18. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex......, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate......System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside...

  19. Sensitivity Assessment of Ozone Models

    Energy Technology Data Exchange (ETDEWEB)

    Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.

    2000-01-24

    The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
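
    The expansion behind a FEOM is the hierarchical decomposition f(x) ~ f0 + sum_i f_i(x_i) + ..., truncated at low order. A crude first-order sketch (the test function and the binning estimator are invented; real constructions are more careful) that turns a sampled model into a fast table look-up surrogate:

      import numpy as np

      def model(x):
          # Stand-in for a chemistry model output over three inputs in [0, 1].
          return np.sin(2 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

      rng = np.random.default_rng(1)
      X = rng.random((20000, 3))
      y = model(X)
      f0 = y.mean()                        # zeroth-order term

      # First-order component functions f_i(x_i) = E[f | x_i] - f0,
      # estimated by binning the Monte Carlo sample on each input.
      bins = np.linspace(0, 1, 21)
      fi = []
      for i in range(3):
          idx = np.digitize(X[:, i], bins) - 1
          fi.append(np.array([y[idx == b].mean() for b in range(20)]) - f0)

      def feom(xnew):
          out = f0                         # table look-up replaces the model
          for i in range(3):
              out += fi[i][min(int(xnew[i] * 20), 19)]
          return out

      print("model:", model(np.array([[0.3, 0.7, 0.5]]))[0])
      print("FEOM :", feom([0.3, 0.7, 0.5]))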

  20. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    Model-driven development strives to use formal artifacts during the development process. Formal artifacts enable automatic analyses of some aspects of the system under development. This serves to increase the understanding of the (intended) behavior of the system as well as increasing error...... detection and pushing error detection to earlier stages of development. The complexity of modeling and the size of systems which can be analyzed is severely limited when introducing data variables. The state space grows exponentially in the number of variables and the domain size of the variables...... to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...

  1. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

    with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling...... and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...... on the expected impact. We demonstrate our approach on a home-payment system. The system is specifically designed to help elderly or disabled people, who may have difficulties leaving their home, to pay for some services, e.g., care-taking or rent. The payment is performed using the remote control of a television...

  2. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Science.gov (United States)

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  3. Economic modeling and sensitivity analysis.

    Science.gov (United States)

    Hay, J W

    1998-09-01

    The field of pharmacoeconomics (PE) faces serious concerns of research credibility and bias. The failure of researchers to reproduce similar results in similar settings, the inappropriate use of clinical data in economic models, the lack of transparency, and the inability of readers to make meaningful comparisons across published studies have greatly contributed to skepticism about the validity, reliability, and relevance of these studies to healthcare decision-makers. Using a case study in the field of lipid PE, two suggestions are presented for generally applicable reporting standards that will improve the credibility of PE. Health economists and researchers should be expected to provide either the software used to create their PE model or a multivariate sensitivity analysis of their PE model. Software distribution would allow other users to validate the assumptions and calculations of a particular model and apply it to their own circumstances. Multivariate sensitivity analysis can also be used to present results in a consistent and meaningful way that will facilitate comparisons across the PE literature. Using these methods, broader acceptance and application of PE results by policy-makers would become possible. To reduce the uncertainty about what is being accomplished with PE studies, it is recommended that these guidelines become requirements of both scientific journals and healthcare plan decision-makers. The standardization of economic modeling in this manner will increase the acceptability of pharmacoeconomics as a practical, real-world science.

  4. Sensitivity Study of Poisson's Ratio Used in Soil Structure Interaction (SSI) Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seung-ju [KHNP CRI, Daejeon (Korea, Republic of); You, Dong-Hyun [KEPCO Engineering and Construction, Gimcheon (Korea, Republic of); Jang, Jung-bum; Yun, Kwan-hee [KEPCO Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The preliminary review for Design Certification (DC) of APR1400 was accepted by NRC on March 4, 2015. After the acceptance of the application for standard DC of APR1400, KHNP has responded to the Requests for Additional Information (RAI) raised by NRC to undertake a full design certification review. Design certification is achieved through the NRC's rulemaking process, and is founded on the staff's review of the application, which addresses the various safety issues associated with the proposed nuclear power plant design, independent of a specific site. Among the RAIs issued by the USNRC pertaining to Design Control Document (DCD) Ch. 3.7 'Seismic Design', DCD Tables 3.7A-1 and 3.7A-2 show Poisson's ratios in the S1 and S2 soil profiles used for SSI analysis as great as 0.47 and 0.48, respectively. Based on staff experience, use of Poisson's ratios approaching these values may result in numerical instability of the SSI analysis results. A sensitivity study is performed using the ACS SASSI NI model of APR1400 with S1 and S2 soil profiles to demonstrate that the Poisson's ratio values used in the SSI analyses of the S1 and S2 soil profile cases do not produce numerical instabilities in the SSI analysis results. No abrupt changes or spurious peaks, which tend to indicate the existence of numerical sensitivities in the SASSI solutions, appear in the computed transfer functions of the original SSI analyses that have the maximum dynamic Poisson's ratio values of 0.47 and 0.48, or in the re-computed transfer functions that have the maximum dynamic Poisson's ratio values limited to 0.42 and 0.45.

  5. Sensitivity Study of Stochastic Walking Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2010-01-01

    On flexible structures such as footbridges and long-span floors, walking loads may generate excessive structural vibrations and serviceability problems. The problem is increasing because of the growing tendency to employ long spans in structural design. In many design codes, the vibration...... serviceability limit state is assessed using a walking load model in which the walking parameters are modelled deterministically. However, the walking parameters are stochastic (for instance the weight of the pedestrian is not likely to be the same for every footbridge crossing), and a natural way forward...... investigates whether statistical distributions of bridge response are sensitive to some of the decisions made by the engineer doing the analyses. For the paper a selected part of potential influences are examined and footbridge responses are extracted using Monte-Carlo simulations and focus is on estimating...

  6. Externalizing Behaviour for Analysing System Models

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian

    Systems models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside

  7. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
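
    In the normal-linear special case the BPO reduces to a conjugate update: with prior w ~ N(mu, sigma^2) and a calibrated likelihood x | w ~ N(a + b*w, tau^2) for the deterministic model output x, the posterior of the predictand w given x is again normal. A sketch with invented numbers:

      import numpy as np

      mu, sigma = 10.0, 4.0        # prior (climatological) distribution of w
      a, b, tau = 1.0, 0.9, 2.0    # calibration of model output x against w

      def bpo_posterior(x):
          # Standard normal-normal conjugate update for w | x.
          prec = 1.0 / sigma ** 2 + b ** 2 / tau ** 2
          var = 1.0 / prec
          mean = var * (mu / sigma ** 2 + b * (x - a) / tau ** 2)
          return mean, np.sqrt(var)

      m, s = bpo_posterior(x=12.5)         # today's model output
      print("posterior predictand: %.2f +/- %.2f" % (m, s))
      print("prior uncertainty   : +/- %.2f" % sigma)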

  8. Analysing Social Epidemics by Delayed Stochastic Models

    Directory of Open Access Journals (Sweden)

    Francisco-José Santonja

    2012-01-01

    Full Text Available We investigate the dynamics of a delayed stochastic mathematical model to understand the evolution of alcohol consumption in Spain. A sufficient condition for stability in probability of the equilibrium point of the dynamic model with aftereffect and stochastic perturbations is obtained via Kolmanovskii and Shaikhet's general method of Lyapunov functionals construction. We conclude that alcohol consumption in Spain will be constant (with stability) in time, with around 36.47% of nonconsumers, 62.94% of nonrisk consumers, and 0.59% of risk consumers. This approach allows us to emphasize the possibilities of dynamical models for the study of human behaviour.
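
    The model class can be illustrated with an Euler-Maruyama simulation of a three-compartment system whose transmission term is evaluated at the delayed time t - tau. The rate constants below are invented for illustration, not the fitted values behind the percentages quoted above:

      import numpy as np

      rng = np.random.default_rng(2)
      dt, T, tau = 0.01, 100.0, 1.0              # step, horizon, delay
      steps, lag = int(T / dt), int(tau / dt)

      a1, a2, g1, g2, eps = 0.02, 0.01, 0.03, 0.02, 0.005
      N = np.empty(steps); C = np.empty(steps); R = np.empty(steps)
      N[0], C[0], R[0] = 0.4, 0.5, 0.1           # nonconsumers, nonrisk, risk

      for k in range(steps - 1):
          kd = max(0, k - lag)                   # delayed index, t - tau
          dW = rng.standard_normal() * np.sqrt(dt)
          inflow = a1 * N[kd] * C[kd]            # delayed social transmission
          N[k + 1] = N[k] + (-inflow + g1 * C[k]) * dt + eps * dW
          C[k + 1] = C[k] + (inflow - (a2 + g1) * C[k] + g2 * R[k]) * dt - eps * dW
          R[k + 1] = R[k] + (a2 * C[k] - g2 * R[k]) * dt

      print("long-run shares N/C/R: %.3f %.3f %.3f" % (N[-1], C[-1], R[-1]))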

  9. Modelling, analyses and design of switching converters

    Science.gov (United States)

    Cuk, S. M.; Middlebrook, R. D.

    1978-01-01

    A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
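
    For the ideal buck converter the averaging step is a one-liner: the averaged matrices are the duty-cycle-weighted combinations A = d*A1 + (1-d)*A2 and B = d*B1 + (1-d)*B2 of the two switched topologies. A sketch with illustrative component values:

      import numpy as np

      # States x = [inductor current iL, capacitor voltage vC].
      L, C, R, Vg, d = 100e-6, 100e-6, 5.0, 12.0, 0.5   # H, F, ohm, V, duty

      # Switch on (fraction d of the period): input connected.
      A1 = np.array([[0.0, -1.0 / L], [1.0 / C, -1.0 / (R * C)]])
      B1 = np.array([1.0 / L, 0.0])
      # Switch off (fraction 1-d): freewheeling; same A for the ideal buck.
      A2, B2 = A1, np.array([0.0, 0.0])

      A = d * A1 + (1 - d) * A2          # state-space averaged model
      B = d * B1 + (1 - d) * B2

      x_dc = -np.linalg.solve(A, B * Vg) # steady state from 0 = A x + B Vg
      print("DC operating point: iL = %.3f A, vC = %.3f V" % tuple(x_dc))

    The sketch recovers the textbook buck conversion ratio: vC = d*Vg = 6 V and iL = vC/R = 1.2 A for these values.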

  10. Sensitivity studies for 3-D rod ejection analyses on axial power shape

    Energy Technology Data Exchange (ETDEWEB)

    Park, Min-Ho; Park, Jin-Woo; Park, Guen-Tae; Ryu, Seok-Hee; Um, Kil-Sup; Lee, Jae-Il [KEPCO NF, Daejeon (Korea, Republic of)

    2015-10-15

    The current safety analysis methodology, which uses the point kinetics model combined with numerous conservative assumptions, results in unrealistic predictions of the transient behavior and wastes a large margin in safety analyses, while the safety regulation criteria for the reactivity-initiated accident are becoming stricter. To deal with this, KNF is developing a 3-D rod ejection analysis methodology using the multi-dimensional code coupling system CHASER. The CHASER system couples the three-dimensional core neutron kinetics code ASTRA, the sub-channel analysis code THALES, and the fuel performance analysis code FROST using the message passing interface (MPI). A sensitivity study for 3-D rod ejection analysis on axial power shape (APS) is carried out to survey the tendency of safety parameters with respect to the power distribution and to build up a realistic safety analysis methodology while maintaining conservatism. The 3-D rod ejection analysis methodology currently under development, using the multi-dimensional core transient analysis code system CHASER, was shown to reasonably reflect the conservative assumptions by tuning up kinetic parameters.

  11. Sensitivity, uncertainty analyses and algorithm selection for Sea Ice Thickness retrieval from Radar Altimeter

    CERN Document Server

    Djepa, Vera

    2013-01-01

    For accurate forecasting of climate change, sea ice mass balance, ocean circulation and sea-atmosphere interactions, long-term records of Sea Ice Thickness (SIT) are required. Different approaches have been applied to retrieve SIT, and only satellite altimetry, radar or laser, has been proven to provide hemispheric estimates of the SIT distribution over a sufficient thickness range. To simplify the algorithm for SIT retrieval from RA, a constant ice density has been applied until now, which leads to different results for the derived SIT and SID, depending on the input information for sea ice density and snow depth. The purpose of this paper is to select an algorithm for SID and SIT retrieval from RA, using statistical and sensitivity analyses and independent observations of SID from moored ULS or submarines. The impact of ice density and snow depth on the accuracy of the retrieved SIT has been examined by applying sensitivity analyses, and the propagated uncertainties have been summarised. Accuracy of algorithms for snow dep...
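
    The core retrieval step is the hydrostatic conversion of radar freeboard to thickness, and the sensitivity question is how input uncertainties propagate through it. A sketch with illustrative values; the densities, the uncertainties, and the assumption that the radar return originates at the snow/ice interface are assumptions of this example, not values from the paper:

      import numpy as np

      # Hydrostatic equilibrium: T = (rho_w * F + rho_s * h_s) / (rho_w - rho_i)
      def sit(F, rho_i, h_s, rho_w=1024.0, rho_s=320.0):
          return (rho_w * F + rho_s * h_s) / (rho_w - rho_i)

      F, h_s = 0.15, 0.25                  # radar freeboard, snow depth (m)
      rho_i, s_rho_i = 900.0, 20.0         # ice density and its uncertainty
      s_F, s_hs = 0.03, 0.08               # input uncertainties (illustrative)

      T = sit(F, rho_i, h_s)

      # First-order uncertainty propagation via finite-difference derivatives.
      dT_dF = (sit(F + 1e-4, rho_i, h_s) - T) / 1e-4
      dT_dr = (sit(F, rho_i + 1e-2, h_s) - T) / 1e-2
      dT_dh = (sit(F, rho_i, h_s + 1e-4) - T) / 1e-4
      s_T = np.sqrt((dT_dF * s_F) ** 2 + (dT_dr * s_rho_i) ** 2
                    + (dT_dh * s_hs) ** 2)
      print("SIT = %.2f m +/- %.2f m" % (T, s_T))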

  12. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  13. A position-sensitive time-of-flight analyser for study of molecular photofragmentation

    CERN Document Server

    Rius-I-Riu, J; Karawajczyk, A; Winiarczyk, P

    2002-01-01

    The basic features of a simple radial position-sensitive detector, its design, construction and performance, are described in detail in this paper. The electronics and method used to correlate the position information from the spectrum recorded by the detector are presented. Monte Carlo simulations of the performance of the detector embedded in a time-of-flight analyser show that such an instrument enables kinetic energy and angular distribution measurements and triple coincidence studies of photofragmentation of simple molecules.

  14. Latent sensitization: a model for stress-sensitive chronic pain.

    Science.gov (United States)

    Marvizon, Juan Carlos; Walwyn, Wendy; Minasyan, Ani; Chen, Wenling; Taylor, Bradley K

    2015-04-01

    Latent sensitization is a rodent model of chronic pain that reproduces both its episodic nature and its sensitivity to stress. It is triggered by a wide variety of injuries ranging from injection of inflammatory agents to nerve damage. It follows a characteristic time course in which a hyperalgesic phase is followed by a phase of remission. The hyperalgesic phase lasts between a few days to several months, depending on the triggering injury. Injection of μ-opioid receptor inverse agonists (e.g., naloxone or naltrexone) during the remission phase induces reinstatement of hyperalgesia. This indicates that the remission phase does not represent a return to the normal state, but rather an altered state in which hyperalgesia is masked by constitutive activity of opioid receptors. Importantly, stress also triggers reinstatement. Here we describe in detail procedures for inducing and following latent sensitization in its different phases in rats and mice.

  15. Uncertainty and sensitivity analyses in seismic risk assessments on the example of Cologne, Germany

    Science.gov (United States)

    Tyagunov, S.; Pittore, M.; Wieland, M.; Parolai, S.; Bindi, D.; Fleming, K.; Zschau, J.

    2014-06-01

    Both aleatory and epistemic uncertainties associated with different sources and components of risk (hazard, exposure, vulnerability) are present at each step of seismic risk assessments. All individual sources of uncertainty contribute to the total uncertainty, which might be very high and, within the decision-making context, may therefore lead to either very conservative and expensive decisions or the perception of considerable risk. When anatomizing the structure of the total uncertainty, it is therefore important to propagate the different individual uncertainties through the computational chain and to quantify their contribution to the total value of risk. The present study analyses different uncertainties associated with the hazard, vulnerability and loss components by the use of logic trees. The emphasis is on the analysis of epistemic uncertainties, which represent the reducible part of the total uncertainty, including a sensitivity analysis of the resulting seismic risk assessments with regard to the different uncertainty sources. This investigation, being a part of the EU FP7 project MATRIX (New Multi-Hazard and Multi-Risk Assessment Methods for Europe), is carried out for the example of, and with reference to, the conditions of the city of Cologne, Germany, which is one of the MATRIX test cases. At the same time, this particular study does not aim to revise or refine the hazard and risk level for Cologne; rather, it shows how large the existing uncertainties are and how they can influence seismic risk estimates, especially in less well-studied areas, if hazard and risk models adapted from other regions are used.
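
    Mechanically, a logic tree is an enumeration of branch combinations: weights multiply along each path, and the weighted spread of the resulting loss estimates measures the epistemic uncertainty. A toy sketch with invented branches, weights and scaling factors:

      import itertools

      # (name, weight, loss scaling factor) for each alternative model.
      hazard        = [("low PGA", 0.3, 0.8), ("high PGA", 0.7, 1.2)]
      exposure      = [("census A", 0.5, 1.0), ("census B", 0.5, 1.1)]
      vulnerability = [("curves I", 0.6, 0.9), ("curves II", 0.4, 1.3)]
      BASE_LOSS = 100.0                     # reference loss, arbitrary units

      results = []
      for (hn, hw, hf), (en, ew, ef), (vn, vw, vf) in itertools.product(
              hazard, exposure, vulnerability):
          weight = hw * ew * vw             # branch weights multiply
          loss = BASE_LOSS * hf * ef * vf   # branch loss estimate
          results.append((weight, loss, (hn, en, vn)))

      mean = sum(w * l for w, l, _ in results)
      losses = [l for _, l, _ in results]
      print("weighted mean loss: %.1f (branch range %.1f - %.1f)"
            % (mean, min(losses), max(losses)))
      # Collapsing one tree level at a time and re-computing the range
      # identifies the dominant source of epistemic uncertainty.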

  16. An approach to measure parameter sensitivity in watershed hydrological modelling

    Science.gov (United States)

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the...

  17. Criticality safety and sensitivity analyses of PWR spent nuclear fuel repository facilities

    NARCIS (Netherlands)

    Maucec, M; Glumac, B

    2005-01-01

    Monte Carlo criticality safety and sensitivity calculations of pressurized water reactor (PWR) spent nuclear fuel repository facilities for the Slovenian nuclear power plant Krsko are presented. The MCNP4C code was deployed to model and assess the neutron multiplication parameters of pool-based stor

  18. Criticality safety and sensitivity analyses of PWR spent nuclear fuel repository facilities

    NARCIS (Netherlands)

    Maucec, M; Glumac, B

    2005-01-01

    Monte Carlo criticality safety and sensitivity calculations of pressurized water reactor (PWR) spent nuclear fuel repository facilities for the Slovenian nuclear power plant Krsko are presented. The MCNP4C code was deployed to model and assess the neutron multiplication parameters of pool-based stor

  19. A Modified Sensitive Driving Cellular Automaton Model

    Institute of Scientific and Technical Information of China (English)

    GE Hong-Xia; DAI Shi-Qiang; DONG Li-Yun; LEI Li

    2005-01-01

    A modified cellular automaton model for traffic flow on highways is proposed with a novel concept of a variable security gap. The concept is first introduced into the original Nagel-Schreckenberg model, which is called the non-sensitive driving cellular automaton model. It is then incorporated into a sensitive driving NaSch model, in which the randomization brake is arranged before the deterministic deceleration. A parameter related to the variable security gap is determined through simulation. Comparison of the simulation results indicates that the variable security gap has a different influence on the two models. The fundamental diagram obtained by simulation with the modified sensitive driving NaSch model shows that the maximum flow is in good agreement with the observed data, indicating that the presented model is more reasonable and realistic.

  20. An Illumination Modeling System for Human Factors Analyses

    Science.gov (United States)

    Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)

    2002-01-01

    Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on earth, it is easily taken for granted. However, on orbit, because the sun will rise or set every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into the lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human factors oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major surfaces and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performances are measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting conditions and visibility conditions in space.

  1. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    Full Text Available Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently from ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on different input factors such as topography or different soil types. The analysis shows that model evaluation performed at single locations may not be representative for the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and the ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity in the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time has an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several

  2. Finite element model of needle electrode sensitivity

    Science.gov (United States)

    Høyum, P.; Kalvøy, H.; Martinsen, Ø. G.; Grimnes, S.

    2010-04-01

    We used the Finite Element (FE) Method to estimate the sensitivity of a needle electrode for bioimpedance measurement. This current-conducting needle with an insulated shaft was inserted in a saline solution and the current was measured at the neutral electrode. FE model resistance and reactance were calculated and successfully compared with measurements on a laboratory model. The sensitivity field was described graphically based on these FE simulations.

  3. SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review

    Directory of Open Access Journals (Sweden)

    Andrea Achilli

    2012-01-01

    Full Text Available An Italian MSE R&D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET, for IRIS reactor simulation. IRIS is a modular, medium-size, advanced, integral PWR, developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS, with 1:100 volume scale, full elevation and prototypical thermal-hydraulic conditions. The RELAP5 code was extensively used in support of the design of the facility to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the RELAP5 and GOTHIC coupled codes. The comparison between the IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trend of the main parameters of the plant, such as the containment pressure and the EHRS removed power, to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.

  4. An approach of sensitivity and uncertainty analyses methods installation in a safety calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pepin, G.; Sallaberry, C. [Agence nationale pour la gestion des dechets radioactifs (Andra), DS/CS, 92 - Chatenay-Malabry (France)

    2003-07-01

    Simulating migration in deep geological formations requires solving convection-diffusion equations in porous media, together with the computation of hydrogeologic flow. Different time scales (simulations spanning 1 million years), spatial scales and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through uncertainty and sensitivity analyses. ANDRA (the French national agency for the management of radioactive waste) studies the treatment of input data uncertainties and their propagation through the safety models, in order to quantify the influence of input data uncertainties on the various safety indicators selected. ANDRA's approach initially consists of 2 studies undertaken in parallel: the first is an international review of the choices made by ANDRA's foreign counterparts for their uncertainty and sensitivity analyses; the second reviews the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case that gathers all the specific constraints (physical, numerical and computational) of the problem studied by ANDRA.

  5. Sensitivity of soil moisture analyses to contrasting background and observation error scenarios

    Science.gov (United States)

    Munoz-Sabater, Joaquín; de Rosnay, Patricia; Albergel, Clément; Isaksen, Lars

    2017-04-01

    Soil moisture is a crucial variable for numerical weather prediction. Accurate, global initialization of soil moisture is obtained through data assimilation systems. However, analyses depend largely on how observation and background errors are defined. In this paper a wide range of short experiments with contrasting specifications of the observation error and the soil moisture background error were conducted. As observations, screen-level variables and brightness temperatures from the Soil Moisture and Ocean Salinity (SMOS) mission were used. The region of interest was North America, given the good availability of in-situ observations. The impact of these experiments on soil moisture and the atmospheric layer near the surface was evaluated. The results highlighted the importance, for air temperature and humidity forecasts, of assimilating observations that are sensitive to soil moisture. The benefits for the soil water content were more noticeable when the SMOS observation error was increased and when a soil texture dependency was introduced in the soil moisture background error.
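
    As a minimal illustration of the mechanism being varied in these experiments (a scalar sketch, not the operational assimilation system), an optimal-interpolation update shows how the assumed background error sigma_b and observation error sigma_o control how far the analysis moves toward the observation; all numbers below are invented:

      # Scalar optimal-interpolation / Kalman analysis update (illustrative).
      def analyse(x_b, y_o, sigma_b, sigma_o):
          """Return the analysed value and the gain applied to the innovation."""
          k = sigma_b**2 / (sigma_b**2 + sigma_o**2)  # Kalman gain
          return x_b + k * (y_o - x_b), k

      x_b, y_o = 0.20, 0.26  # background and observed volumetric soil moisture
      for sigma_o in (0.01, 0.04, 0.08):  # contrasting observation errors
          x_a, k = analyse(x_b, y_o, sigma_b=0.02, sigma_o=sigma_o)
          print(f"sigma_o={sigma_o:.2f}  gain={k:.2f}  analysis={x_a:.3f}")

    Inflating the observation error, as in the SMOS experiments above, shrinks the gain and leaves the analysis closer to the model background.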

  6. Modelling longevity bonds: Analysing the Swiss Re Kortis bond

    OpenAIRE

    2015-01-01

    A key contribution to the development of the traded market for longevity risk was the issuance of the Kortis bond, the world's first longevity trend bond, by Swiss Re in 2010. We analyse the design of the Kortis bond, develop suitable mortality models to analyse its payoff and discuss the key risk factors for the bond. We also investigate how the design of the Kortis bond can be adapted and extended to further develop the market for longevity risk.

  7. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routine...
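
    Since the record is truncated, here is a hedged, generic sketch of the idea it names rather than the MOESHA code itself: a bare-bones evolutionary loop (selection and mutation only) calibrating the storage coefficient of a toy linear-reservoir model; every name and value is invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      rain = rng.gamma(2.0, 2.0, size=100)      # synthetic forcing

      def simulate(k):
          """Toy linear reservoir: a fraction k of storage drains each step."""
          q, s = np.empty_like(rain), 0.0
          for i, p in enumerate(rain):
              s += p
              q[i] = k * s
              s -= q[i]
          return q

      obs = simulate(0.3)                       # synthetic truth, k = 0.3

      def rmse(k):
          return np.sqrt(np.mean((simulate(k) - obs) ** 2))

      pop = rng.uniform(0.01, 0.99, size=20)    # initial random population
      for _ in range(40):                       # generations
          fitness = np.array([rmse(k) for k in pop])
          parents = pop[np.argsort(fitness)[:10]]          # selection
          children = rng.choice(parents, size=10) + rng.normal(0.0, 0.02, 10)
          pop = np.concatenate([parents, np.clip(children, 0.01, 0.99)])

      print(pop[np.argmin([rmse(k) for k in pop])])        # converges near 0.3

    In a calibration-plus-uncertainty setting such as the one described above, the spread of the final population, not only its best member, can be read as a rough indication of parameter uncertainty.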

  8. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...

  9. Contributions to sensitivity analysis and generalized discriminant analysis; Contributions a l'analyse de sensibilite et a l'analyse discriminante generalisee

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, J

    2005-12-15

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations in its inputs. Variance-based methods quantify the share of the variance of the model response that is due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices lose their interpretability when inputs are correlated, we propose a multidimensional approach that expresses the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis classifies the individuals of a test sample into groups using information contained in a training sample, when the two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
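
    To make the variance-based indices concrete, here is a minimal pick-freeze estimator of first-order Sobol indices for the standard Ishigami test function, assuming independent inputs (the test function and sample sizes are illustrative choices, not taken from the thesis, which is precisely concerned with what breaks when inputs are correlated):

      import numpy as np

      def ishigami(x):
          return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                  + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

      rng = np.random.default_rng(1)
      n, d = 100_000, 3
      a = rng.uniform(-np.pi, np.pi, size=(n, d))
      b = rng.uniform(-np.pi, np.pi, size=(n, d))
      ya, yb = ishigami(a), ishigami(b)
      var = ya.var()
      for i in range(d):
          ab = b.copy()
          ab[:, i] = a[:, i]                 # "freeze" input i at the A values
          s1 = np.mean(ya * (ishigami(ab) - yb)) / var
          print(f"S_{i + 1} = {s1:.2f}")     # roughly 0.31, 0.44, 0.00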

  10. Accelerator mass spectrometry analyses of environmental radionuclides: sensitivity, precision and standardisation

    Science.gov (United States)

    Hotchkis; Fink; Tuniz; Vogt

    2000-07-01

    Accelerator Mass Spectrometry (AMS) is the analytical technique of choice for the detection of long-lived radionuclides which cannot be practically analysed with decay counting or conventional mass spectrometry. AMS allows an isotopic sensitivity as low as one part in 10^15 for 14C (5.73 ka), 10Be (1.6 Ma), 26Al (720 ka), 36Cl (301 ka), 41Ca (104 ka), 129I (16 Ma) and other long-lived radionuclides occurring in nature at ultra-trace levels. These radionuclides can be used as tracers and chronometers in many disciplines: geology, archaeology, astrophysics, biomedicine and materials science. Low-level decay counting techniques have been developed in the last 40-50 years to detect the concentration of cosmogenic, radiogenic and anthropogenic radionuclides in a variety of specimens. Radioactivity measurements for long-lived radionuclides are made difficult by low counting rates and in some cases the need for complicated radiochemistry procedures and efficient detectors of soft beta-particles and low-energy x-rays. The sensitivity of AMS is unaffected by the half-life of the isotope being measured, since the atoms themselves, not the radiations that result from their decay, are counted directly. Hence, the efficiency of AMS in the detection of long-lived radionuclides is 10^6-10^9 times higher than decay counting, and the size of the sample required for analysis is reduced accordingly. For example, 14C is being analysed in samples containing as little as 20 µg of carbon. There is also a world-wide effort to use AMS for the analysis of rare nuclides of heavy mass, such as actinides, with important applications in safeguards and nuclear waste disposal. Finally, AMS microprobes are being developed for the in-situ analysis of stable isotopes in geological samples, semiconductors and other materials. Unfortunately, the use of AMS is limited by the expensive accelerator technology required, but there are several attempts to develop compact AMS spectrometers at low (≤0.5 MV

  11. Sensitivity analysis of periodic matrix population models.

    Science.gov (United States)

    Caswell, Hal; Shyu, Esther

    2012-12-01

    Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments.
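
    A minimal numerical sketch of the kind of quantity the paper derives exactly with matrix calculus: the sensitivity of the growth rate (the dominant eigenvalue of the period product) to one entry of one seasonal matrix, obtained here by a finite difference on hypothetical matrices:

      import numpy as np

      B = [np.array([[0.0, 2.0], [0.5, 0.6]]),   # season 1 (hypothetical)
           np.array([[0.1, 1.5], [0.4, 0.8]]),   # season 2
           np.array([[0.2, 1.0], [0.6, 0.7]])]   # season 3

      def growth(mats):
          """Dominant eigenvalue of the product over one full cycle."""
          period_product = mats[2] @ mats[1] @ mats[0]
          return max(abs(np.linalg.eigvals(period_product)))

      lam, h = growth(B), 1e-6
      Bp = [m.copy() for m in B]
      Bp[1][1, 0] += h                  # perturb a survival rate in season 2
      print((growth(Bp) - lam) / h)     # d(lambda) / d(B2[1, 0])

    The matrix-calculus results in the paper deliver these derivatives exactly and for vector- or matrix-valued outputs, which a finite difference only approximates one entry at a time.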

  12. Uncertainty and Sensitivity in Surface Dynamics Modeling

    Science.gov (United States)

    Kettner, Albert J.; Syvitski, James P. M.

    2016-05-01

    The papers in this special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' stem from submissions following the 2014 annual meeting of the Community Surface Dynamics Modeling System, or CSDMS. CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigates the Earth's surface (the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere) by promoting, developing, supporting and disseminating integrated open source software modules. By organizing more than 1500 researchers, CSDMS has the privilege of identifying community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of the Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.

  13. Sensitivity analysis of the age-structured malaria transmission model

    Science.gov (United States)

    Addawe, Joel M.; Lope, Jose Ernie C.

    2012-09-01

    We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two: preschool humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission. We compute the sensitivity indices of the reproductive number and the endemic equilibrium point with respect to the two sets of baseline parameters. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito. For the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportions of infectious pre-school humans and of the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shorten the time to acquire immunity can be successful in preventing the spread of malaria.
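
    A hedged illustration of such sensitivity indices, using the classical Ross-Macdonald R0 rather than the paper's age-structured model, with placeholder parameter values: the normalized forward sensitivity index (dR0/dp)(p/R0) computed by finite differences.

      import math

      def r0(p):
          """Ross-Macdonald reproductive number (not the paper's model)."""
          return (p["m"] * p["a"] ** 2 * p["b"] * p["c"]
                  * math.exp(-p["mu"] * p["tau"]) / (p["r"] * p["mu"]))

      base = dict(m=10, a=0.3, b=0.5, c=0.5, mu=0.1, tau=10, r=0.01)
      for name in base:
          p = dict(base)
          h = 1e-6 * base[name]
          p[name] += h
          index = (r0(p) - r0(base)) / h * base[name] / r0(base)
          print(f"{name}: {index:+.2f}")   # the biting rate a scores +2.00

    An index of +2 for the biting rate means a 1% increase in biting produces roughly a 2% increase in R0, which is why bite-targeting interventions rank so highly in analyses of this kind.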

  14. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure.

  15. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  16. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physically meaningful disturbances. These disturbances propagate as wave fronts along bicharacteristics.

  17. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    Directory of Open Access Journals (Sweden)

    Gerhard Strydom

    2013-01-01

    Full Text Available The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady-state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The resulting six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) data sets, and the use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
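
    For context, a minimal sketch of the Wilks-style order-statistics argument behind such 95%/95% tolerance statements, plus a Latin hypercube design for eight inputs (using scipy's qmc module; the sample size shown is the textbook one-sided first-order value, whereas the study above ran 800 calculations):

      import math
      from scipy.stats import qmc

      beta, gamma = 0.95, 0.95                 # coverage and confidence
      n = math.ceil(math.log(1.0 - gamma) / math.log(beta))
      print(n)                                 # 59: runs needed so the sample
                                               # maximum bounds the 95th
                                               # percentile with 95% confidence

      sampler = qmc.LatinHypercube(d=8, seed=42)   # 8 uncertain inputs
      design = sampler.random(n)               # points in the unit cube, to be
                                               # mapped through each input PDF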

  18. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key to the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.
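
    A minimal sketch of the first step of the method, classical performance measures evaluated over a moving window so that model performance itself becomes a time series (synthetic data; the SOM and cluster-analysis steps of the paper are not reproduced here):

      import numpy as np

      def moving_performance(obs, sim, window=30):
          """RMSE and Nash-Sutcliffe efficiency for each window position."""
          rmse, nse = [], []
          for i in range(len(obs) - window):
              o, s = obs[i:i + window], sim[i:i + window]
              rmse.append(np.sqrt(np.mean((s - o) ** 2)))
              nse.append(1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2))
          return np.array(rmse), np.array(nse)

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 20.0, 500)
      obs = np.sin(t) + 2.0                                      # synthetic flow
      sim = obs + rng.normal(0.0, 0.1, t.size) + 0.3 * (t > 15)  # late bias
      rmse, nse = moving_performance(obs, sim)   # inputs for the SOM step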

  19. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key to the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.

  1. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were sufficiently consistent to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject compared to between-subject variability was low, and there was substantial strength of agreement between repeated induction sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small area' (1st quartile [<25%]) and 'large area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.

  2. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  3. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios which can be grouped into four thematic categories were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% of the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first order estimate at present, and points towards specific processes in need of focused future work.

  4. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...... symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment....

  5. Analyses of single nucleotide polymorphisms in selected nutrient-sensitive genes in weight-regain prevention

    DEFF Research Database (Denmark)

    Larsen, Lesli Hingstrup; Ängquist, Lars Henrik; Vimaleswaran, Karani S

    2012-01-01

    Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes....

  6. Sensitivity of Footbridge Response to Load Modeling

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2012-01-01

    The paper considers a stochastic approach to modeling the actions of walking and has focus on the vibration serviceability limit state of footbridges. The use of a stochastic approach is novel but useful as it is more advanced than the quite simplistic deterministic load models seen in many design...... matter to foresee their impact. The paper contributes by examining how some of these decisions influence the outcome of serviceability evaluations. The sensitivity study is made focusing on vertical footbridge response to single person loading....
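
    A minimal sketch of a stochastic single-pedestrian vertical load of the general kind studied here (the paper's actual model and parameter distributions are not given in this record, so the weight, pacing rate and dynamic load factor below are assumptions):

      import numpy as np

      rng = np.random.default_rng(7)
      G = 750.0                            # pedestrian weight, N (assumed)
      f = rng.normal(1.8, 0.1)             # pacing rate, Hz (assumed spread)
      alpha = rng.normal(0.4, 0.05)        # first-harmonic dynamic load factor
      t = np.linspace(0.0, 10.0, 2000)
      F = G * (1.0 + alpha * np.sin(2.0 * np.pi * f * t))  # force history, N

    Repeating the draw many times and running each force history through a bridge model yields a response distribution, which is the basis of the stochastic serviceability evaluation the paper examines.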

  7. Sensitivity analyses of OH missing sinks over Tokyo metropolitan area in the summer of 2007

    Directory of Open Access Journals (Sweden)

    S. Chatani

    2009-09-01

    Full Text Available OH reactivity is one of the key indicators reflecting the impacts of photochemical reactions in the atmosphere. An observation campaign was conducted in the summer of 2007 in the heart of the Tokyo metropolitan area to measure OH reactivity. The total OH reactivity measured directly by the laser-induced pump and probe technique was higher than the sum of the OH reactivity calculated from the concentrations and reaction rate coefficients of the individual species measured in this campaign. A three-dimensional air quality simulation was then conducted to evaluate the simulation performance for the total OH reactivity, including "missing sinks", which correspond to the difference between the measured and calculated total OH reactivity. The simulated OH reactivity is significantly underestimated because the OH reactivity of volatile organic compounds (VOCs) and of missing sinks is underestimated. When scaling factors are applied to the input emissions and boundary concentrations, good agreement is observed between the simulated and measured concentrations of VOCs. However, the simulated OH reactivity of the missing sinks is still underestimated. Therefore, the impacts of unidentified missing sinks are investigated through sensitivity analyses. In the cases where unknown secondary products are assumed to account for the unidentified missing sinks, they tend to suppress the formation of secondary aerosol components and enhance the formation of ozone. In the cases where unidentified primary emitted species are assumed to account for the unidentified missing sinks, a variety of impacts may be observed; such species could serve as precursors of secondary organic aerosols (SOA) and significantly increase SOA formation. Missing sinks are considered to play an important role in the atmosphere over the Tokyo metropolitan area.
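
    The calculated total OH reactivity referred to above is simply a rate-weighted sum over the measured species, R_OH = sum_i k_i [X_i]; a toy version with illustrative (not campaign) rate constants and concentrations:

      # OH rate constants, cm3 molecule-1 s-1 (approximate literature values)
      k = {"CO": 2.4e-13, "CH4": 6.4e-15, "NO2": 1.1e-11}
      # number densities, molecule cm-3 (roughly 200 ppb, 1.8 ppm, 20 ppb)
      conc = {"CO": 5.0e12, "CH4": 4.5e13, "NO2": 5.0e11}

      r_calc = sum(k[s] * conc[s] for s in k)          # s-1
      r_measured = 25.0                                # hypothetical total, s-1
      print(f"calculated {r_calc:.1f} s-1, missing {r_measured - r_calc:.1f} s-1")

    The gap between the measured total and this sum is exactly the "missing sink" term that the sensitivity analyses above try to attribute.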

  8. Sensitivity analyses of OH missing sinks over Tokyo metropolitan area in the summer of 2007

    Directory of Open Access Journals (Sweden)

    K. Ishii

    2009-11-01

    Full Text Available OH reactivity is one of the key indicators reflecting the impacts of photochemical reactions in the atmosphere. An observation campaign was conducted in the summer of 2007 in the heart of the Tokyo metropolitan area to measure OH reactivity. The total OH reactivity measured directly by the laser-induced pump and probe technique was higher than the sum of the OH reactivity calculated from the concentrations and reaction rate coefficients of the individual species measured in this campaign. A three-dimensional air quality simulation was then conducted to evaluate the simulation performance for the total OH reactivity, including "missing sinks", which correspond to the difference between the measured and calculated total OH reactivity. The simulated OH reactivity is significantly underestimated because the OH reactivity of volatile organic compounds (VOCs) and of missing sinks is underestimated. When scaling factors are applied to the input emissions and boundary concentrations, good agreement is observed between the simulated and measured concentrations of VOCs. However, the simulated OH reactivity of the missing sinks is still underestimated. Therefore, the impacts of unidentified missing sinks are investigated through sensitivity analyses. In the cases where unknown secondary products are assumed to account for the unidentified missing sinks, they tend to suppress the formation of secondary aerosol components and enhance the formation of ozone. In the cases where unidentified primary emitted species are assumed to account for the unidentified missing sinks, a variety of impacts may be observed; such species could serve as precursors of secondary organic aerosols (SOA) and significantly increase SOA formation. Missing sinks are considered to play an important role in the atmosphere over the Tokyo metropolitan area.

  9. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  10. Uncertainty and sensitivity analyses for gas and brine migration at the Waste Isolation Pilot Plant, May 1992

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States); Bean, J.E. [New Mexico Engineering Research Inst., Albuquerque, NM (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States); Garner, J.W.; Vaughn, P. [Applied Physics, Inc., Albuquerque, NM (United States); Schreiber, J.D. [Science Applications International Corp., Albuquerque, NM (United States); Swift, P.N. [Tech Reps, Inc., Albuquerque, NM (United States)

    1993-08-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights into factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
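
    A minimal sketch of the sampling-plus-correlation machinery named above: a Latin hypercube design over four stand-in parameters, a placeholder response in place of the BRAGFLO two-phase flow code, and a Spearman rank correlation per input as a simple proxy for the stepwise regression and partial correlation analyses used:

      import numpy as np
      from scipy.stats import qmc, spearmanr

      names = ["brine_saturation", "corrosion_stoich", "biodeg_stoich", "seal_perm"]
      x = qmc.LatinHypercube(d=4, seed=3).random(200)   # 200 sampled input vectors
      rng = np.random.default_rng(3)

      def toy_model(row):
          # placeholder response standing in for a two-phase flow simulation
          return 2.0 * row[0] + 0.5 * row[1] ** 2 + 0.1 * row[3] + rng.normal(0, 0.05)

      y = np.array([toy_model(row) for row in x])
      for j, name in enumerate(names):
          rho, _ = spearmanr(x[:, j], y)
          print(f"{name:18s} rank correlation {rho:+.2f}")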

  11. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2003-06-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios which can be grouped into four thematic categories were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30–60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to 299 Tg(O3). Thus, there is a spread of ±35% of the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first order estimate at present, and points towards specific processes in need of focused future work.

  12. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  13. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next
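
    For orientation, the bond-based peridynamic equation of motion (in its standard form, not anything specific to this project) replaces the local divergence of stress with an integral of pairwise forces over a neighbourhood, or horizon, H_x:

      \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
        = \int_{H_{\mathbf{x}}} \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;\mathbf{x}'-\mathbf{x}\bigr)\,\mathrm{d}V_{\mathbf{x}'}
        + \mathbf{b}(\mathbf{x},t)

    The nonlocal integral form is what makes the model analogous to molecular dynamics, while shrinking the horizon recovers classical elasticity for smooth deformations, which is the coupling route the project exploits.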

  14. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with the COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the given microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
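
    The ±20% one-at-a-time scheme described above, sketched generically; the COMSOL drying model itself is not reproduced, so a toy response function stands in and the parameter names and values are illustrative:

      import numpy as np

      base = {"T_amb": 300.0, "D_gas": 2.6e-6, "k_evap": 1000.0, "h_heat": 20.0}

      def response(p):
          # placeholder for the drying simulation output (e.g. moisture content)
          return (p["T_amb"] ** 0.5 * p["D_gas"] ** 0.3
                  * np.log(p["k_evap"]) / p["h_heat"] ** 0.05)

      y0 = response(base)
      for name in base:
          for factor in (0.8, 1.2):                    # the ±20% perturbation
              p = dict(base)
              p[name] *= factor
              change = 100.0 * (response(p) - y0) / y0
              print(f"{name:7s} x{factor:.1f} -> {change:+6.2f}% output change")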

  15. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with the COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the given microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  16. Improved environmental multimedia modeling and its sensitivity analysis.

    Science.gov (United States)

    Yuan, Jing; Elektorowicz, Maria; Chen, Zhi

    2011-01-01

    Modeling of multimedia environmental issues is extremely complex due to the intricacy of the systems with the consideration of many factors. In this study, an improved environmental multimedia model is developed, and a number of related test problems are examined and compared with standard numerical and analytical methodologies. The results indicate that the flux output of the new model is lower in the unsaturated and groundwater zones than that of the traditional environmental multimedia model. Furthermore, about 90% of the total benzene flux was distributed to the air zone from the landfill sources and only 10% of the total flux was emitted into the unsaturated and groundwater zones under non-uniform conditions. This paper also includes a model sensitivity analysis used to optimize model parameters such as the Peclet number (Pe). The analysis results show that Pe can be treated as a deterministic input variable for the transport output. The oscillatory behavior is eliminated as Pe decreases. In addition, the numerical methods are more accurate than the analytical methods as Pe increases. In conclusion, the improved environmental multimedia model system and its sensitivity analysis can be used to address the complex fate and transport of pollutants in multimedia environments and thereby help to manage environmental impacts.
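
    The Pe-dependent oscillations noted above are commonly tied to the grid (cell) Peclet number; a hedged rule-of-thumb check, with Pe_cell = v*dx/D kept at or below 2 for an oscillation-free central-difference advection-diffusion solution (values below are illustrative):

      def cell_peclet(v, dx, D):
          """Grid Peclet number for velocity v, cell size dx, dispersion D."""
          return v * dx / D

      v, D = 1.0e-5, 1.0e-6        # seepage velocity (m/s), dispersion (m2/s)
      for dx in (0.5, 0.2, 0.1):   # candidate grid spacings (m)
          pe = cell_peclet(v, dx, D)
          status = "OK" if pe <= 2.0 else "risk of oscillation"
          print(f"dx={dx}: Pe_cell={pe:.1f} -> {status}")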

  17. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year, in models with fewer health states. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and the event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.

  18. Uncertainty and sensitivity analyses in seismic risk assessments on the example of Cologne, Germany

    Directory of Open Access Journals (Sweden)

    S. Tyagunov

    2013-12-01

    Full Text Available Both aleatory and epistemic uncertainties associated with the different sources and components of risk (hazard, exposure, vulnerability) are present at each step of a seismic risk assessment. All individual sources of uncertainty contribute to the total uncertainty, which might be very high and, within the decision-making context, may therefore lead to either very conservative and expensive decisions or the perception of considerable risk. When anatomizing the structure of the total uncertainty, it is therefore important to propagate the different individual uncertainties through the computational chain and to quantify their contribution to the total value of risk. The present study analyzes different uncertainties associated with the hazard, vulnerability and loss components by the use of logic trees. The emphasis is on the analysis of epistemic uncertainties, which represent the reducible part of the total uncertainty, including a sensitivity analysis of the resulting seismic risk assessments with regard to the different uncertainty sources. This investigation, being a part of the EU FP7 project MATRIX (New Multi-Hazard and Multi-Risk Assessment Methods for Europe), is carried out for the example of, and with reference to, the conditions of the city of Cologne, Germany, which is one of the MATRIX test cases. At the same time, this particular study does not aim to revise nor to refine the hazard and risk level for Cologne; it rather shows how large the existing uncertainties are and how they can influence seismic risk estimates, especially in less well-studied areas, if hazard and risk models adapted from other regions are used.

  1. Precipitates/Salts Model Sensitivity Calculation

    Energy Technology Data Exchange (ETDEWEB)

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  2. Sensitivity analysis of fine sediment models using heterogeneous data

    Science.gov (United States)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transportation and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important in drawing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities do affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important in improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the

  3. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field ('model') and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to the results of a benchmark inversion exercise with a known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking slip models with respect to a reference model.

  4. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
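
    A minimal sketch of the propagation scheme described above: each modeling stage carries an empirical residual distribution, and the system-level uncertainty band is built by resampling those residuals through the chain (all distributions and coefficients below are toy stand-ins, not the calibrated models):

      import numpy as np

      rng = np.random.default_rng(11)
      poa_resid = rng.normal(0.0, 20.0, 500)    # stand-ins for the empirical
      eff_resid = rng.normal(0.0, 10.0, 500)    # residual archives (W/m2)
      dc_resid = rng.normal(0.0, 0.01, 500)     # relative DC-model residuals

      def chain(poa):
          """Simplified POA -> effective irradiance -> DC power chain."""
          eff = 0.95 * poa + rng.choice(eff_resid)
          return 0.18 * eff * (1.0 + rng.choice(dc_resid))

      samples = [chain(800.0 + rng.choice(poa_resid)) for _ in range(10_000)]
      lo, hi = np.percentile(samples, [2.5, 97.5])   # empirical uncertainty band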

  5. Pre-waste-emplacement ground-water travel time sensitivity and uncertainty analyses for Yucca Mountain, Nevada; Yucca Mountain Site Characterization Project

    Energy Technology Data Exchange (ETDEWEB)

    Kaplan, P.G.

    1993-01-01

    Yucca Mountain, Nevada is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to identify critical factors in the performance of the site with respect to a criterion expressed in terms of pre-waste-emplacement ground-water travel time. The degree to which the analytical model fails to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements, including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain.

  6. Animal models to study gluten sensitivity.

    Science.gov (United States)

    Marietta, Eric V; Murray, Joseph A

    2012-07-01

    The initial development and maintenance of tolerance to dietary antigens is a complex process that, when prevented or interrupted, can lead to human disease. Understanding the mechanisms by which tolerance to specific dietary antigens is attained and maintained is crucial to our understanding of the pathogenesis of diseases related to intolerance of specific dietary antigens. Two diseases that are the result of intolerance to a dietary antigen are celiac disease (CD) and dermatitis herpetiformis (DH). Both of these diseases are dependent upon the ingestion of gluten (the protein fraction of wheat, rye, and barley) and manifest in the gastrointestinal tract and skin, respectively. These gluten-sensitive diseases are two examples of how devastating abnormal immune responses to a ubiquitous food can be. The well-recognized risk genotype for both is conferred by either of the HLA class II molecules DQ2 or DQ8. However, only a minority of individuals who carry these molecules will develop either disease. Also of interest is that the age at diagnosis can range from infancy to 70-80 years of age. This would indicate that intolerance to gluten may potentially be the result of two different phenomena. The first would be that, for various reasons, tolerance to gluten never developed in certain individuals, but that for other individuals, prior tolerance to gluten was lost at some point after childhood. Of recent interest is the concept of non-celiac gluten sensitivity, which manifests as chronic digestive or neurologic symptoms due to gluten, but through mechanisms that remain to be elucidated. This review will address how animal models of gluten-sensitive disorders have substantially contributed to a better understanding of how gluten intolerance can arise and cause disease.

  7. Analysing regenerative potential in zebrafish models of congenital muscular dystrophy.

    Science.gov (United States)

    Wood, A J; Currie, P D

    2014-11-01

    The congenital muscular dystrophies (CMDs) are a clinically and genetically heterogeneous group of muscle disorders. Clinically, hypotonia is present from birth, with progressive muscle weakness and wasting through development. For the most part, CMDs can mechanistically be attributed to a failure of the basement membrane protein laminin-α2 to bind sufficiently to correctly glycosylated α-dystroglycan. The majority of CMDs therefore arise as the result of either a deficiency of laminin-α2 (MDC1A) or hypoglycosylation of α-dystroglycan (dystroglycanopathy). Here we consider whether, by filling a regenerative medicine niche, the zebrafish model can address the present challenge of delivering novel therapeutic solutions for CMD. In the first instance, the readiness and appropriateness of the zebrafish as a model organism for pioneering regenerative medicine therapies in CMD is analysed, in particular for MDC1A and the dystroglycanopathies. Despite the recent rapid progress made in gene editing technology, these approaches have yet to yield any novel zebrafish models of CMD. Currently, the most genetically relevant zebrafish models in the field of CMD have all been created by N-ethyl-N-nitrosourea (ENU) mutagenesis. Once genetically relevant models have been established, the zebrafish offers several important facets for investigating the mechanistic cause of CMD, including rapid ex vivo development, optical transparency up to the larval stages of development and relative ease in creating transgenic reporter lines. Together, these tools are well suited for use in live-imaging studies such as in vivo modelling of muscle fibre detachment. Secondly, the zebrafish's contribution to progress in effective treatment of CMD was analysed. Two approaches were identified in which zebrafish could potentially contribute to effective therapies. The first hinges on the augmentation of functional redundancy within the system, such as upregulating alternative laminin chains in the candyfloss

  8. [Approach to depressogenic genes from genetic analyses of animal models].

    Science.gov (United States)

    Yoshikawa, Takeo

    2004-01-01

    Human depression or mood disorder is defined as a complex disease, making positional cloning of susceptibility genes a formidable task. We have undertaken genetic analyses of three different animal models for depression, comparing our results with advanced database resources. We first performed quantitative trait loci (QTL) analysis on two mouse models of "despair", namely, the forced swim test (FST) and tail suspension test (TST), and detected multiple chromosomal loci that control immobility time in these tests. Since one QTL detected on mouse chromosome 11 harbors the GABA A receptor subunit genes, we tested these genes for association in human mood disorder patients. We obtained significant associations of the alpha 1 and alpha 6 subunit genes with the disease, particularly in females. This result was striking, because we had previously detected an epistatic interaction between mouse chromosomes 11 and X that regulates immobility time in these animals. Next, we performed genome-wide expression analyses using a rat model of depression, learned helplessness (LH). We found that in the frontal cortex of LH rats, a disease-implicated region, the LIM kinase 1 gene (Limk 1) showed the greatest alteration, in this case down-regulation. By combining data from the QTL analysis of FST/TST and DNA microarray analysis of mouse frontal cortex, we identified adenylyl cyclase-associated CAP protein 1 (Cap 1) as another candidate gene for depression susceptibility. Both Limk 1 and Cap 1 are key players in the modulation of actin G-F conversion. In summary, our current study using animal models suggests disturbances of GABAergic neurotransmission and actin turnover as potential pathophysiologies for mood disorder.

  9. Structure-activity models for contact sensitization.

    Science.gov (United States)

    Fedorowicz, Adam; Singh, Harshinder; Soderholm, Sidney; Demchuk, Eugene

    2005-06-01

    Allergic contact dermatitis (ACD) is a widespread cause of workers' disabilities. Although some substances found in the workplace are rigorously tested, the potential of the vast majority of chemicals to cause skin sensitization remains unknown. At the same time, exhaustive testing of all chemicals in workplaces is costly and raises ethical concerns. New approaches to developing information for risk assessment based on computational (quantitative) structure-activity relationship [(Q)SAR] methods may complement and reduce the need for animal testing. Virtually any number of existing, de novo, and even preconceived compounds can be screened in silico at a fraction of the cost of animal testing. This work investigates the utility of ACD (Q)SAR modeling from the occupational health perspective using two leading software products, DEREK for Windows and TOPKAT, and an original method based on logistic regression methodology. The rate of correct classification of (Q)SAR predictions for guinea pig data reaches 73.3, 82.9, and 87.6% for TOPKAT, DEREK for Windows, and the logistic regression model, respectively. The correct classification rate using LLNA data equals 73.0 and 83.2% for DEREK for Windows and the logistic regression model, respectively.
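
    A minimal sketch of the logistic-regression approach, with synthetic data standing in for real molecular descriptors and animal-test labels (the descriptors and their weights are invented for illustration):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      # Hypothetical descriptor matrix (e.g. logP, electrophilicity, molecular weight)
      X = rng.normal(size=(200, 3))
      # Synthetic sensitiser/non-sensitiser labels with a known dependence on X
      y = (X @ np.array([1.5, -0.8, 0.4]) + rng.normal(0, 0.5, 200) > 0).astype(int)

      clf = LogisticRegression()
      acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
      print(f"correct classification: {100 * acc.mean():.1f}%")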

  10. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-02-01

    Before operational use or for decision making, models must be validated, and the degree of trust in model outputs should be quantified. Often, model validation is performed at single locations due to the lack of spatially-distributed data. Since the analysis of parametric model uncertainties can be performed independently of observations, it is a suitable method to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainty of a physically-based mountain permafrost model are quantified within an artificial topography consisting of different elevations and exposures combined with six ground types characterized by their hydraulic properties. The analyses performed for all combinations of topographic factors and ground types allowed us to quantify the variability of model sensitivity and uncertainty within mountain regions. We found that modeled snow duration considerably influences the mean annual ground temperature (MAGT). The melt-out day of snow (MD) is determined by the processes governing snow accumulation and melting. Parameters such as the temperature and precipitation lapse rates and the snow correction factor therefore have a great impact on modeled MAGT. Ground albedo changes MAGT by 0.5 to 4°C, depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes, since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to shorter snow cover. Snow albedo and other parameters determining the amount of reflected solar radiation are important, changing MAGT at different depths by more than 1°C. Parameters influencing the turbulent fluxes, such as the roughness length or the dew temperature, are more sensitive at low-elevation sites due to higher air temperatures and decreased solar radiation. Modeling the individual terms of the energy

  11. Sensitivity Analyses of Site Selection for a Concrete Batch Plant at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.P.

    2001-07-10

    A site selection study was conducted to evaluate locations for an onsite concrete batch plant to support the construction of the proposed surplus plutonium disposition facilities at the Savannah River Site. Presented in this report is a sensitivity analysis that demonstrates the robustness of the site evaluations.

  12. Magnetic fabric analyses in analogue models of clays

    Science.gov (United States)

    García-Lasanta, Cristina; Román-Berdiel, Teresa; Izquierdo-Llavall, Esther; Casas-Sainz, Antonio

    2017-04-01

    Anisotropy of magnetic susceptibility (AMS) studies in sedimentary rocks subjected to deformation indicate that magnetic fabric orientation can be conditioned by multiple factors (sedimentary conditions, magnetic mineralogy, successive tectonic events, etc.), all of which complicate the interpretation of the AMS as a marker of deformation conditions. Analogue modelling makes it possible to isolate the variables acting in a geological process and to determine which factors influence the process, and to what extent. This study presents magnetic fabric analyses applied to several analogue models developed with common commercial red clays. This material resembles natural clay materials and, despite a greater degree of impurities and heterogeneity, has been proven to record a robust magnetic signal carried by a mixture of para- and ferromagnetic minerals. The magnetic behavior of the modeled clay has been characterized by temperature-dependent magnetic susceptibility curves (from 40 to 700°C). The measurements were performed combining a KLY-3S Kappabridge susceptometer with a CS3 furnace (AGICO Inc., Czech Republic). The results indicate a substantial content of hematite as the ferromagnetic phase, as well as a considerable paramagnetic fraction, probably composed of phyllosilicates. This mineralogy is common in natural materials such as Permo-Triassic red facies, and magnetic fabric analyses of these natural examples have given consistent results in different tectonic contexts. In this study, sedimentary conditions and magnetic mineralogy are kept constant and the influence of the tectonic regime on the magnetic fabrics is analyzed. Our main objective is to reproduce several tectonic contexts (strike-slip and compression) in a sedimentary environment where the material is not yet compacted, in order to determine how tectonic conditions influence the magnetic fabric recorded in each case. By dispersing the clays in water and after allowing their

  13. Sensitivity model study of regional mercury dispersion in the atmosphere

    Science.gov (United States)

    Gencarelli, Christian N.; Bieser, Johannes; Carbone, Francesco; De Simone, Francesco; Hedgecock, Ian M.; Matthias, Volker; Travnikov, Oleg; Yang, Xin; Pirrone, Nicola

    2017-01-01

    Atmospheric deposition is the most important pathway by which Hg reaches marine ecosystems, where it can be methylated and enter the base of the food chain. The deposition, transport and chemical interactions of atmospheric Hg have been simulated over Europe for the year 2013 in the framework of the Global Mercury Observation System (GMOS) project, performing 14 different model sensitivity tests using two high-resolution three-dimensional chemical transport models (CTMs), varying the anthropogenic emission datasets, atmospheric Br input fields, Hg oxidation schemes and modelling domain boundary condition input. Sensitivity simulation results were compared with observations from 28 monitoring sites in Europe to assess model performance and particularly to analyse the influence of anthropogenic emission speciation and the Hg0(g) atmospheric oxidation mechanism. The contributions of anthropogenic Hg emissions, their speciation and their vertical distribution are crucial to the simulated concentration and deposition fields, as is the choice of Hg0(g) oxidation pathway. The areas most sensitive to changes in Hg emission speciation and the emission vertical distribution are those near major sources, but also the Aegean and the Black seas, the English Channel, the Skagerrak Strait and the northern German coast. Considerable influence was also evident over the Mediterranean, the North Sea and the Baltic Sea, and some influence is seen over continental Europe, while the difference is smallest over the north-western part of the modelling domain, which includes the Norwegian Sea and Iceland. The Br oxidation pathway produces more HgII(g) in the lower model levels, but overall wet deposition is lower in comparison to the simulations which employ an O3 / OH oxidation mechanism. The necessity to perform continuous measurements of speciated Hg and to investigate the local impacts of Hg emissions and deposition, as well as interactions dependent on land use and vegetation, forests, peat

  14. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Multi-state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi-state models, by permitting estimation of stage-specific survival and transition rates, can help assess trade-offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade-offs are also important in meta-population analyses where, for example, the pre- and post-breeding rates of transfer among sub-populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al., 2003; Breton et al., in review). Further examples of the use of multi-state models in analysing dispersal and life-history trade-offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall into two main categories: those that address life history questions using stage categories, and a more technical use of multi-state models to address problems arising from the violation of mark-recapture assumptions, leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi-state Mark-Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  15. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants, and given the significant investments made by the participants, there is full commitment to finishing the project.

  16. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants, and given the significant investments made by the participants, there is full commitment to finishing the project.

  17. Incorporating flood event analyses and catchment structures into model development

    Science.gov (United States)

    Oppel, Henning; Schumann, Andreas

    2016-04-01

    The space-time variability in catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterising this variability consists of comparisons between flood events in a catchment and between the flood responses of several sub-basins in such an event. In analytical frameworks, the impact of the space and time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover, the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify runoff generation and formation. Knowing the space-time variability of rainfall and the (spatially averaged) response of a catchment, it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and of the distribution of catchment characteristics, in their spatial variability and interaction with the space-time variability of rainfall, provides additional knowledge about hydrological processes at the basin scale. For this purpose, a new procedure was developed to characterise the spatial heterogeneity of catchment characteristics in their succession along the flow distance (differentiated between river network and hillslopes). It was applied to a study of flood responses in a set of nested catchments in a river basin in eastern Germany. In this study, the largest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure, the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new

  18. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational capabilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In this paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh, is assessed in the main stages of the life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degrading the results.

  19. Analyses of Creepages and Their Sensitivities for a Single Wheelset Moving on a Tangent Track

    Institute of Scientific and Technical Information of China (English)

    Jin Xuesong; Zhang Weihua

    1996-01-01

    Creep forces depend greatly on the creepages in the contact area formed between wheel and rail. The creepages are completely determined by the state of a wheelset moving on a track. In this paper, the contact state of a single rigid wheelset moving on a tangent rigid rail, the creepages and their sensitivities to some parameters of the contact geometry are analysed by semi-analytical and numerical methods, respectively. The results provide important insights for studies of the interactions between wheels and rails at high speed.

  1. Chromatographic air analyser microsystem for the selective and sensitive detection of atmospheric pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Jean-Baptiste; Lahlou, Houda; Mohsen, Yehya; Berger, Franck [Laboratoire de Chimie Physique et Rayonnements, Alain Chambaudet, UMR CEA E4 UFR ST, Universite de Franche Comte, 25000 Besancon (France); Vilanova, Xavier; Correig, Xavier, E-mail: jbsanche@univ-fcomte.fr [Departament d' Enginyeria Electronica, Electrica i Automatica, Universitat Rovira i Virgili, Paisos Catalans 26, 43007, Tarragona (Spain)

    2011-08-17

    The development of industry and automotive traffic produces volatile organic compounds (VOCs) whose toxicity can seriously affect human health and the environment. The level of these contaminants in air must be kept as low as possible. In this context, there is a need for in situ systems that can selectively monitor the concentrations of these compounds. The aim of this study is to demonstrate the efficiency of a system built from a pre-concentrator, a chromatographic micro-column and a tin oxide-based gas sensor for the selective and sensitive detection of atmospheric pollutants. In particular, this study is focused on the selective detection of benzene and 1,3-butadiene.

  2. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not be changed by facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  3. Economic modeling of electricity production from hot dry rock geothermal reservoirs: methodology and analyses. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, R.G.; Morris, G.E.

    1979-09-01

    An analytical methodology is developed for assessing alternative modes of generating electricity from hot dry rock (HDR) geothermal energy sources. The methodology is used in sensitivity analyses to explore relative system economics. The methodology uses a computerized, intertemporal optimization model to determine the profit-maximizing design and management of a unified HDR electric power plant under a given set of geologic, engineering, and financial conditions. By iterating this model on price, a levelized busbar cost of electricity is established. By varying the conditions of development, the sensitivity of both optimal management and busbar cost to these conditions is explored. A plausible set of reference case parameters is established at the outset of the sensitivity analyses. This reference case links a multiple-fracture reservoir system to an organic, binary-fluid conversion cycle. A levelized busbar cost of 43.2 mills/kWh (1978 dollars) was determined for the reference case, which had an assumed geothermal gradient of 40°C/km, a design well-flow rate of 75 kg/s, an effective heat transfer area per pair of wells of 1.7 × 10⁶ m², and a plant design temperature of 160°C. Variations in the presumed geothermal gradient, size of the reservoir, drilling costs, real rates of return, and other system parameters yield minimum busbar costs between -40% and +76% of the reference case busbar cost.
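
    The price iteration can be pictured as a root search: the levelized busbar cost is the price at which the profit-maximizing plant design just breaks even. A minimal sketch, with a toy linear stand-in for the intertemporal optimization model:

      def npv(price_mills_per_kwh):
          # Stand-in for the intertemporal optimization: net present value of the
          # profit-maximizing plant design at a given electricity price (toy model)
          return 2.5 * (price_mills_per_kwh - 43.2)

      def levelized_busbar_cost(lo=0.0, hi=200.0, tol=1e-6):
          # Bisect on price until NPV = 0; that break-even price is the levelized cost
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if npv(mid) < 0.0:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      print(f"busbar cost ~ {levelized_busbar_cost():.1f} mills/kWh")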

  4. Stochastic methods for the quantification of sensitivities and uncertainties in criticality analyses; Stochastische Methoden zur Quantifizierung von Sensitivitaeten und Unsicherheiten in Kritikalitaetsanalysen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Bock, Matthias; Stuke, Maik; Wagner, Markus

    2014-06-15

    This work describes statistical analyses based on Monte Carlo sampling methods for criticality safety analyses. The methods analyse a large number of calculations of a given problem with statistically varied model parameters to determine uncertainties and sensitivities of the computed results. The GRS development SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is a modular, easily extensible abstract interface program, designed to perform such Monte Carlo sampling based uncertainty and sensitivity analyses in the field of criticality safety. It couples different criticality and depletion codes commonly used in nuclear criticality safety assessments to the well-established GRS tool SUSA for sensitivity and uncertainty analyses. For uncertainty analyses of criticality calculations, SUnCISTT couples various SCALE sequences developed at Oak Ridge National Laboratory and the general Monte Carlo N-particle transport code MCNP from Los Alamos National Laboratory to SUSA. The impact of manufacturing tolerances of a fuel assembly configuration on the neutron multiplication factor is shown for the various sequences. Uncertainties in nuclear inventories, dose rates, or decay heat can be investigated via the coupling of the GRS depletion system OREST to SUSA. Some results for a simplified irradiated Pressurized Water Reactor (PWR) UO₂ fuel assembly are shown. SUnCISTT also combines the two aforementioned modules for burnup credit criticality analysis of spent nuclear fuel, to ensure an uncertainty and sensitivity analysis that applies the variations of manufacturing tolerances in the burn-up code and the criticality code simultaneously. Calculations and results for a storage cask loaded with typical irradiated PWR UO₂ fuel are shown, including Monte Carlo sampled axial burn-up profiles. The application of SUnCISTT in the field of code validation, specifically, how it is applied to compare a simulation model to available benchmark
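
    The sampling scheme itself is straightforward to sketch; here a toy response function stands in for the coupled SCALE/MCNP sequences, the tolerance ranges are invented, and a rank correlation serves as a simple sensitivity measure:

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(3)
      n = 500
      # Manufacturing tolerances sampled within (hypothetical) specification limits
      pellet_d = rng.uniform(8.0, 8.2, n)     # pellet diameter, mm
      enrich   = rng.uniform(3.9, 4.1, n)     # enrichment, wt% U-235

      def k_eff(d, e):
          # Stand-in for a criticality sequence (SCALE/MCNP in the real tool)
          return 0.90 + 0.004 * (d - 8.1) + 0.02 * (e - 4.0) + rng.normal(0, 1e-4, d.shape)

      k = k_eff(pellet_d, enrich)
      print(f"k_eff = {k.mean():.5f} +/- {k.std():.5f}")
      for name, p in [("pellet diameter", pellet_d), ("enrichment", enrich)]:
          rho, _ = spearmanr(p, k)
          print(f"rank correlation with {name}: {rho:+.2f}")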

  5. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    Science.gov (United States)

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  6. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  7. Scoping and sensitivity analyses for the Demonstration Tokamak Hybrid Reactor (DTHR)

    Energy Technology Data Exchange (ETDEWEB)

    Sink, D.A.; Gibson, G.

    1979-03-01

    The results of an extensive set of parametric studies are presented which provide analytical data on the effects of various tokamak parameters on the performance and cost of the DTHR (Demonstration Tokamak Hybrid Reactor). The studies were centered on a point design which is described in detail. Variations in the device size, neutron wall loading, and plasma aspect ratio are presented, and the effects on direct hardware costs, fissile fuel production (breeding), fusion power production, electrical power consumption, and thermal power production are shown graphically. The studies considered both ignition and beam-driven operation of the DTHR and yielded results based on two empirical scaling laws presently used in reactor studies. Sensitivity studies were also made for variations in the following key parameters: the plasma elongation, the minor radius, the TF coil peak field, the neutral beam injection power, and the Z_eff of the plasma.

  8. Earth system sensitivity inferred from Pliocene modelling and data

    Science.gov (United States)

    Lunt, D.J.; Haywood, A.M.; Schmidt, G.A.; Salzmann, U.; Valdes, P.J.; Dowsett, H.J.

    2010-01-01

    Quantifying the equilibrium response of global temperatures to an increase in atmospheric carbon dioxide concentrations is one of the cornerstones of climate research. Components of the Earth's climate system that vary over long timescales, such as ice sheets and vegetation, could have an important effect on this temperature sensitivity, but have often been neglected. Here we use a coupled atmosphere-ocean general circulation model to simulate the climate of the mid-Pliocene warm period (about three million years ago), and analyse the forcings and feedbacks that contributed to the relatively warm temperatures. Furthermore, we compare our simulation with proxy records of mid-Pliocene sea surface temperature. Taking these lines of evidence together, we estimate that the response of the Earth system to elevated atmospheric carbon dioxide concentrations is 30-50% greater than the response based on those fast-adjusting components of the climate system that are used traditionally to estimate climate sensitivity. We conclude that targets for the long-term stabilization of atmospheric greenhouse-gas concentrations aimed at preventing a dangerous human interference with the climate system should take into account this higher sensitivity of the Earth system.

  9. A discourse on sensitivity analysis for discretely-modeled structures

    Science.gov (United States)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
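
    The first topic, selection of finite difference step sizes, reflects a classic trade-off: a large step incurs truncation error while a very small step amplifies round-off. A minimal sketch with a stand-in response function:

      import numpy as np

      def response(x):
          # Stand-in structural response, e.g. a displacement as a function of a design variable
          return np.sin(x) / x

      x0 = 1.0
      exact = np.cos(1.0) - np.sin(1.0)   # analytic d/dx of sin(x)/x at x = 1
      for h in [1e-1, 1e-3, 1e-5, 1e-7, 1e-9, 1e-11]:
          fd = (response(x0 + h) - response(x0 - h)) / (2 * h)   # central difference
          print(f"h = {h:.0e}  error = {abs(fd - exact):.2e}")
      # The error first shrinks (truncation) then grows again (round-off) as h decreases.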

  10. High-throughput, Highly Sensitive Analyses of Bacterial Morphogenesis Using Ultra Performance Liquid Chromatography.

    Science.gov (United States)

    Desmarais, Samantha M; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D; de Pedro, Miguel A; Huang, Kerwyn Casey

    2015-12-25

    The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination.
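
    Automated peak identification and quantification of the kind described can be sketched generically (this is not the authors' software; the chromatogram below is synthetic and the thresholds are arbitrary):

      import numpy as np
      from scipy.signal import find_peaks

      t = np.linspace(0, 10, 2000)                        # retention time, min
      signal = (1.0 * np.exp(-(t - 2.0) ** 2 / 0.01) +    # synthetic muropeptide peaks
                0.5 * np.exp(-(t - 5.5) ** 2 / 0.02) +
                np.random.default_rng(6).normal(0, 0.01, t.size))

      peaks, _ = find_peaks(signal, height=0.1, prominence=0.1)
      for p in peaks:
          window = slice(max(p - 50, 0), p + 50)
          area = np.trapz(signal[window], t[window])      # crude peak-area quantification
          print(f"peak at {t[p]:.2f} min, area = {area:.3f}")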

  11. IASI's sensitivity to near-surface carbon monoxide (CO): Theoretical analyses and retrievals on test cases

    Science.gov (United States)

    Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François

    2017-03-01

    Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in case of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a 3-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast on the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.
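
    The information-content quantities referred to here are the standard ones of the optimal estimation formalism (gain matrix, averaging kernel, degrees of freedom for signal); the sketch below uses a random placeholder Jacobian and diagonal covariances rather than real IASI quantities:

      import numpy as np

      rng = np.random.default_rng(4)
      n_levels, n_channels = 20, 40
      # Placeholder Jacobian: sensitivity decaying with altitude index (columns)
      K  = rng.normal(0, 1, (n_channels, n_levels)) * np.exp(-np.arange(n_levels) / 8)
      Sa = np.diag(np.full(n_levels, 0.5))          # prior covariance (hypothetical)
      Se = np.diag(np.full(n_channels, 0.1))        # measurement-noise covariance

      # Standard optimal estimation expressions (Rodgers 2000)
      G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
      A = G @ K                                          # averaging kernel
      print(f"DOFS = trace(A) = {np.trace(A):.2f}")
      print(f"near-surface diagonal element A[0, 0] = {A[0, 0]:.2f}")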

  12. Sensitivity analyses of the theoretical equations used in point velocity probe (PVP) data interpretation

    Science.gov (United States)

    Devlin, J. F.

    2016-09-01

    Point velocity probes (PVPs) are dedicated, relatively low-cost instruments for measuring groundwater speed and direction in non-cohesive, unconsolidated porous media aquifers. They have been used to evaluate groundwater velocity in groundwater treatment zones, glacial outwash aquifers, and within streambanks to assist with the assessment of groundwater-surface water exchanges. Empirical evidence of acceptable levels of uncertainty for these applications has come from both laboratory and field trials. This work extends previous assessments of the method by examining the inherent uncertainties arising from the equations used to interpret PVP datasets. PVPs operate by sensing tracer movement on the probe surface, producing apparent velocities from two detectors. Sensitivity equations were developed for the estimation of groundwater speed, v∞, and flow direction, α, as a function of the apparent velocities of water on the probe surface and the α angle itself. The resulting estimates of measurement uncertainty, which are inherent limitations of the method, apply to idealized, homogeneous porous media, conditions that may be approached on the local scale of a PVP measurement. This work does not address experimental sources of error that may arise from the presence of cohesive sediments that prevent collapse around the probe, the effects of centimeter-scale aquifer heterogeneities, or other complications related to borehole integrity or operator error, which could greatly exceed the inherent sources of error. However, the findings reported here have been shown to be in agreement with the previous empirical work. On this basis, properly installed and functioning PVPs should be expected to produce estimates of groundwater speed with uncertainties less than ± 15%, with the most accurate values of groundwater speed expected when horizontal flow is incident on the probe surface at about 50° from the active injection port. Directions can be measured with uncertainties less than

  13. Feasibility for development of a nuclear reactor pressure vessel flaw distribution: Sensitivity analyses and NDE (nondestructive evaluation) capability

    Energy Technology Data Exchange (ETDEWEB)

    Rosinski, S.T. (Sandia National Labs., Albuquerque, NM (USA)); Kennedy, E.L.; Foulds, J.R. (Failure Analysis Associates, Inc., Menlo Park, CA (USA))

    1990-01-01

    Pressurized water reactor pressure vessels operate under US Nuclear Regulatory Commission (NRC) rules and regulatory guides that are intended to maintain a low probability of vessel failure. The NRC has also addressed neutron embrittlement of pressurized water reactor pressure vessels by imposing regulations on plant operation. Plants failing to meet the operating criteria specified by these rules and regulations are required, among other things, to analytically demonstrate fitness for service in order to continue safe operation. The initial flaw size or distribution of initial vessel flaws is a key input to the required vessel integrity analyses. A fracture mechanics sensitivity study was performed to quantify the effect of the assumed flaw distribution on the predicted vessel performance under a specified pressurized thermal shock transient and to determine the critical crack size. Results of the analysis indicate that vessel performance in terms of the estimated probability of failure is very sensitive to the assumed flaw distribution. 20 refs., 3 figs., 2 tabs.

  14. Sensitivity of resource selection and connectivity models to landscape definition

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2017-01-01

    Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...

  15. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a, and the applicable K_a calibration relationship has been determined for both fully bonded and

  16. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    Science.gov (United States)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a multitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol' indices require large numbers of model evaluations, especially in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has been proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore of model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations, yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage
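
    A simplified radial one-at-a-time variant of the Elementary Effects idea can be sketched as follows (the toy model and unit parameter ranges are invented; the sequential screening method used in the study is more elaborate):

      import numpy as np

      def model(x):
          # Toy land-surface flux: only x[0] and x[2] matter
          return 2.0 * x[0] + 0.1 * x[1] + x[2] ** 2

      def elementary_effects(model, n_params, n_traj=50, delta=0.1, seed=5):
          rng = np.random.default_rng(seed)
          ee = np.zeros((n_traj, n_params))
          for t in range(n_traj):
              x = rng.uniform(0, 1 - delta, n_params)   # random base point in [0, 1)^p
              y0 = model(x)
              for i in range(n_params):
                  xp = x.copy()
                  xp[i] += delta                         # one-at-a-time perturbation
                  ee[t, i] = abs(model(xp) - y0) / delta
          return ee.mean(axis=0)                         # mu* screening measure

      for i, mu in enumerate(elementary_effects(model, 3)):
          print(f"parameter {i}: mu* = {mu:.2f}")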

  17. Multi-Objective Sensitivity Analyses for Power Generation Mix: Malaysia Case Study

    Directory of Open Access Journals (Sweden)

    Siti Mariam Mohd Shokri

    2017-08-01

    This paper presents an optimization framework to determine the long-term optimal generation mix for the Malaysian power sector using the Dynamic Programming (DP) technique. Several new candidate units with pre-defined MW capacities, from coal, natural gas, hydro and renewable energy (RE), were included in the model for generation expansion planning. Four objective cases were considered: (1) economic cost, (2) environmental, (3) reliability, and (4) a multi-objective case combining the three. Results show that Malaysia's optimum generation mix in 2030 is, for (1) the economic case, 48% coal, 41% gas, 3% hydro and 8% RE; for (2) the environmental case, 19% coal, 58% gas, 11% hydro and 12% RE; for (3) the reliability case, 64% coal, 32% gas, 3% hydro and 1% RE; and for (4) the multi-objective case, 49% coal, 41% gas, 7% hydro and 3% RE. The outcome of this paper is an optimum generation mix for Malaysia from 2013 to 2030 that is less expensive, substantially reduces carbon emissions, and is less risky.
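
    The DP recursion over planning periods and installed capacity can be sketched on a toy problem (the costs, demands and the build-just-enough decision rule below are invented; the real model also carries emission and reliability objectives):

      from functools import lru_cache

      demand = [10, 12, 15, 18]                                    # GW required per period (toy)
      techs = {"coal": 1.0, "gas": 1.2, "hydro": 2.0, "re": 1.5}   # cost per GW added (toy)

      @lru_cache(maxsize=None)
      def best(period, capacity):
          # Minimum total cost to serve demand from this period onward (DP value function)
          if period == len(demand):
              return 0.0, ()
          need = max(0, demand[period] - capacity)
          options = []
          for tech, cost in techs.items():
              add = need   # build just enough with a single technology (toy decision)
              future, plan = best(period + 1, capacity + add)
              options.append((cost * add + future, ((tech, add),) + plan))
          return min(options)

      cost, plan = best(0, 8)
      print(f"total cost = {cost:.1f}; plan = {plan}")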

  18. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Université de Provence, UFR des sciences géographiques et de l'aménagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Center for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institut der Universität Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Université Pierre et Marie Curie, Paris (France). Centre National de la Recherche Scientifique, Laboratoire de Météorologie Dynamique, Paris

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model, starting from different initial conditions. The large-scale tropical inter-annual variability is analysed to give a picture of the skill of each model and of combinations of the three models. To analyse the similarity of model responses averaged over the same key regions, several widely-used indices are calculated: the Southern Oscillation Index (SOI), large-scale wind shear indices of the boreal summer monsoon in Asia and West Africa, and rainfall indices for NE Brazil, the Sahel and India. Even for the indices where internal noise is large, some years are consistent amongst all the runs, suggesting inter-annual variability in the strength of SST forcing. Averaging the ensemble mean of the three models (the super-ensemble mean) yields improved skill. When each run is weighted according to its skill, taking three runs from different models instead of three runs of the same model improves the mean skill. There is also some indication that one run of a given model could be better than another, suggesting that persistent anomalies could change its sensitivity to SST. The index approach lacks the flexibility to assess whether a model's response to SST has been geographically displaced. It can focus on the first mode in the global tropics, found through singular value decomposition analysis, which is clearly related to the El Niño/Southern Oscillation (ENSO) in all seasons. The Observed-Model and Model-Model analyses lead to almost the same patterns, suggesting that the dominant pattern of model response is also the most skilful mode. Seasonal modulation of both skill and spatial patterns (both model and observed) clearly exists, with highest skill

  19. Analyses on Four Models and Cases of Enterprise Informatization

    Institute of Scientific and Technical Information of China (English)

    Shi Chunsheng(石春生); Han Xinjuan; Yang Cuilan; Zhao Dongbai

    2003-01-01

    The basic conditions of enterprise informatization in Heilongjiang province are analyzed and four models are designed to drive the informatization of industrial and commercial enterprises. The four models are the Resource Integration Informatization Model, the Flow Management Informatization Model, the Intranet E-commerce Informatization Model and the Network Enterprise Informatization Model. The conditions for using these four models and the problems needing attention are also analyzed.

  20. The Sensitivity of State Differential Game Vessel Traffic Model

    Directory of Open Access Journals (Sweden)

    Lisowski Józef

    2016-04-01

    Full Text Available The paper presents the application of the theory of deterministic sensitivity of control systems to sensitivity analysis of game control systems of moving objects, such as ships, airplanes and cars. The sensitivity of the parametric model of the game ship control process in collision situations is presented. First-order and k-th-order sensitivity functions of the parametric model of the control process are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained by computer simulation in Matlab/Simulink software, are presented. Finally, proposals are given regarding the use of sensitivity analysis for the practical synthesis of a computer-aided system supporting the navigator in potential collision situations.

  1. Comparative analyses of fungicide sensitivity and SSR marker variations indicate a low risk of developing azoxystrobin resistance in Phytophthora infestans.

    Science.gov (United States)

    Qin, Chun-Fang; He, Meng-Han; Chen, Feng-Ping; Zhu, Wen; Yang, Li-Na; Wu, E-Jiao; Guo, Zheng-Liang; Shang, Li-Ping; Zhan, Jiasui

    2016-02-08

    Knowledge of the evolution of fungicide resistance is important in securing sustainable disease management in agricultural systems. In this study, we analyzed and compared the spatial distribution of genetic variation in azoxystrobin sensitivity and SSR markers in 140 Phytophthora infestans isolates sampled from seven geographic locations in China. Sensitivity to azoxystrobin and its genetic variation in the pathogen populations were measured by the relative growth rate (RGR) at four fungicide concentrations and by determination of the effective concentration for 50% inhibition (EC50). We found that all isolates in the current study were sensitive to azoxystrobin and that their EC50 was similar to that detected in a European population about 20 years ago, suggesting that the risk of developing azoxystrobin resistance in P. infestans populations is low. Further analyses indicate that reduced genetic variation and a high fitness cost of resistant mutations are the likely causes of the low evolutionary likelihood of developing azoxystrobin resistance in the pathogen. We also found a negative correlation between azoxystrobin tolerance in P. infestans populations and the mean annual temperature of the collection sites, suggesting that global warming may increase the efficiency of using the fungicide to control late blight.
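
    A minimal sketch of how an EC50 can be estimated from relative growth rates measured at a few concentrations, assuming a two-parameter log-logistic dose-response shape; the concentrations and RGR values below are invented for illustration, not the study's measurements:

```python
"""Hedged sketch of EC50 estimation from relative growth rates, assuming
a two-parameter log-logistic dose-response curve; data are invented."""
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    # relative growth rate falls from 1 toward 0 with concentration
    return 1.0 / (1.0 + (conc / ec50) ** slope)

conc = np.array([0.01, 0.1, 1.0, 10.0])    # fungicide concentrations, hypothetical units
rgr = np.array([0.95, 0.80, 0.45, 0.10])   # measured relative growth rates (made up)

(ec50, slope), _ = curve_fit(log_logistic, conc, rgr, p0=[1.0, 1.0])
print(f"EC50 ~ {ec50:.3f} (slope {slope:.2f})")
```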

  2. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength, and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model that cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability, and the preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information, along with fracture outcome data, will be incorporated into the FRI-to-fracture-probability conversion.
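
    The variance-contribution recipe described above is simple enough to sketch directly: correlate each sampled parameter with the output probabilities, square, normalize, and scale to percent. The parameter names echo the abstract, but the samples and the stand-in output model below are synthetic:

```python
"""Sketch of the sensitivity measure described above: correlate each
sampled model parameter with the fracture-probability outputs, then
express squared correlations as percent contributions to variance.
The Monte Carlo samples are synthetic stand-ins, not BFxRM output."""
import numpy as np

rng = np.random.default_rng(0)
n = 5000
params = {                                   # hypothetical parameter samples
    "spring_constant": rng.normal(1.0, 0.2, n),
    "fri_midpoint":    rng.normal(2.0, 0.5, n),
    "preflight_bmd":   rng.normal(0.9, 0.1, n),
}
# stand-in model: fracture probability as some function of the parameters
p_fx = (0.5 * params["spring_constant"] + 1.2 * params["fri_midpoint"]
        - 0.8 * params["preflight_bmd"] + rng.normal(0, 0.3, n))

r2 = {k: np.corrcoef(v, p_fx)[0, 1] ** 2 for k, v in params.items()}
total = sum(r2.values())
for k, v in r2.items():
    print(f"{k}: {100 * v / total:.1f}% of explained variance")
```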

  3. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.

  4. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. A new direct method for calculating first-order sensitivity coefficients, applying sparse-matrix technology to chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity-coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity-coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity-coefficient equations and their Jacobian matrix is presented. Various FORTRAN subroutines in packages such as SLODE, modified MA28 and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
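
    A minimal sketch of the direct method for first-order sensitivity coefficients, shown for a single first-order reaction rather than an atmospheric mechanism: the model ODE and its auxiliary sensitivity equation are integrated together, here with SciPy standing in for the Gear-type FORTRAN machinery:

```python
"""Minimal sketch of the direct method for first-order sensitivity
coefficients: integrate the model ODE together with its auxiliary
sensitivity equation, here for a single reaction A -> B with rate k."""
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5  # rate constant, hypothetical units of 1/s

def rhs(t, z):
    y, s = z                    # concentration and s = dy/dk
    dydt = -k * y               # model equation
    dsdt = -k * s - y           # sensitivity equation: ds/dt = (df/dy) s + df/dk
    return [dydt, dsdt]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
t = 5.0
y, s = sol.sol(t)
print(f"y({t}) = {y:.4f}, dy/dk = {s:.4f} (exact {-t * np.exp(-k * t):.4f})")
```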

  5. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of the return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed on gender, the time of preparation, the amplitude spectrum area (AMSA) at the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
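
    The analysis pattern reported here (multivariate logistic regression plus an ROC curve for AMSA) can be sketched as follows; the data are simulated stand-ins, so only the structure, not the numbers, mirrors the study:

```python
"""Sketch of the reported analysis pattern: logistic regression for the
ROSC outcome and an ROC curve for AMSA as a predictor. All data below
are simulated; only the structure mirrors the study."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 48
amsa = rng.normal(16, 5, n)                    # hypothetical AMSA values
ph = rng.normal(7.1, 0.15, n)                  # hypothetical pH at start of CPR
prep_time = rng.normal(60, 15, n)              # hypothetical preparation time, min
logit = 0.3 * (amsa - 15.6) + 4.0 * (ph - 7.1) - 0.02 * (prep_time - 60)
rosc = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([amsa, ph, prep_time])
model = LogisticRegression().fit(X, rosc)      # multivariate logistic regression

auc = roc_auc_score(rosc, amsa)                # AMSA alone as predictor
fpr, tpr, thr = roc_curve(rosc, amsa)
cutoff = thr[np.argmax(tpr - fpr)]             # Youden-index optimum cut-off
print(f"AUC for AMSA: {auc:.3f}, optimum cut-off ~ {cutoff:.2f}")
```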

  6. Unmix 6.0 Model for environmental data analyses

    Science.gov (United States)

    Unmix Model is a mathematical receptor model developed by EPA scientists that provides scientific support for the development and review of the air and water quality standards, exposure research, and environmental forensics.

  7. Sensitivity analysis of a road traffic immission model at urban street level. The case of STREET 3.1

    Energy Technology Data Exchange (ETDEWEB)

    Petit, Ch.; Jerphanion, M. de; Brou, M. [Targeting, 78 - Versailles (France)

    2000-07-01

    STREET is an immission model at urban street level. This software has been developed by TÜV Umwelt GmbH of Germany (Baden-Wuerttemberg). It is based on the fluid-mechanics 3-D Eulerian diffusion model MISCAM, which takes into account neither chemical reactions nor thermodynamic effects. The French version, STREET 3.1, integrates IMPACT, the ADEME software designed for vehicle emission calculations. IMPACT combines French car fleet data from 1995 to 2020 (INRETS works) with COPERT II emission factors. The aim of this article is to show how the model reacts to parameter changes such as wind speed and direction, reference year, number of passenger cars, traffic speed and congestion rate. We also compared immissions due to travel by bus or by passenger car for a defined number of travellers. With these different simulations, we illustrate the sensitivity of STREET 3.1. This model appears to enlarge the existing range of tools helping decision makers. It is also of value for communication purposes, enhancing public awareness of the direct link between air pollution and traffic by using nearby examples, and so preparing changes of behaviour towards car use in cities. (authors)

  8. Reduction of Large Detailed Chemical Kinetic Mechanisms for Autoignition Using Joint Analyses of Reaction Rates and Sensitivities

    Energy Technology Data Exchange (ETDEWEB)

    Saylam, A; Ribaucour, M; Pitz, W J; Minetti, R

    2006-11-29

    A new technique for the reduction of detailed autoignition mechanisms, based on two analysis methods, is described. An analysis of reaction rates is coupled to an analysis of reaction sensitivity for the detection of redundant reactions. The thresholds associated with the two analyses have a great influence on the size and efficiency of the reduced mechanism. Rules for selecting the thresholds are defined. The reduction technique has been successfully applied to detailed autoignition mechanisms of two reference hydrocarbons: n-heptane and iso-octane. The efficiency of the technique and the ability of the reduced mechanisms to reproduce the results generated by the full mechanism are discussed. A speedup of calculations by a factor of 5.9 for the n-heptane mechanism and by a factor of 16.7 for the iso-octane mechanism is obtained without losing accuracy in the prediction of autoignition delay times and concentrations of intermediate species.
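
    A toy sketch of the threshold idea: a reaction is retained if either its normalized rate or its normalized sensitivity exceeds its threshold, so the two cut-offs jointly control the reduced mechanism's size. The rates, sensitivities and thresholds below are placeholders, not the n-heptane or iso-octane values:

```python
"""Toy sketch of threshold-based reaction screening: keep a reaction if
its normalized rate OR its normalized sensitivity exceeds a threshold.
All numbers are placeholders, not values from the reduced mechanisms."""
import numpy as np

names = ["R1", "R2", "R3", "R4", "R5"]
rates = np.abs(np.array([1.0, 0.002, 0.3, 1e-5, 0.05]))   # hypothetical net rates
sens = np.abs(np.array([0.8, 0.5, 0.01, 2e-4, 0.02]))     # hypothetical delay sensitivities

EPS_RATE, EPS_SENS = 0.01, 0.05   # thresholds trade mechanism size against accuracy
keep = (rates / rates.max() > EPS_RATE) | (sens / sens.max() > EPS_SENS)
print("retained:", [n for n, k in zip(names, keep) if k])
```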

  9. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame ... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic...

  10. Analyses of Tsunami Events using Simple Propagation Models

    Science.gov (United States)

    Chilvery, Ashwith Kumar; Tan, Arjun; Aggarwal, Mohan

    2012-03-01

    Tsunamis exhibit the characteristics of "canal waves" or "gravity waves", which belong to the class of long ocean waves on shallow water. The memorable tsunami events, including the 2004 Indian Ocean tsunami and the 2011 Pacific Ocean tsunami off the coast of Japan, are analyzed by constructing simple tsunami propagation models, including: (1) a one-dimensional propagation model; (2) a two-dimensional propagation model on a flat surface; (3) a two-dimensional propagation model on a spherical surface; and (4) a finite line-source model on a two-dimensional surface. It is shown that Model 1 explains the basic features of the tsunami, including the propagation speed and its relation to the depth of the ocean, dispersion-less propagation, and the bending of tsunamis around obstacles. Models 2 and 3 explain the observed amplitude variations for long-distance tsunami propagation across the Pacific Ocean, including the effect of the equatorial ocean current on arrival times. Model 3 further explains the enhancement of the amplitude due to the curvature of the Earth past the equatorial distance. Finally, Model 4 explains the devastating effect of the superposition of tsunamis from two subduction events, which struck the Phuket region during the 2004 Indian Ocean tsunami.
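
    The physics behind Model 1 fits in a few lines: long waves on shallow water travel at c = sqrt(g*h), so propagation speed and crossing time follow from depth alone. The depth and path length below are round illustrative numbers:

```python
"""The shallow-water limit behind Model 1: c = sqrt(g*h), so speed and
arrival time follow from depth alone. Values are round illustrations."""
import math

def tsunami_speed(depth_m):
    return math.sqrt(9.81 * depth_m)   # m/s, long-wave (shallow-water) limit

depth = 4000.0                         # m, roughly a mean Pacific depth
dist = 8000e3                          # m, a trans-Pacific path, illustrative
c = tsunami_speed(depth)
print(f"speed ~ {3.6 * c:.0f} km/h, crossing time ~ {dist / c / 3600:.1f} h")
```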

  11. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  12. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The pur

  13. Hyperelastic Modelling and Finite Element Analysing of Rubber Bushing

    Directory of Open Access Journals (Sweden)

    Merve Yavuz ERKEK

    2015-03-01

    Full Text Available The objective of this paper is to obtain stiffness curves of rubber bushings, which are used in the automotive industry, with a hyperelastic finite element model. Hyperelastic material models were obtained from different material tests. Stress and strain values and static stiffness curves were determined. It is shown that the static stiffness curves are nonlinear. The level of stiffness affects vehicle dynamics behaviour.

  14. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    Science.gov (United States)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  15. Size-specific sensitivity: Applying a new structured population model

    Energy Technology Data Exchange (ETDEWEB)

    Easterling, M.R.; Ellner, S.P.; Dixon, P.M.

    2000-03-01

    Matrix population models require the population to be divided into discrete stage classes. In many cases, especially when classes are defined by a continuous variable, such as length or mass, there are no natural breakpoints, and the division is artificial. The authors introduce the integral projection model, which eliminates the need for division into discrete classes, without requiring any additional biological assumptions. Like a traditional matrix model, the integral projection model provides estimates of the asymptotic growth rate, stable size distribution, reproductive values, and sensitivities of the growth rate to changes in vital rates. However, where the matrix model represents the size distributions, reproductive value, and sensitivities as step functions (constant within a stage class), the integral projection model yields smooth curves for each of these as a function of individual size. The authors describe a method for fitting the model to data, and they apply this method to data on an endangered plant species, northern monkshood (Aconitum noveboracense), with individuals classified by stem diameter. The matrix and integral models yield similar estimates of the asymptotic growth rate, but the reproductive values and sensitivities in the matrix model are sensitive to the choice of stage classes. The integral projection model avoids this problem and yields size-specific sensitivities that are not affected by stage duration. These general properties of the integral projection model will make it advantageous for other populations where there is no natural division of individuals into stage classes.
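
    A minimal sketch of an integral projection model: discretize the kernel K(y, x) = s(x)g(y, x) + f(x)b(y) on a size mesh and take the dominant eigenvalue of the resulting matrix as the asymptotic growth rate. The vital-rate functions below are invented, not the fitted monkshood rates:

```python
"""Sketch of an integral projection model: discretize the kernel
K(y, x) = s(x) g(y, x) + f(x) b(y) on a size mesh and take the dominant
eigenvalue as the asymptotic growth rate. Vital rates are invented."""
import numpy as np

m, L, U = 100, 0.0, 10.0                      # mesh size and size range (e.g. stem diameter)
x = L + (np.arange(m) + 0.5) * (U - L) / m    # midpoint mesh
h = (U - L) / m

surv = 1 / (1 + np.exp(-(x - 3.0)))                    # survival s(x), hypothetical
fec = np.where(x > 4.0, 0.5 * (x - 4.0), 0.0)          # fecundity f(x), hypothetical

def growth(y, xs):                                     # growth kernel g(y|x), Gaussian
    mu = 0.9 * xs + 0.8
    return np.exp(-0.5 * ((y - mu) / 0.7) ** 2) / (0.7 * np.sqrt(2 * np.pi))

offspring = np.exp(-0.5 * ((x - 1.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))

Y, X = np.meshgrid(x, x, indexing="ij")                # K[i, j] maps size x_j to y_i
K = h * (growth(Y, X) * surv[None, :] + np.outer(offspring, fec))

lam = np.linalg.eigvals(K).real.max()                  # dominant (Perron) eigenvalue
print(f"asymptotic growth rate lambda ~ {lam:.3f}")
```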

  16. Modelling theoretical uncertainties in phenomenological analyses for particle physics

    CERN Document Server

    Charles, Jérôme; Niess, Valentin; Silva, Luiz Vale

    2016-01-01

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding $p$-values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive $p$-value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavour p...

  17. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  18. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 4: Uncertainty and sensitivity analyses for 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.

  19. Assessment of a geological model by surface wave analyses

    Science.gov (United States)

    Martorana, R.; Capizzi, P.; Avellone, G.; D'Alessandro, A.; Siragusa, R.; Luzio, D.

    2017-02-01

    A set of horizontal-to-vertical spectral ratio (HVSR) and multichannel analysis of surface waves (MASW) measurements, carried out in the Altavilla Milicia (Sicily) area, is analyzed to test a geological model of the area. Statistical techniques have been used at different stages of the data analysis to optimize the reliability of the information extracted from the geophysical measurements. In particular, cluster analysis algorithms have been implemented to select the time windows of the microseismic signal to be used for calculating the H/V spectral ratio and to identify sets of spectral-ratio peaks likely caused by the same underground structures. Using results of reflection seismic lines, typical values of P-wave and S-wave velocity were estimated for each geological formation present in the area. These were used to narrow down the search space of parameters for the HVSR interpretation. MASW profiles have been carried out close to each HVSR measuring point, providing the parameters of the shallower layers for the HVSR models. The MASW inversion has been constrained by extrapolating thicknesses from a known stratigraphic sequence. Preliminary 1D seismic models were obtained by adding deeper layers to the models that resulted from the MASW inversion; these justify the peaks of the HVSR curves due to layers deeper than the MASW investigation depth. Furthermore, much deeper layers were included in the HVSR model, as suggested by the geological setting and stratigraphic sequence. This choice was made considering that these latter layers do not generate other HVSR peaks and do not significantly affect the misfit. The starting models have been used to limit the initial search space for a more accurate interpretation, made considering the noise as a superposition of Rayleigh and Love waves. Results allowed us to recognize four main seismic layers and to associate them with the main stratigraphic successions. The lateral correlation of seismic velocity models, joined with tectonic evidence

  20. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters, such as probable maximum precipitation, that are the cornerstone of large water-management infrastructure design. Here the framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the IC/BC option. Simulations generally benefit from finer resolutions, down to 5 km. At the 15-km level, NCEP2 IC/BC produce better results, while NAM IC/BC perform best at the 5-km level. The model configuration recommended by this study is: NAM or NCEP2 IC/BC (depending on data availability), 15-km or 15-km/5-km nested grids, Morrison microphysics and the Kain-Fritsch cumulus scheme. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands for extreme storm event forecasting and analyses for the design, operations and risk assessment of large water infrastructures.

  1. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  2. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10^-2 to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC); gas infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well-defined and validated mechanisms for the CO/H2/oxidant systems.

  3. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which result in a decreased level of network security and the infiltration of undesirable packets into the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in the stateless mode, without inspection of state: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out: allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of the security policy in an informative graphic form as well as to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the results of rule actions on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
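
    A minimal model of the stateless (condition, action) rule semantics described above, evaluated first-match-first; the addresses and rules are made up, and a real analysis would go on to compute the equivalence regions:

```python
"""Minimal model of a stateless packet filter: an ordered list of
(condition, action) rules evaluated first-match-first. Addresses and
rules are made up; a fuller analysis would compute equivalence regions."""
from ipaddress import ip_address, ip_network

RULES = [  # (src net, dst net, dst port, protocol, action)
    ("10.0.0.0/8", "0.0.0.0/0", 80, "tcp", "allow"),
    ("0.0.0.0/0", "10.0.5.0/24", 22, "tcp", "deny"),
    ("0.0.0.0/0", "0.0.0.0/0", None, None, "deny"),   # default rule
]

def decide(src, dst, port, proto):
    for s_net, d_net, p, pr, action in RULES:
        if (ip_address(src) in ip_network(s_net)
                and ip_address(dst) in ip_network(d_net)
                and (p is None or p == port)
                and (pr is None or pr == proto)):
            return action
    return "deny"

print(decide("10.1.2.3", "93.184.216.34", 80, "tcp"))   # -> allow
```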

  4. Sex and smoking sensitive model of radon induced lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhukovsky, M.; Yarmoshenko, I. [Institute of Industrial Ecology of Ural Branch of Russian Academy of Sciences, Yekaterinburg (Russian Federation)

    2006-07-01

    Radon and radon progeny inhalation exposure is recognized to cause lung cancer. The only strong evidence of the health effects of radon exposure comes from epidemiological studies among underground miners; no single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon-induced lung cancer risk models were developed exclusively by extrapolation of the miners' data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only small improvements in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained after systematic analysis of pooled data from single residential radon studies; two such analyses have recently been published. Available new and previous data from epidemiological studies of workers and of the general population exposed to radon and other sources of ionizing radiation allow filling gaps in knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium and also on data on external radiation exposure in Hiroshima and Nagasaki and on external exposure at the Mayak nuclear facility. For non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source is in very good agreement with the L.S.S. study and the Mayak plutonium workers data. Analysis of the corrected data of the pooled studies showed little influence of sex on the E.R.R. value. The most probable cause of this effect is the change in the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct

  5. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2xCO2 sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
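
    The ZDM referred to here reduces to one linear budget equation, C dT/dt = F - lambda*T, so the equilibrium response is dT = F/lambda and stability requires lambda > 0. A two-line check with the canonical 2xCO2 forcing and an assumed response coefficient:

```python
"""The zero-dimensional model in one equation: C dT/dt = F - lam*T, so
the equilibrium sensitivity is dT = F/lam and stability needs lam > 0.
The response coefficient below is an assumed mid-range value."""
F2x = 3.7   # W/m^2, canonical radiative forcing for doubled CO2
lam = 1.2   # W/m^2/K, assumed global-mean radiative response coefficient

if lam > 0:
    print(f"2xCO2 equilibrium warming ~ {F2x / lam:.2f} K")
else:
    print("lam <= 0: no stable equilibrium in the ZDM")
```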

  6. Analysing the Organizational Culture of Universities: Two Models

    Science.gov (United States)

    Folch, Marina Tomas; Ion, Georgeta

    2009-01-01

    This article presents the findings of two research projects, examining organizational culture by means of two different models of analysis--one at university level and one at department level--which were carried out over the last four years at Catalonian public universities (Spain). Theoretical and methodological approaches for the two…

  7. Enhancing Technology-Mediated Communication: Tools, Analyses, and Predictive Models

    Science.gov (United States)

    2007-09-01

    the home (see, for example, Nagel, Hudson, & Abowd, 2004), in social settings (see Kern, Antifakos, Schiele ...on Computer Supported Cooperative Work (CSCW 2006), pp. 525-528, ACM Press. Kern, N., Antifakos, S., Schiele, B., & Schwaninger, A. (2004). A model

  8. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    Cai-Ping Feng; John Mundy

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions,TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discussed.

  9. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  10. The Anxiety Sensitivity Index--Revised: Confirmatory Factor Analyses, Structural Invariance in Caucasian and African American Samples, and Score Reliability and Validity

    Science.gov (United States)

    Arnau, Randolph C.; Broman-Fulks, Joshua J.; Green, Bradley A.; Berman, Mitchell E.

    2009-01-01

    The most commonly used measure of anxiety sensitivity is the 36-item Anxiety Sensitivity Index--Revised (ASI-R). Exploratory factor analyses have produced several different factors structures for the ASI-R, but an acceptable fit using confirmatory factor analytic approaches has only been found for a 21-item version of the instrument. We evaluated…

  12. A new model for analysing thermal stress in granular composite

    Institute of Scientific and Technical Information of China (English)

    郑茂盛; 金志浩; 浩宏奇

    1995-01-01

    A double-embedding model, in which a reinforcement grain and a hollow matrix ball are embedded in the effective medium of the particulate-reinforced composite, is advanced. With this model, the distributions of thermal stress in the different phases of the composite during cooling are studied. Various expressions for predicting elastic and elastoplastic thermal stresses are derived. It is found that the reinforcement suffers compressive hydrostatic stress, while the hydrostatic stress in the matrix zone is tensile, when the temperature decreases; when the temperature decreases further, a yield area forms in the matrix; when the volume fraction of reinforcement is enlarged, the compressive stress on the grain and the tensile hydrostatic stress in the matrix zone decrease; the initial temperature difference at which the reinforcement-matrix interface yields rises, while that at which the matrix yields overall decreases.

  13. Analysing an Analytical Solution Model for Simultaneous Mobility

    Directory of Open Access Journals (Sweden)

    Md. Ibrahim Chowdhury

    2013-12-01

    Full Text Available Current mobility models for simultaneous mobility are convoluted in their design of simultaneous movement, where mobile nodes (MNs) travel randomly from two adjacent cells at the same time, and are complex in measuring the occurrences of simultaneous handover. The simultaneous mobility problem occurs when two MNs start handover at approximately the same time. As simultaneous mobility differs from other mobility patterns and generally occurs less often in real time, we argue that a simplified simultaneous mobility model can be considered by taking only symmetric positions of MNs with random steps. In addition, we simulated the model using mSCTP and compared the simulation results in different scenarios with customized cell ranges. The analytical results show that with bigger cell sizes, occurrences of simultaneous handover with random steps become less frequent, while for sequential mobility (where the initial positions of the MNs are predetermined) with random steps, simultaneous handover is more frequent.

  14. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-8211-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-8211-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intra-cranial phenomena of different natures, such as haemorrhages, neoplasms, haematomas, etc, and to describe the consequences caused by their volume expansions and the influence they have on the anatomical and neuro-functional structures of the brain.

  15. Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite

    Science.gov (United States)

    Wooden, Diane

    2012-01-01

    Comets are the Solar System's deep freezers of gases, ices, and particulates that were present in the outer protoplanetary disk. Where comet nuclei accreted, it was so cold that CO ice (approximately 50 K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high-temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or were annealed from amorphous (glassy) silicates (greater than 850-1000 K). Given their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold and ice-rich regimes near Neptune. The questions that comets can potentially address are: how fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The infamous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron- to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because the models can only fit the observed features with submicron- to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can easily be computed with mixed-medium theory, wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations. Moreover, modeling a size distribution of discrete crystalline forsterite grains commonly employs the CDE computational method for ellipsoidal platelets (c:a:b=8

  16. Temporal variations analyses and predictive modeling of microbiological seawater quality.

    Science.gov (United States)

    Lušić, Darija Vukić; Kranjčević, Lado; Maćešić, Senka; Lušić, Dražen; Jozić, Slaven; Linšak, Željko; Bilajac, Lovorka; Grbčić, Luka; Bilajac, Neiro

    2017-08-01

    Bathing water quality is a major public health issue, especially for tourism-oriented regions. Currently used methods within the EU allow at least a 2.2-day period for obtaining the analytical results, rendering the information forwarded to the public outdated. The obtained results and beach assessment are influenced by the temporal and spatial characteristics of sample collection and numerous environmental parameters, as well as by differences in official water standards. This paper examines the temporal variation of microbiological parameters during the day, as well as the influence of the sampling hour, on decision processes in the management of the beach. Apart from the fecal indicators stipulated by the EU Bathing Water Directive (E. coli and enterococci), additional fecal (C. perfringens) and non-fecal (S. aureus and P. aeruginosa) parameters were analyzed. Moreover, the effects of applying different evaluation criteria (national, EU and U.S. EPA) to beach ranking were studied, and the most common reasons for exceeding water-quality standards were investigated. In order to upgrade routine monitoring, a predictive statistical model was developed. The highest concentrations of fecal indicators were recorded early in the morning (6 AM) due to the lack of solar radiation during the night period. When compared to enterococci, the E. coli criterion appears to be more stringent for the detection of fecal pollution. In comparison to the EU and U.S. EPA criteria, the Croatian national evaluation criteria provide stricter public health standards. Solar radiation and precipitation were the predominant environmental parameters affecting beach water quality, and these parameters were included in the predictive model setup. Predictive models revealed great potential for the monitoring of recreational water bodies, and with further development can become a useful tool for the improvement of public health protection.
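
    A sketch of the kind of predictive model described, regressing a log-scale faecal-indicator concentration on the two dominant environmental predictors named above (solar radiation and antecedent rainfall); the dataset is simulated, so the coefficients carry no real-world meaning:

```python
"""Sketch of the kind of predictive model described: regress a log-scale
faecal-indicator concentration on solar radiation and antecedent
rainfall. The dataset is simulated; coefficients carry no real meaning."""
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 300
solar = rng.uniform(0, 1000, n)          # W/m2 daily insolation, synthetic
rain24 = rng.exponential(5.0, n)         # mm rainfall in previous 24 h, synthetic
log_ecoli = 2.0 - 0.0015 * solar + 0.06 * rain24 + rng.normal(0, 0.3, n)

X = np.column_stack([solar, rain24])
model = LinearRegression().fit(X, log_ecoli)
pred = model.predict([[200.0, 20.0]])    # an overcast day after heavy rain
print(f"predicted log10 E. coli ~ {pred[0]:.2f} CFU/100 ml")
```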

  17. Kinetic modeling and sensitivity analysis of plasma-assisted combustion

    Science.gov (United States)

    Togai, Kuninori

    Plasma-assisted combustion (PAC) is a promising combustion enhancement technique that shows great potential for application to a number of different practical combustion systems. In this dissertation, the chemical kinetics associated with PAC are investigated numerically with a newly developed model that describes the chemical processes induced by plasma. To support the model development, experiments were performed using a plasma flow reactor in which fuel oxidation proceeds with the aid of plasma discharges below and above the self-ignition thermal limit of the reactive mixtures. The mixtures used were heavily diluted with Ar in order to study the reactions in temperature-controlled environments by suppressing the temperature changes due to chemical reactions. The temperature of the reactor was varied from 420 K to 1250 K and the pressure was fixed at 1 atm. Simulations were performed for the conditions corresponding to the experiments and the results are compared against each other. Important reaction paths were identified through path flux and sensitivity analyses. The reaction systems studied in this work are the oxidation of hydrogen, ethylene, and methane, as well as the kinetics of NOx in plasma. In the fuel oxidation studies, the reaction schemes that control fuel oxidation are analyzed and discussed. With all the fuels studied, the oxidation reactions were extended to lower temperatures with plasma discharges compared to the cases without plasma. The analyses showed that radicals produced by dissociation of the reactants in plasma play an important role in initiating the reaction sequence. At low temperatures, where the system exhibits a chain-terminating nature, reactions of HO2 were found to play important roles in overall fuel oxidation. The effectiveness of HO2 as a chain terminator was weakened in the ethylene oxidation system, because the reactions of C2H4 + O, which have low activation energies, deflect the flux of O atoms away from HO2. For the

  18. Analysing the Competency of Mathematical Modelling in Physics

    CERN Document Server

    Redish, Edward F

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and maths, the problem of how to successfully teach most of our students to use maths in physics effectively remains unsolved. A fundamental issue is that in physics, we don't just use maths, we think about the physical world with it. As a result, we make meaning with mathematical symbology in a different way than mathematicians do. In this talk we analyse how developing the competency of mathematical modelling is more than just "learning to do maths": it requires learning to blend physical meaning into mathematical representations and to use that physical meaning in solving problems. Examples are drawn from across the curriculum.

  19. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK-type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that these methods tend to underestimate the rate of adaptive evolution when selection fluctuates.

  20. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request to exit from the Pediatric Emergency Department (PED) for hospital admission; it represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow mapping through the PED was carried out over a continuous 2-year period from January 2011 to December 2012. Our method is to use the collected real data, based on actual visits made to the PED of the Regional University Hospital Center (CHRU) of Lille (France), in order to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey in the PED, representing as faithfully as possible the reality of the PED of the CHRU of Lille. This model allowed us to identify sources of delay in patient flow and aspects of PED activity that could be improved. It must be detailed enough to produce an analysis that identifies the dysfunctions of the PED and also proposes and evaluates indicators for preventing strain. Our survey is integrated into the French National Research Agency project titled "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  1. Application of simplified model to sensitivity analysis of solidification process

    Directory of Open Access Journals (Sweden)

    R. Szopa

    2007-12-01

    Full Text Available The sensitivity models of thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the numerical solution of inverse problems. The sensitivity models can be constructed using the direct approach, that is, by differentiation of the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the equations and conditions obtained can be very complex from both the mathematical and numerical points of view. In that case the other approach, consisting in the application of a differential quotient, can be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are shown.
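
    The two approaches contrasted here can be shown on a toy cooling curve T(t) = T_amb + (T0 - T_amb)exp(-a*t): the direct approach differentiates the model analytically, while the differential-quotient approach approximates the same sensitivity by finite differences. All parameter values below are arbitrary:

```python
"""Contrast of the two approaches on a toy cooling model
T(t) = T_amb + (T0 - T_amb)*exp(-a*t): direct differentiation gives
dT/da analytically; the differential quotient approximates it by
finite differences. Parameter values are arbitrary placeholders."""
import math

T_amb, T0 = 20.0, 1500.0          # ambient and pouring temperature, deg C

def T(t, a):
    return T_amb + (T0 - T_amb) * math.exp(-a * t)

def dT_da_exact(t, a):            # direct approach: differentiate the model
    return -(T0 - T_amb) * t * math.exp(-a * t)

def dT_da_quotient(t, a, h=1e-6): # differential-quotient approach
    return (T(t, a + h) - T(t, a - h)) / (2 * h)

t, a = 100.0, 0.01
print(dT_da_exact(t, a), dT_da_quotient(t, a))   # the two values agree closely
```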

  2. Geographically Isolated Wetlands and Catchment Hydrology: A Modified Model Analyses

    Science.gov (United States)

    Evenson, G.; Golden, H. E.; Lane, C.; D'Amico, E.

    2014-12-01

    Geographically isolated wetlands (GIWs), typically defined as depressional wetlands surrounded by uplands, support an array of hydrological and ecological processes. However, key research questions concerning the hydrological connectivity of GIWs and their impacts on downgradient surface waters remain unanswered. This is particularly important for regulation and management of these systems. For example, in the past decade United States Supreme Court decisions suggest that GIWs can be afforded protection if significant connectivity exists between these waters and traditional navigable waters. Here we developed a simulation procedure to quantify the effects of various spatial distributions of GIWs across the landscape on the downgradient hydrograph using a refined version of the Soil and Water Assessment Tool (SWAT), a catchment-scale hydrological simulation model. We modified the SWAT FORTRAN source code and employed an alternative hydrologic response unit (HRU) definition to facilitate an improved representation of GIW hydrologic processes and connectivity relationships to other surface waters, and to quantify their downgradient hydrological effects. We applied the modified SWAT model to an ~ 202 km2 catchment in the Coastal Plain of North Carolina, USA, exhibiting a substantial population of mapped GIWs. Results from our series of GIW distribution scenarios suggest that: (1) Our representation of GIWs within SWAT conforms to field-based characterizations of regional GIWs in most respects; (2) GIWs exhibit substantial seasonally-dependent effects upon downgradient base flow; (3) GIWs mitigate peak flows, particularly following high rainfall events; and (4) The presence of GIWs on the landscape impacts the catchment water balance (e.g., by increasing groundwater outflows). Our outcomes support the hypothesis that GIWs have an important catchment-scale effect on downgradient streamflow.

  3. Sensitivity of a Shallow-Water Model to Parameters

    CERN Document Server

    Kazantsev, Eugene

    2011-01-01

    An adjoint-based technique is applied to a shallow-water model in order to estimate the influence of the model's parameters on the solution. The parameters considered include the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter, and the amplitude of the wind stress. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, which is related to the lowest value of the cost function that can be obtained in a data assimilation experiment that controls this parameter; 2. the possibility of improving the model by controlling the parameter, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in the classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...

  4. Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis

    Science.gov (United States)

    2006-01-01

    Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis, by Mostafiz R. Chowdhury and Ala Tabiei (ARL-TR-3703, Adelphi, MD 20783-1145, January 2006).

  5. Sensitivity Analyses in Small Break LOCA with HPI-Failure: Effect of Break-Size in Secondary-Side Depressurization

    Science.gov (United States)

    Kinoshita, Ikuo; Torige, Toshihide; Yamada, Minoru

    2014-06-01

    In the case of total failure of the high pressure injection (HPI) system following a small break loss of coolant accident (SBLOCA) in a pressurized water reactor (PWR), the break size is so small that the primary system does not depressurize to the accumulator (ACC) injection pressure before the core is uncovered extensively. Therefore, steam generator (SG) secondary-side depressurization is necessary as an accident management measure in order to ensure accumulator actuation and core reflood. A thermal-hydraulic analysis using RELAP5/MOD3 was performed on SBLOCA with HPI failure for Oi Units 3/4 operated by Kansai Electric Power Co., which are conventional 4-loop PWR plants. The effectiveness of the SG secondary-side depressurization procedure was investigated for the real plant design and operational characteristics. The sensitivity analyses using RELAP5/MOD3.2 showed that the accident management measure was effective for a wide range of break sizes, orientations, and positions. The critical break was found to be a 3-inch cold-leg bottom break.

  6. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay.
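
    For readers who want to reproduce the general workflow, the record names two standard machine learning approaches (logistic regression and a support vector machine) trained on a small substance set. The sketch below uses scikit-learn on synthetic stand-ins for the assay and physicochemical variables; the feature matrix and labels are hypothetical, not the study's data.

    ```python
    # Hedged sketch of the two classifier families named above, trained on
    # synthetic stand-ins for the 72 training / 24 test substances.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train, X_test = rng.normal(size=(72, 6)), rng.normal(size=(24, 6))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy hazard label
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

    for clf in (LogisticRegression(), SVC(kernel="rbf")):
        model = make_pipeline(StandardScaler(), clf)  # scale, then classify
        model.fit(X_train, y_train)
        print(type(clf).__name__, "accuracy:", model.score(X_test, y_test))
    ```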

  7. Analyses of single nucleotide polymorphisms in selected nutrient-sensitive genes in weight-regain prevention: the DIOGENES study.

    Science.gov (United States)

    Larsen, Lesli H; Angquist, Lars; Vimaleswaran, Karani S; Hager, Jörg; Viguerie, Nathalie; Loos, Ruth J F; Handjieva-Darlenska, Teodora; Jebb, Susan A; Kunesova, Marie; Larsen, Thomas M; Martinez, J Alfredo; Papadaki, Angeliki; Pfeiffer, Andreas F H; van Baak, Marleen A; Sørensen, Thorkild Ia; Holst, Claus; Langin, Dominique; Astrup, Arne; Saris, Wim H M

    2012-05-01

    Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes. This study examined single nucleotide polymorphisms (SNPs) in presumed nutrient-sensitive candidate genes for obesity and obesity-related diseases for main and dietary interaction effects on weight, waist circumference, and fat mass regain over 6 mo. In total, 742 participants who had lost ≥ 8% of their initial body weight were randomly assigned to follow 1 of 5 different ad libitum diets with different glycemic indexes and contents of dietary protein. The SNP main and SNP-diet interaction effects were analyzed by using linear regression models, corrected for multiple testing by using Bonferroni correction and evaluated by using quantile-quantile (Q-Q) plots. After correction for multiple testing, none of the SNPs were significantly associated with weight, waist circumference, or fat mass regain. Q-Q plots showed that ALOX5AP rs4769873 showed a higher observed than predicted P value for the association with less waist circumference regain over 6 mo (-3.1 cm/allele; 95% CI: -4.6, -1.6; P/Bonferroni-corrected P = 0.000039/0.076), independently of diet. Additional associations were identified by using Q-Q plots for SNPs in ALOX5AP, TNF, and KCNJ11 for main effects; in LPL and TUB for glycemic index interaction effects on waist circumference regain; in GHRL, CCK, MLXIPL, and LEPR on weight; in PPARC1A, PCK2, ALOX5AP, PYY, and ADRB3 on waist circumference; and in PPARD, FABP1, PLAUR, and LPIN1 on fat mass regain for dietary protein interaction. The observed effects of SNP-diet interactions on weight, waist, and fat mass regain suggest that genetic variation in nutrient-sensitive genes can modify the response to diet. This trial was registered at clinicaltrials.gov as NCT00390637.

  8. Illustrating sensitivity in environmental fate models using partitioning maps - application to selected contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, T.; Wania, F. [Univ. of Toronto at Scarborough - DPES, Toronto (Canada)

    2004-09-15

    Generic environmental multimedia fate models are important tools in assessing the impact of organic pollutants. Because of the limited possibilities for evaluating generic models against measured data and the increasing regulatory use of such models, uncertainties in model input and output are of considerable concern. This has led to a demand for sensitivity and uncertainty analyses of the outputs of environmental fate models. Usually, variations in model predictions of the environmental fate of organic contaminants are analyzed for only one or at most a few selected chemicals, even though parameter sensitivity and contribution to uncertainty differ widely between chemicals. We recently presented a graphical method that allows for the comprehensive investigation of model sensitivity and uncertainty for all neutral organic chemicals simultaneously. This is achieved by defining a two-dimensional hypothetical "chemical space" as a function of the equilibrium partition coefficients between air, water, and octanol (KOW, KAW, KOA), and plotting the sensitivity and/or uncertainty of a specific model result to each input parameter as a function of this chemical space. Here we show how such sensitivity maps can be used to quickly identify the variables with the highest influence on the environmental fate of selected contaminants: chlorobenzenes, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), hexachlorocyclohexanes (HCHs) and brominated flame retardants (BFRs).

  9. Time-dependent global sensitivity analysis with active subspaces for a lithium ion battery model

    CERN Document Server

    Constantine, Paul G

    2016-01-01

    Renewable energy researchers use computer simulation to aid the design of lithium ion storage devices. The underlying models contain several physical input parameters that affect model predictions. Effective design and analysis must understand the sensitivity of model predictions to changes in model parameters, but global sensitivity analyses become increasingly challenging as the number of input parameters increases. Active subspaces are part of an emerging set of tools to reveal and exploit low-dimensional structures in the map from high-dimensional inputs to model outputs. We extend a linear model-based heuristic for active subspace discovery to time-dependent processes and apply the resulting technique to a lithium ion battery model. The results reveal low-dimensional structure that a designer may exploit to efficiently study the relationship between parameters and predictions.
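
    The core computation behind active subspaces can be sketched compactly: estimate C = E[grad f grad f^T] by Monte Carlo, eigendecompose it, and look for a gap in the spectrum. The toy function below stands in for the battery model, which is not reproduced here.

    ```python
    # Sketch of active-subspace discovery on a toy function (not the lithium
    # ion battery model): a spectral gap in C = E[grad f grad f^T] reveals a
    # low-dimensional active subspace.
    import numpy as np

    def grad_f(x):
        w = np.array([1.0, 0.5, 0.1, 0.01, 0.01])  # dominant direction along w
        return np.exp(w @ x) * w

    rng = np.random.default_rng(1)
    samples = rng.uniform(-1, 1, size=(500, 5))
    grads = np.array([grad_f(x) for x in samples])
    C = grads.T @ grads / len(grads)        # Monte Carlo estimate of C
    eigvals, eigvecs = np.linalg.eigh(C)    # ascending eigenvalues
    print("spectrum (descending):", eigvals[::-1])
    print("leading active direction:", eigvecs[:, -1])
    ```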

  10. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step towards solving the problem. Water scarcity in a region is gauged by the water scarcity index (WSI), which incorporates water supply and water demand. To obtain the WSI, a neural network model and a system dynamics model (SDM) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of the WSI within that region. To project the WSI into the future, a logistic model, grey prediction, and statistical methods are applied to forecast the underlying variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, Sudan's water situation will improve after intervention.

  11. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
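
    Variance-based (Sobol) indices of the kind used here are commonly estimated with a pick-freeze scheme. As a hedged, self-contained illustration, the sketch below applies the Saltelli (2010) first-order and Jansen total-effect estimators to the standard Ishigami test function rather than to the 39-parameter sea ice model.

    ```python
    # Pick-freeze estimation of Sobol indices on the Ishigami benchmark.
    import numpy as np

    def ishigami(X, a=7.0, b=0.1):
        return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
                + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

    rng = np.random.default_rng(2)
    N, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (N, d))
    B = rng.uniform(-np.pi, np.pi, (N, d))
    fA, fB = ishigami(A), ishigami(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # vary only input i
        fABi = ishigami(ABi)
        S = np.mean(fB * (fABi - fA)) / var          # first-order (Saltelli 2010)
        ST = 0.5 * np.mean((fA - fABi) ** 2) / var   # total effect (Jansen)
        print(f"x{i + 1}: S = {S:.2f}, ST = {ST:.2f}")
    ```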

  12. Detecting tipping points in ecological models with sensitivity analysis

    NARCIS (Netherlands)

    Broeke, G.A. ten; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, J.

    2016-01-01

    Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The applicabi

  13. Detecting Tipping points in Ecological Models with Sensitivity Analysis

    NARCIS (Netherlands)

    Broeke, ten G.A.; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, Jaap

    2016-01-01

    Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The appli

  14. Sensitivity analysis of a sound absorption model with correlated inputs

    Science.gov (United States)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.

  15. Sensitivity of Multiangle, Multispectral Polarimetric Remote Sensing Over Open Oceans to Water-Leaving Radiance: Analyses of RSP Data Acquired During the MILAGRO Campaign

    Science.gov (United States)

    Chowdhary, Jacek; Cairns, Brian; Waquet, Fabien; Knobelspiesse, Kirk; Ottaviani, Matteo; Redemann, Jens; Travis, Larry; Mishchenko, Michael

    2012-01-01

    For remote sensing of aerosol over the ocean, there is a contribution from light scattered underwater. The brightness and spectrum of this light depends on the biomass content of the ocean, such that variations in the color of the ocean can be observed even from space. Rayleigh scattering by pure sea water, and Rayleigh-Gans type scattering by plankton, causes this light to be polarized with a distinctive angular distribution. To study the contribution of this underwater light polarization to multiangle, multispectral observations of polarized reflectance over ocean, we previously developed a hydrosol model for use in underwater light scattering computations that produces realistic variations of the ocean color and the underwater light polarization signature of pure sea water. In this work we review this hydrosol model, include a correction for the spectrum of the particulate scattering coefficient and backscattering efficiency, and discuss its sensitivity to variations in colored dissolved organic matter (CDOM) and in the scattering function of marine particulates. We then apply this model to measurements of total and polarized reflectance that were acquired over open ocean during the MILAGRO field campaign by the airborne Research Scanning Polarimeter (RSP). Analyses show that our hydrosol model faithfully reproduces the water-leaving contributions to RSP reflectance, and that the sensitivity of these contributions to Chlorophyll a concentration [Chl] in the ocean varies with the azimuth, height, and wavelength of observations. We also show that the impact of variations in CDOM on the polarized reflectance observed by the RSP at low altitude is comparable to or much less than the standard error of this reflectance whereas their effects in total reflectance may be substantial (i.e. up to >30%). Finally, we extend our study of polarized reflectance variations with [Chl] and CDOM to include results for simulated spaceborne observations.

  16. Phenotypic and Genetic Analyses of the Varroa Sensitive Hygienic Trait in Russian Honey Bee (Hymenoptera: Apidae) Colonies

    Science.gov (United States)

    Kirrane, Maria J.; de Guzman, Lilia I.; Holloway, Beth; Frake, Amanda M.; Rinderer, Thomas E.; Whelan, Pádraig M.

    2015-01-01

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an “actual brood removal assay” that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r2 = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock. PMID:25909856

  17. Hydrophilic property of 316L stainless steel after treatment by atmospheric pressure corona streamer plasma using surface-sensitive analyses

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hamarneh, Ibrahim, E-mail: hamarnehibrahim@yahoo.com [Department of Physics, Faculty of Science, Al-Balqa Applied University, Salt 19117 (Jordan); Pedrow, Patrick [School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA 99164 (United States); Eskhan, Asma; Abu-Lail, Nehal [Gene and Linda Voiland School of Chemical Engineering and Bioengineering, Washington State University, Pullman, WA 99164 (United States)

    2012-10-15

    Highlights: (1) The surface hydrophilic property of surgical-grade 316L stainless steel was enhanced by Ar-O2 corona streamer plasma treatment. (2) Hydrophilicity, surface morphology, roughness, and chemical composition before and after plasma treatment were evaluated. (3) Contact angle measurements and surface-sensitive analysis techniques, including XPS and AFM, were carried out. (4) Optimum plasma treatment conditions for the SS 316L surface were determined. - Abstract: Surgical-grade 316L stainless steel (SS 316L) had its surface hydrophilic property enhanced by processing in a corona streamer plasma reactor using O2 gas mixed with Ar at atmospheric pressure. Reactor excitation was 60 Hz ac high voltage (0-10 kV RMS) applied to a multi-needle-to-grounded-screen electrode configuration. The treated surface was characterized with a contact angle tester. Surface free energy (SFE) for the treated stainless steel increased measurably compared to the untreated surface. The Ar-O2 plasma was more effective in enhancing the SFE than Ar-only plasma. Optimum conditions for the plasma treatment system used in this study were obtained. X-ray photoelectron spectroscopy (XPS) characterization of the chemical composition of the treated surfaces confirms the existence of new oxygen-containing functional groups contributing to the change in the hydrophilic nature of the surface. These new functional groups were generated by surface reactions caused by reactive oxidation of substrate species. Atomic force microscopy (AFM) images were generated to investigate morphological and roughness changes on the plasma-treated surfaces. The aging effect in air after treatment was also studied.

  18. Phenotypic and genetic analyses of the varroa sensitive hygienic trait in Russian honey bee (Hymenoptera: Apidae) colonies.

    Directory of Open Access Journals (Sweden)

    Maria J Kirrane

    Full Text Available Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an "actual brood removal assay" that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r2 = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock.

  19. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The results show that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
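
    Sensitivity rankings like the one reported here are typically based on the normalized forward sensitivity index, Upsilon_p = (dR0/dp)(p/R0). The sketch below computes it by central finite differences for a generic toy R0, since the paper's five-compartment expressions are not reproduced in the record.

    ```python
    # Normalized forward sensitivity index of R0 with respect to each
    # parameter, Upsilon_p = (dR0/dp) * (p / R0). The R0 below is a toy.
    def r0(beta, gamma, mu):
        return beta / (gamma + mu)  # hypothetical basic reproduction number

    def sensitivity_index(f, params, name, h=1e-6):
        p = params[name]
        hi = dict(params, **{name: p + h})
        lo = dict(params, **{name: p - h})
        dfdp = (f(**hi) - f(**lo)) / (2 * h)  # central finite difference
        return dfdp * p / f(**params)

    params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
    for name in params:
        print(name, round(sensitivity_index(r0, params, name), 3))
    # beta -> +1.0 (a 10% rise in beta raises R0 by 10%); gamma, mu negative
    ```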

  20. Development of microbial-enzyme-mediated decomposition model parameters through steady-state and dynamic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL

    2013-01-01

    We developed a Microbial-ENzyme-mediated Decomposition (MEND) model, based on Michaelis-Menten kinetics, that describes the dynamics of physically defined pools of soil organic carbon (SOC). These include particulate, mineral-associated, and dissolved organic carbon (POC, MOC, and DOC, respectively), microbial biomass, and associated exoenzymes. The ranges and/or distributions of parameters were determined by both analytical steady-state and dynamic analyses with SOC data from the literature. We used an improved multi-objective parameter sensitivity analysis (MOPSA) to identify the most important parameters for the full model: maintenance of microbial biomass, turnover and synthesis of enzymes, and carbon use efficiency (CUE). The model predicted that an increase of 2 °C (baseline temperature = 12 °C) caused the pools of POC-cellulose, MOC, and total SOC to increase with dynamic CUE and decrease with constant CUE, as indicated by the 50% confidence intervals. Regardless of dynamic or constant CUE, the pool sizes of POC, MOC, and total SOC varied from -8% to 8% under +2 °C. The scenario analysis using a single parameter set indicates that higher temperature with dynamic CUE might result in greater net increases in both POC-cellulose and MOC pools. The different dynamics of the various SOC pools reflected the catalytic functions of specific enzymes targeting specific substrates and the interactions between microbes, enzymes, and SOC. With the feasible parameter values estimated in this study, models incorporating fundamental principles of microbial-enzyme dynamics can lead to simulation results qualitatively different from those of traditional models with fast/slow/passive pools.
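
    The record's core mechanism, Michaelis-Menten uptake split by a carbon use efficiency, is easy to state as a toy two-pool system. The sketch below is a deliberately simplified stand-in for MEND (one substrate pool, one biomass pool, illustrative rate constants), not the calibrated model.

    ```python
    # Toy Michaelis-Menten decomposition with carbon use efficiency (CUE):
    # substrate S is taken up at Vmax*B*S/(Km+S); a fraction CUE becomes
    # biomass B, the rest is respired; dead biomass returns to S.
    from scipy.integrate import solve_ivp

    Vmax, Km, CUE, kB = 1.0, 250.0, 0.4, 0.005  # hypothetical parameters

    def toy_mend(t, y):
        S, B = y
        uptake = Vmax * B * S / (Km + S)
        dS = -uptake + kB * B
        dB = CUE * uptake - kB * B
        return [dS, dB]

    sol = solve_ivp(toy_mend, (0.0, 365.0), [1000.0, 50.0])
    print("substrate after one year:", sol.y[0, -1])
    print("biomass after one year:  ", sol.y[1, -1])
    ```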

  1. Pan-European modelling of riverine nutrient concentrations - spatial patterns, source detection, trend analyses, scenario modelling

    Science.gov (United States)

    Bartosova, Alena; Arheimer, Berit; Capell, Rene; Donnelly, Chantal; Strömqvist, Johan

    2016-04-01

    Nutrient transport models are important tools for large-scale assessments of macro-nutrient fluxes (nitrogen, phosphorus) and thus can serve as support tools for environmental assessment and management. Results from model applications over large areas, i.e. from major river basin to continental scales, can fill a gap where monitoring data are not available. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, which is based on open data sources. We investigate the ability of the E-HYPE model to replicate the spatial and temporal variations found in observed time series of riverine N and P concentrations, and illustrate the model's usefulness for nutrient source detection, trend analyses, and scenario modelling. The results show spatial patterns in N concentration in rivers across Europe which can be used to further our understanding of nutrient issues across the European continent. E-HYPE results show hot spots with the highest concentrations of total nitrogen in Western Europe along the North Sea coast. Source apportionment was performed to rank sources of nutrient inflow from land to sea along the European coast. An integrated dynamic model such as E-HYPE also allows us to investigate the impacts of climate change and of programmes of measures, which was illustrated in a couple of scenarios for the Baltic Sea. Comparing model results with observations shows large uncertainty in many of the data sets and in the assumptions used in the model set-up, e.g. point-source release estimates. However, evaluation of model performance at a number of measurement sites in Europe shows that mean N concentration levels are generally well simulated. P levels are less well predicted, which is expected, as the variability of P concentrations in both time and space is higher. Comparing model performance with model set-ups using local data for the Weaver River (UK) did not result in systematically better model performance, which highlights the complexity of model

  2. Identifying Spatially Variable Sensitivity of Model Predictions and Calibrations

    Science.gov (United States)

    McKenna, S. A.; Hart, D. B.

    2005-12-01

    Stochastic inverse modeling provides an ensemble of stochastic property fields, each calibrated to measured steady-state and transient head data. These calibrated fields are used as input for predictions of other processes (e.g., contaminant transport, advective travel time). Use of the entire ensemble of fields transfers spatial uncertainty in hydraulic properties to uncertainty in the predicted performance measures. A sampling-based sensitivity coefficient is proposed to determine the sensitivity of the performance measures to the uncertain values of hydraulic properties at every cell in the model domain. The basis of this sensitivity coefficient is the Spearman rank correlation coefficient. Sampling-based sensitivity coefficients are demonstrated using a recent set of transmissivity (T) fields created through a stochastic inverse calibration process for the Culebra dolomite in the vicinity of the WIPP site in southeastern New Mexico. The stochastic inverse models were created using a unique approach to condition a geologically-based conceptual model of T to measured T values via a multiGaussian residual field. This field is calibrated to both steady-state and transient head data collected over an 11 year period. Maps of these sensitivity coefficients provide a means of identifying the locations in the study area to which both the value of the model calibration objective function and the predicted travel times to a regulatory boundary are most sensitive to the T and head values. These locations can be targeted for deployment of additional long-term monitoring resources. Comparison of areas where the calibration objective function and the travel time have high sensitivity shows that these are not necessarily coincident with regions of high uncertainty. The sampling-based sensitivity coefficients are compared to analytically derived sensitivity coefficients at the 99 pilot point locations. Results of the sensitivity mapping exercise are being used in combination
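
    The sampling-based coefficient described above reduces to computing, cell by cell, the Spearman rank correlation between the uncertain property in the ensemble and the predicted performance measure. The sketch below builds such a sensitivity map on synthetic fields; it is not the WIPP Culebra ensemble.

    ```python
    # Spearman-rank sensitivity map over model cells: correlate each cell's
    # (synthetic) transmissivity with an ensemble performance measure.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(3)
    n_real, n_cells = 200, 400
    logT = rng.normal(size=(n_real, n_cells))        # ensemble of property fields
    # Toy performance measure dominated by the first ten cells
    travel_time = -logT[:, :10].sum(axis=1) + 0.1 * rng.normal(size=n_real)

    sens_map = np.array([spearmanr(logT[:, j], travel_time)[0]
                         for j in range(n_cells)])
    print("most influential cells:", np.argsort(-np.abs(sens_map))[:10])
    ```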

  3. Exploring sensitivity of a multistate occupancy model to inform management decisions

    Science.gov (United States)

    Green, A.W.; Bailey, L.L.; Nichols, J.D.

    2011-01-01

    Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first example involves a three-state dynamic model involving occupied states with and without successful reproduction (California spotted owl Strix occidentalis occidentalis), and the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog Lithobates sylvatica breeding and metamorphosis). In many ways, multistate sensitivity metrics behave in similar ways as standard occupancy sensitivities. When equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate

  4. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  5. Sensitivity-based research prioritization through stochastic characterization modeling

    DEFF Research Database (Denmark)

    Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter

    2017-01-01

    Product developers using life cycle toxicity characterization models to understand the potential impacts of chemical emissions face serious challenges related to large data demands and high input data uncertainty. This motivates greater focus on model sensitivity toward input parameter variability...... to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...

  6. Sensitivity analysis of the fission gas behavior model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  7. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    Science.gov (United States)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for a twice-a-day forecast at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions, and try to understand the important features that may contribute to the success of the forecast.

  8. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
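
    The aggregation step proposed in this line of work weights per-component sensitivity indices by the inertia (explained variance) of each principal component. The sketch below shows only that aggregation under stated assumptions: the per-component indices S are assumed to come from any variance-based method applied to the PC scores, and the output data are synthetic placeholders.

    ```python
    # Inertia-weighted "generalised" sensitivity indices over a PCA expansion
    # of a dynamic model output (sketch; data and indices are placeholders).
    import numpy as np

    def generalised_indices(Y, S):
        """Y: (n_runs, n_times) outputs; S: (n_factors, k) per-PC indices."""
        Yc = Y - Y.mean(axis=0)
        sing = np.linalg.svd(Yc, compute_uv=False)
        inertia = sing**2 / np.sum(sing**2)        # variance share of each PC
        w = inertia[: S.shape[1]]
        return S @ w / w.sum()                     # weighted mean over PCs

    Y = np.random.default_rng(4).normal(size=(100, 50))
    S = np.array([[0.7, 0.2, 0.1],                 # factor 1 indices on PC1..PC3
                  [0.1, 0.5, 0.3]])                # factor 2 indices on PC1..PC3
    print(generalised_indices(Y, S))
    ```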

  9. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
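
    Sloppiness is usually diagnosed from the eigenvalue spectrum of the approximate Hessian J^T J of a least-squares fit, with J the Jacobian of the model residuals with respect to log parameters. The sketch below uses the classic sum-of-exponentials toy model, not one of the systems biology models surveyed in the paper.

    ```python
    # Eigenvalue spectrum of J^T J for a sum-of-exponentials fit: nearly
    # degenerate rates produce eigenvalues spread over many decades.
    import numpy as np

    t = np.linspace(0.0, 5.0, 40)
    theta0 = np.log(np.array([1.0, 0.9, 0.8]))     # log decay rates

    def model(log_theta):
        rates = np.exp(log_theta)
        return np.exp(-np.outer(t, rates)).sum(axis=1)

    def jacobian(log_theta, h=1e-6):
        cols = []
        for i in range(len(log_theta)):
            dp = np.zeros_like(log_theta)
            dp[i] = h
            cols.append((model(log_theta + dp) - model(log_theta - dp)) / (2 * h))
        return np.array(cols).T

    J = jacobian(theta0)
    eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]    # descending eigenvalues
    print("decades spanned:", np.log10(eigvals[0] / eigvals[-1]))
    ```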

  10. Shape sensitivity analysis in numerical modelling of solidification

    Directory of Open Access Journals (Sweden)

    E. Majchrzak

    2007-12-01

    Full Text Available The methods of sensitivity analysis constitute a very effective tool at the stage of numerical modelling of casting solidification. Among other things, it is possible to rebuild the basic numerical solution into solutions corresponding to disturbed values of the physical and geometrical parameters of the process. In this paper the problem of shape sensitivity analysis is discussed. A non-homogeneous casting-mould domain is considered and the perturbation of the solidification process due to changes in geometrical dimensions is analyzed. From the mathematical point of view the sensitivity model is rather complex, but its solution gives interesting information concerning the mutual connections between the kinetics of casting solidification and its basic dimensions. In the final part of the paper an example of computations is shown. At the stage of numerical realization, the finite difference method has been applied.

  11. Parameter identification and global sensitivity analysis of Xinanjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng SONG

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long run times and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected for quantification of the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
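
    Step (1), Morris screening, works by pushing one factor at a time along random trajectories and summarizing the resulting elementary effects by mu* (overall influence) and sigma (non-linearity and interactions). The sketch below implements that idea on a toy function; it is not the Xinanjiang model itself.

    ```python
    # Minimal Morris elementary-effects screening on a toy model over [0, 1]^3.
    import numpy as np

    def f(x):
        return x[0] + 2 * x[1] ** 2 + x[0] * x[2]

    rng = np.random.default_rng(5)
    d, r, delta = 3, 50, 0.5
    effects = [[] for _ in range(d)]
    for _ in range(r):                   # r random one-at-a-time trajectories
        x = rng.uniform(0, 1 - delta, size=d)
        y = f(x)
        for i in rng.permutation(d):     # move each factor once, random order
            x[i] += delta
            y_new = f(x)
            effects[i].append((y_new - y) / delta)
            y = y_new

    for i, e in enumerate(effects):
        e = np.asarray(e)
        print(f"x{i + 1}: mu* = {np.abs(e).mean():.2f}, sigma = {e.std():.2f}")
    ```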

  12. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    Science.gov (United States)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis research is not accurate enough and has limited reference value, because the mathematical models used are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm, respectively. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the characteristic curves of the experimental and simulated step responses under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their patterns of change are analyzed. Then the sensitivity

  13. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  14. A Culture-Sensitive Agent in Kirman's Ant Model

    Science.gov (United States)

    Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu

    The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people, and the global economy has become more complex than ever before. Mark Buchanan [12] argued that agent-based computer models could help prevent another financial crisis, a view that has been particularly influential. These are two reasons why a culture-sensitive agent for the financial market has become so important. The aim of this article is therefore to establish a culture-sensitive agent and forecast the process of change in herding behavior in the financial market. We base our study on Kirman's Ant Model [4,5] and Hofstede's national culture framework [11] to establish our culture-sensitive agent-based model. Kirman's Ant Model is well known and describes financial market herding behavior arising from investors' expectations about the future. Hofstede's Culture's Consequences study surveyed IBM staff in 72 different countries to understand cultural differences. As a result, this paper focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, creates a culture-sensitive agent, and predicts the process of change in herding behavior in the financial market. To conclude, this study will be of importance in explaining herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how people's herding beliefs across different cultures relate to their financial market strategies.

  15. A model for perception-based identification of sensitive skin

    NARCIS (Netherlands)

    Richters, R.J.H.; Uzunbajakava, N.E.; Hendriks, J.C.; Bikker, J.W.; Erp, P.E.J. van; Kerkhof, P.C.M. van de

    2017-01-01

    BACKGROUND: With the high prevalence of sensitive skin (SS) and the lack of strong evidence on pathomechanisms, of consensus on associated symptoms, of proof of the existence of 'general' SS, and of tools to recruit subjects, this topic attracts increasing research attention. OBJECTIVE: To create a model for selecting

  16. Culturally Sensitive Dementia Caregiving Models and Clinical Practice

    Science.gov (United States)

    Daire, Andrew P.; Mitcham-Smith, Michelle

    2006-01-01

    Family caregiving for individuals with dementia is an increasingly complex issue that affects the caregivers' and care recipients' physical, mental, and emotional health. This article presents 3 key culturally sensitive caregiver models along with clinical interventions relevant for mental health counseling professionals.

  17. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus ... and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.

  18. A global sensitivity analysis of the PlumeRise model of volcanic plumes

    Science.gov (United States)

    Woodhouse, Mark J.; Hogg, Andrew J.; Phillips, Jeremy C.

    2016-10-01

    Integral models of volcanic plumes allow predictions of plume dynamics to be made and the rapid estimation of volcanic source conditions from observations of the plume height by model inversion. Here we introduce PlumeRise, an integral model of volcanic plumes that incorporates a description of the state of the atmosphere, includes the effects of wind and the phase change of water, and has been developed as a freely available web-based tool. The model can be used to estimate the height of a volcanic plume when the source conditions are specified, or to infer the strength of the source from an observed plume height through a model inversion. The predictions of the volcanic plume dynamics produced by the model are analysed in four case studies in which the atmospheric conditions and the strength of the source are varied. A global sensitivity analysis of the model to a selection of model inputs is performed and the results are analysed using parallel coordinate plots for visualisation and variance-based sensitivity indices to quantify the sensitivity of model outputs. We find that if the atmospheric conditions do not vary widely then there is a small set of model inputs that strongly influence the model predictions. When estimating the height of the plume, the source mass flux has a controlling influence on the model prediction, while variations in the plume height strongly affect the inferred value of the source mass flux when performing inversion studies. The values taken for the entrainment coefficients have a particularly important effect on the quantitative predictions. The dependencies of the model outputs on variations in the inputs are discussed and compared to simple algebraic expressions that relate source conditions to the height of the plume.
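
    The inversion described here (inferring source strength from an observed plume height) can be illustrated with the classical calm-atmosphere plume-rise scaling H = c * F^(1/4) * N^(-3/4) in place of the full PlumeRise integral model; since the height is monotone in the buoyancy flux F, a simple bisection recovers F. The coefficient c and buoyancy frequency N below are illustrative assumptions, not PlumeRise values.

    ```python
    # Toy plume-height inversion by bisection, using the one-fourth-power
    # buoyancy-flux scaling as a stand-in for the full integral model.
    def plume_height(F, c=5.0, N=0.01):
        return c * F ** 0.25 * N ** -0.75   # height in metres, F in m^4 s^-3

    def invert_height(H_obs, lo=1.0, hi=1e12, tol=1e-9):
        while hi / lo > 1 + tol:            # bisection in log space
            mid = (lo * hi) ** 0.5
            if plume_height(mid) < H_obs:
                lo = mid
            else:
                hi = mid
        return (lo * hi) ** 0.5

    F = invert_height(10_000.0)             # source flux for a 10 km plume
    print(f"inferred buoyancy flux: {F:.3e}")
    print("forward check:", plume_height(F))
    ```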

  19. Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model.

    Directory of Open Access Journals (Sweden)

    Daniel Robertson

    Full Text Available Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. These studies can be interpreted to be the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model's parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. Results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies.

  20. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation. ... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences

  1. A Conceptual Model for Water Sensitive City in Surabaya

    Science.gov (United States)

    Pamungkas, A.; Tucunan, K. P.; Navastara, A.; Idajati, H.; Pratomoatmojo, N. A.

    2017-08-01

    Frequently inundated areas, low quality of water supply, and a high dependence on external water sources are some of the key problems in Surabaya's water balance. Many aspects of urban development have stimulated these problems. To uncover the complexity of the water balance in Surabaya, a conceptual model for a water sensitive city is constructed to find the optimum solution. System dynamics modeling is utilized to assist and enrich the conceptual model. A secondary analysis of a wide range of data directs the process of building the conceptual model. Focus group discussions (FGDs) involving experts from multiple disciplines were also used to finalize the conceptual model. Based on these methods, the model has four main sub-models: flooding, land use change, water demand and water supply. The model consists of 35 key variables illustrating the challenges in Surabaya's urban water.
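
    As a flavour of what such a system-dynamics formulation looks like in code, here is a deliberately tiny stock-flow sketch with one water stock, growing demand and a runoff loss; the sub-model structure and every coefficient are hypothetical, not values from the Surabaya model.

        # Minimal stock-flow sketch of an urban water balance.
        import numpy as np

        years = 20
        storage = np.zeros(years + 1)  # water stock available to the city
        storage[0] = 100.0
        demand0, growth = 60.0, 0.03   # demand grows with urban development

        for t in range(years):
            supply = 70.0                          # internal + external sources
            demand = demand0 * (1 + growth) ** t   # land-use change drives demand
            runoff_loss = 0.1 * storage[t]         # flooding/runoff proxy
            storage[t + 1] = max(storage[t] + supply - demand - runoff_loss, 0.0)

        print("final storage:", round(storage[-1], 1))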

  2. A non-human primate model for gluten sensitivity.

    Directory of Open Access Journals (Sweden)

    Michael T Bethune

    Full Text Available BACKGROUND AND AIMS: Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. METHODS: Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. RESULTS: When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. CONCLUSIONS: Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.

  3. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is neither thorough nor user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
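
    The same random-intercept growth model can be sketched outside SPSS. Below is a hedged, analogous example using Python's statsmodels MixedLM on simulated six-wave data; all variable names and values are hypothetical, not Project P.A.T.H.S. data.

        # Random-intercept linear mixed model on simulated longitudinal data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_subj, n_wave = 50, 6
        subj = np.repeat(np.arange(n_subj), n_wave)
        wave = np.tile(np.arange(n_wave), n_subj)
        intercepts = rng.normal(10, 2, n_subj)  # random intercept per subject
        y = intercepts[subj] + 0.5 * wave + rng.normal(0, 1, n_subj * n_wave)
        data = pd.DataFrame({"y": y, "wave": wave, "subject": subj})

        # Fixed effect of time, random subject effect (random intercept).
        result = smf.mixedlm("y ~ wave", data, groups=data["subject"]).fit()
        print(result.summary())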

  4. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    Science.gov (United States)

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would

  5. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  6. Sensitivity experiments to mountain representations in spectral models

    Directory of Open Access Journals (Sweden)

    U. Schlese

    2000-06-01

    Full Text Available This paper describes a set of sensitivity experiments to several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography directly from the US Navy data and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling the model performance even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (asymmetric part of the geopotential at 500 mb), where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between amplitude of the stationary waves and smoothness of the surface fields.

  7. Stochastic sensitivity of a bistable energy model for visual perception

    Science.gov (United States)

    Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain, as a complex dynamical system, can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensities and levels of system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
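
    The noise-induced switching that the stochastic sensitivity function quantifies can be reproduced with a few lines of Euler-Maruyama integration. The double-well potential below is a generic stand-in for the paper's energy model, and the noise level is illustrative.

        # Noise-induced transitions in a bistable double-well system.
        import numpy as np

        rng = np.random.default_rng(2)
        dt, steps, sigma = 1e-3, 100_000, 0.35
        x = np.empty(steps)
        x[0] = -1.0  # start in the left well (one percept)

        for i in range(steps - 1):
            drift = x[i] - x[i] ** 3  # dx/dt = -V'(x), V(x) = x**4/4 - x**2/2
            x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

        switches = np.count_nonzero(np.diff(np.sign(x)))
        print("zero crossings (transition proxy):", switches)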

  8. Uncertainty and Sensitivity Analyses of the Simulated Seawater-Freshwater Mixing Zones in Steady-State Coastal Aquifers

    Institute of Scientific and Technical Information of China (English)

    赵忠伟; 赵坚; 辛沛; 华国芬; 金光球

    2015-01-01

    The uncertainty and sensitivity of predicted positions and thicknesses of seawater-freshwater mixing zones with respect to uncertainties in saturated hydraulic conductivity, porosity, molecular diffusivity, and longitudinal and transverse dispersivities were investigated in both head-control and flux-control inland boundary systems. The results show that the uncertainties and sensitivities of the predicted results vary between boundary systems. With the same designed matrix of uncertain factors in the simulation experiments, the variance of the predicted positions and thicknesses in the flux-control system is much larger than that predicted in the head-control system. In a head-control system, the most sensitive factors for the predicted position of the mixing zone are the inland freshwater head and the transverse dispersivity. However, the predicted position of the mixing zone is more sensitive to the saturated hydraulic conductivity in a flux-control system. In a head-control system, the most sensitive factors for the predicted thickness of the mixing zone include the transverse dispersivity, molecular diffusivity, porosity, and longitudinal dispersivity, but the predicted thickness is more sensitive to the saturated hydraulic conductivity in a flux-control system. These findings improve our understanding of the development of seawater-freshwater mixing zones during seawater intrusion, and provide technical support for groundwater resource management in coastal aquifers.

  9. Stream Tracer Integrity: Comparative Analyses of Rhodamine-WT and Sodium Chloride through Transient Storage Modeling

    Science.gov (United States)

    Smull, E. M.; Wlostowski, A. N.; Gooseff, M. N.; Bowden, W. B.; Wollheim, W. M.

    2013-12-01

    Solute transport in natural channels describes the transport of water and dissolved matter through a river reach of interest. Conservative tracers allow us to label a parcel of stream water, such that we can track its movement downstream through space and time. A transient storage model (TSM) can be fit to the breakthrough curve (BTC) following a stream tracer experiment, as a way to quantify advection, dispersion, and transient storage processes. Arctic streams and rivers, in particular, are continuously underlain by permafrost, which provides for a simplified surface water-groundwater exchange. Sodium chloride (NaCl) and Rhodamine-WT (RWT) are widely used tracers, and differences between the two in conservative behavior and detection limits have been noted in small-scale field and laboratory studies. This study seeks to further this understanding by applying the OTIS model to NaCl and RWT BTC data from a field study on the Kuparuk River, Alaska, at varying flow rates. There are two main questions to be answered: 1) Do differences in NaCl and RWT manifest in OTIS parameter values? 2) Are the OTIS model results reliable for NaCl, RWT, or both? Fieldwork was performed in the summer of 2012 on the Kuparuk River, and modeling was performed using a modified OTIS framework, which provided for parameter optimization and further global sensitivity analyses. The results of this study will contribute to the greater body of literature surrounding Arctic stream hydrology and will inform the methodology of future tracer field studies. Additionally, the modeling work will provide an analysis of OTIS parameter identifiability, and assess stream tracer integrity (i.e. how well the BTC data represent the system) and its relation to TSM performance (i.e. how well the TSM can find a unique fit to the BTC data). The quantitative tools used can be applied to other solute transport studies, to better understand potential deviations in model outcome due to stream tracer choice and
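
    At the core of OTIS-type transient storage models is a first-order exchange between the main channel and a storage zone. The sketch below integrates just that exchange term, with advection and dispersion omitted for brevity; all parameter values are illustrative, not fitted Kuparuk values.

        # Channel/storage-zone exchange, the transient storage core of OTIS.
        import numpy as np

        dt, t_end = 1.0, 3600.0
        alpha, A, As = 1e-3, 1.0, 0.2  # exchange rate (1/s), channel/storage areas
        n = int(t_end / dt)
        C = np.zeros(n)   # main channel concentration (mg/L)
        Cs = np.zeros(n)  # storage zone concentration (mg/L)
        C[0] = 10.0       # instantaneous tracer labeling

        for i in range(n - 1):
            C[i + 1] = C[i] + dt * alpha * (Cs[i] - C[i])
            Cs[i + 1] = Cs[i] + dt * alpha * (A / As) * (C[i] - Cs[i])

        print(f"channel/storage after 1 h: {C[-1]:.2f} / {Cs[-1]:.2f} mg/L")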

  10. Sensitivity analysis of a forest gap model concerning current and future climate variability

    Energy Technology Data Exchange (ETDEWEB)

    Lasch, P.; Suckow, F.; Buerger, G.; Lindner, M.

    1998-07-01

    The ability of a forest gap model to simulate the effects of climate variability and extreme events depends on the temporal resolution of the weather data that are used and the internal processing of these data for growth, regeneration and mortality. The climatological driving forces of most current gap models are based on monthly means of weather data and their standard deviations, and long-term monthly means are used for calculating yearly aggregated response functions for ecological processes. In this study, the results of sensitivity analyses using the forest gap model FORSKA-P and involving climate data of different resolutions, from long-term monthly means to daily time series, including extreme events, are presented for the current climate and for a climate change scenario. The model was applied at two sites with differing soil conditions in the federal state of Brandenburg, Germany. The sensitivity of the model concerning climate variations and different climate input resolutions is analysed and evaluated. The climate variability used for the model investigations affected the behaviour of the model substantially. (orig.)

  11. Climate Sensitivity and Solar Cycle Response in Climate Models

    Science.gov (United States)

    Liang, M.; Lin, L.; Tung, K. K.; Yung, Y. L.

    2011-12-01

    Climate sensitivity, broadly defined, is a measure of the response of the climate system to changes in external forcings, such as anthropogenic greenhouse emissions and solar radiation, including climate feedback processes. General circulation models provide a means to quantitatively incorporate various feedback processes, such as water-vapor, cloud and albedo feedbacks. Less attention has so far been devoted to the role of the oceans in significantly affecting these processes and hence the modelled transient climate sensitivity. Here we show that oceanic mixing plays an important role in modifying the multi-decadal to centennial oscillations of the sea surface temperature, which in turn affect the derived climate sensitivity at various phases of the oscillations. The eleven-year solar cycle forcing is used to calibrate the response of the climate system. The GISS-EH coupled atmosphere-ocean model was run twice in coupled mode for more than 2000 model years, each time with a different value for the ocean eddy mixing parameter. In both runs, there is a prominent low-frequency oscillation with a period of 300-500 years, and depending on the phase of this oscillation, the derived climate gain factor varies by a factor of 2. The run with a value of the ocean eddy mixing parameter half that used in the IPCC AR4 study has the more realistic low-frequency variability in SST and in the derived response to the known solar-cycle forcing.

  12. Sensitivity Analysis in a Complex Marine Ecological Model

    Directory of Open Access Journals (Sweden)

    Marcos D. Mateus

    2015-05-01

    Full Text Available Sensitivity analysis (SA) has long been recognized as part of best practices to assess whether any particular model can be suitable to inform decisions, despite its uncertainties. SA is a commonly used approach for identifying important parameters that dominate model behavior. As such, SA addresses two elementary questions in the modeling exercise, namely, how sensitive the model is to changes in individual parameter values, and which parameters or associated processes have more influence on the results. In this paper we report on a local SA performed on a complex marine biogeochemical model that simulates oxygen, organic matter and nutrient cycles (N, P and Si) in the water column, as well as the dynamics of biological groups such as producers, consumers and decomposers. SA was performed using a "one at a time" parameter perturbation method, and a color-code matrix was developed for result visualization. The outcome of this study was the identification of key parameters influencing model performance, a particularly helpful insight for the subsequent calibration exercise. Also, the color-code matrix methodology proved to be effective for a clear identification of the parameters with most impact on selected variables of the model.
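
    The "one at a time" screen lends itself to a compact sketch. Below, each parameter of a toy stand-in model is perturbed by +10% and the relative change of two outputs is recorded, yielding the kind of parameter-by-variable matrix the paper visualises with a colour code; the model, parameter names and values are all illustrative.

        # One-at-a-time perturbation matrix on a toy stand-in model.
        import numpy as np

        params = {"growth_rate": 1.2, "mortality": 0.3, "half_sat": 0.5}

        def toy_model(p):
            # Stand-in for the biogeochemical model: two scalar outputs.
            biomass = p["growth_rate"] / (p["mortality"] + p["half_sat"])
            oxygen = 8.0 - 0.5 * biomass
            return np.array([biomass, oxygen])

        base = toy_model(params)
        for name in params:
            pert = dict(params, **{name: params[name] * 1.1})  # +10% perturbation
            row = (toy_model(pert) - base) / base              # relative response
            print(f"{name:12s} d(biomass, oxygen)/base = {np.round(row, 3)}")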

  13. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    Science.gov (United States)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
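
    As a concrete illustration of the variance-based approach, the sketch below estimates Sobol' first-order indices with the Saltelli sampling trick, hand-rolled in numpy to stay self-contained. The three-input toy function merely stands in for LISFLOOD-FP; any reading of its inputs as boundary conditions, parameters or DEM choice is purely illustrative.

        # First-order Sobol' indices via the Saltelli estimator, numpy only.
        import numpy as np

        rng = np.random.default_rng(3)
        n, k = 20_000, 3
        A = rng.uniform(size=(n, k))
        B = rng.uniform(size=(n, k))

        def model(X):
            # Toy response with deliberately unequal input importance.
            return 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] ** 2

        yA, yB = model(A), model(B)
        var = np.var(np.concatenate([yA, yB]))
        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]  # column i taken from B, the rest from A
            Si = np.mean(yB * (model(ABi) - yA)) / var  # Saltelli (2010) estimator
            print(f"S{i + 1} = {Si:.3f}")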

  14. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) the excess infiltration process is most influential when focusing on the lower flows; (4) a simpler routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.

  15. Defining the true sensitivity of culture for the diagnosis of melioidosis using Bayesian latent class models.

    Directory of Open Access Journals (Sweden)

    Direk Limmathurotsakul

    Full Text Available BACKGROUND: Culture remains the diagnostic gold standard for many bacterial infections, and the method against which other tests are often evaluated. Specificity of culture is 100% if the pathogenic organism is not found in healthy subjects, but the sensitivity of culture is more difficult to determine and may be low. Here, we apply Bayesian latent class models (LCMs) to data from patients with a single Gram-negative bacterial infection and define the true sensitivity of culture together with the impact of misclassification by culture on the reported accuracy of alternative diagnostic tests. METHODS/PRINCIPAL FINDINGS: Data from published studies describing the application of five diagnostic tests (culture and four serological tests) to a patient cohort with suspected melioidosis were re-analysed using several Bayesian LCMs. Sensitivities, specificities, and positive and negative predictive values (PPVs and NPVs) were calculated. Of 320 patients with suspected melioidosis, 119 (37%) had culture confirmed melioidosis. Using the final model (Bayesian LCM with conditional dependence between serological tests), the sensitivity of culture was estimated to be 60.2%. Prediction accuracy of the final model was assessed using a classification tool to grade patients according to the likelihood of melioidosis, which indicated that an estimated disease prevalence of 61.6% was credible. Estimates of sensitivities, specificities, PPVs and NPVs of the four serological tests were significantly different from previously published values in which culture was used as the gold standard. CONCLUSIONS/SIGNIFICANCE: Culture has low sensitivity and low NPV for the diagnosis of melioidosis and is an imperfect gold standard against which to evaluate alternative tests. Models should be used to support the evaluation of diagnostic tests with an imperfect gold standard. It is likely that the poor sensitivity/specificity of culture is not specific for melioidosis, but rather a generic
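
    To see why an imperfect gold standard matters, a two-line application of Bayes' rule is enough. The sketch below combines the culture sensitivity estimated above (60.2%) with an assumed perfect specificity; the prevalence values are illustrative.

        # P(disease | negative culture) under an imperfect gold standard.
        def posterior_negative(prev, sens, spec):
            """Posterior disease probability after a negative test (Bayes' rule)."""
            p_neg_dis = (1 - sens) * prev
            p_neg_nodis = spec * (1 - prev)
            return p_neg_dis / (p_neg_dis + p_neg_nodis)

        for prev in (0.1, 0.4, 0.6):
            p = posterior_negative(prev, sens=0.602, spec=1.0)
            print(f"prevalence {prev:.0%}: P(melioidosis | culture negative) = {p:.2f}")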

  16. A fluid dynamics multidimensional model of biofilm growth: stability, influence of environment and sensitivity.

    Science.gov (United States)

    Clarelli, F; Di Russo, C; Natalini, R; Ribot, M

    2016-12-01

    In this article, we study in detail the fluid dynamics system proposed in Clarelli et al. (2013, J. Math. Biol., 66, 1387-1408) to model the formation of cyanobacteria biofilms. After analysing the linear stability of the unique non-trivial equilibrium of the system, we introduce in the model the influence of light and temperature, which are two important factors for the development of a cyanobacteria biofilm. Since the values of the coefficients we use for our simulations are estimated through information found in the literature, some sensitivity and robustness analyses on these parameters are performed. All these elements enable us to control and to validate the model we have already derived and to present some numerical simulations in the 2D and the 3D cases.

  17. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

    Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian updating scheme updates an ensemble of sensitivity measures. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of the LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and their updating when a new piece of evidence arrives, by a Bayesian scheme based on the Bradley–Terry model for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).

  18. Establishment of a sensitized canine model for kidney transplantation

    Institute of Scientific and Technical Information of China (English)

    XIE Sen; XIA Sui-sheng; TANG Li-gong; CHENG Jun; CHEN Zhi-shui; ZHENG Shan-gen

    2005-01-01

    Objective: To establish a sensitized canine model for kidney transplantation. Methods: Twelve male dogs were evenly divided into donor and recipient groups. A small number of donor canine lymphocytes was infused into different anatomic locations of a paired canine recipient each time, and this was repeated weekly. Specific immune sensitization was monitored by means of Complement Dependent Cytotoxicity (CDC) and Mixed Lymphocyte Culture (MLC) tests. When the CDC test converted to positive and the MLC test showed a significant proliferation of reactive lymphocytes in the canine recipients, the right kidneys of the paired dogs were excised and transplanted to each other concurrently. Injury to renal allograft function was determined on schedule by ECT dynamic kidney photography and pathologic investigation. Results: The CDC test usually converted to positive, and reactive lymphocytes of the canine recipients were observed to proliferate significantly in the MLC test, after 3 to 4 donor lymphocyte infusions. Renal allograft function deteriorated 4 d post-operatively in 4 of 6 canine recipients, in contrast to none in the control dogs. Pathologic changes suggested antibody-mediated (delayed) or acute rejection in 3 excised renal allografts of sensitized dogs. Seven days after operation, all sensitized dogs had lost graft function, and pathology showed that the renal allografts had been severely rejected; 2 of 3 dogs in the control group were also acutely rejected. Conclusion: A convenient method of repeated canine donor lymphocyte stimulation may induce specific immune sensitization in canine recipients. Renal allografts in sensitized dogs are rejected earlier and result in a more deteriorated graft function.

  19. Modelling survival: exposure pattern, species sensitivity and uncertainty.

    Science.gov (United States)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G

    2016-07-06

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
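
    The core GUTS bookkeeping, damage kinetics plus hazard integration, can be sketched compactly. Below is a hedged GUTS-SD style toy assuming first-order scaled damage dynamics and a linear-above-threshold hazard; the rate constants and the exposure pulse are illustrative, not calibrated values.

        # GUTS-SD style survival under a pulsed exposure (illustrative values).
        import numpy as np

        dt, t_end = 0.01, 10.0      # days
        kd, z, kk = 0.8, 2.0, 0.5   # dominant rate, threshold, killing rate
        t = np.arange(0.0, t_end, dt)
        conc = np.where((t > 1) & (t < 3), 5.0, 0.0)  # single exposure pulse

        D = np.zeros_like(t)  # scaled damage
        H = np.zeros_like(t)  # cumulative hazard
        for i in range(len(t) - 1):
            D[i + 1] = D[i] + dt * kd * (conc[i] - D[i])
            H[i + 1] = H[i] + dt * kk * max(D[i] - z, 0.0)

        print(f"predicted survival at day {t_end:.0f}: {np.exp(-H[-1]):.2f}")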

  1. A modified Lee-Carter model for analysing short-base-period data.

    Science.gov (United States)

    Zhao, Bojuan Barbara

    2012-03-01

    This paper introduces a new modified Lee-Carter model for analysing short-base-period mortality data, for which the original Lee-Carter model produces severely fluctuating predicted age-specific mortality. Approximating the unknown parameters in the modified model by linearized cubic splines and other additive functions, the model can be simplified into a logistic regression when fitted to binomial data. The expected death rate estimated from the modified model is smooth, not only over ages but also over years. The analysis of mortality data in China (2000-08) demonstrates the advantages of the new model over existing models.
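
    For contrast with the modified model, the baseline Lee-Carter fit takes only a few lines: centre log-mortality by age and decompose the residual by SVD into age and period effects. The sketch below uses synthetic data for a short base period; the identifiability constraints follow the usual convention and all values are illustrative.

        # Baseline Lee-Carter fit, log m(x,t) = a_x + b_x * k_t, via SVD.
        import numpy as np

        rng = np.random.default_rng(4)
        ages, years = 20, 9  # e.g. a short 2000-08 base period
        true_a = np.linspace(-6, -2, ages)
        true_b = np.full(ages, 1.0 / ages)
        true_k = np.linspace(3, -3, years)
        logm = true_a[:, None] + np.outer(true_b, true_k) \
            + rng.normal(0, 0.02, (ages, years))

        a = logm.mean(axis=1)  # a_x: age profile
        U, s, Vt = np.linalg.svd(logm - a[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()        # usual constraints: sum(b) = 1
        k = s[0] * Vt[0] * U[:, 0].sum()   # and sum(k) ~ 0

        print("recovered k_t:", np.round(k, 2))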

  2. Forecasting hypoxia in the Chesapeake Bay and Gulf of Mexico: model accuracy, precision, and sensitivity to ecosystem change

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Mary Anne; Scavia, Donald, E-mail: mevans@umich.edu, E-mail: scavia@umich.edu [School of Natural Resources and Environment, University of Michigan, Ann Arbor, MI 48109 (United States)

    2011-01-15

    Increasing use of ecological models for management and policy requires robust evaluation of model precision, accuracy, and sensitivity to ecosystem change. We conducted such an evaluation of hypoxia models for the northern Gulf of Mexico and Chesapeake Bay using hindcasts of historical data, comparing several approaches to model calibration. For both systems we find that model sensitivity and precision can be optimized, and model accuracy maintained within reasonable bounds, by calibrating the model to relatively short, recent 3-year datasets. Model accuracy was higher for Chesapeake Bay than for the Gulf of Mexico, potentially indicating the greater importance of unmodeled processes in the latter system. Retrospective analyses demonstrate both directional and variable changes in sensitivity of hypoxia to nutrient loads.

  3. Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Wainwright, Haruko Murakami [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Univ. of California, Berkeley, CA (United States); Finsterle, Stefan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Univ. of California, Berkeley, CA (United States)

    2016-07-15

    This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.
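
    As a reminder of what the Morris OAT method computes, here is a compact, library-free sketch of elementary-effects screening. It reproduces the usual statistics (mu*, sigma) but none of iTOUGH2's input syntax; the trajectories and the test function are illustrative.

        # Morris elementary effects (mu*, sigma) on a toy function.
        import numpy as np

        rng = np.random.default_rng(5)
        k, r, delta = 3, 50, 0.25  # inputs, trajectories, step size

        def model(x):
            return 2.0 * x[0] + x[1] ** 2 + 0.05 * x[2]

        effects = [[] for _ in range(k)]
        for _ in range(r):
            x = rng.uniform(0, 1 - delta, size=k)
            y0 = model(x)
            for i in rng.permutation(k):  # one-at-a-time steps along a path
                x[i] += delta
                y1 = model(x)
                effects[i].append((y1 - y0) / delta)
                y0 = y1

        for i, ee in enumerate(effects):
            ee = np.asarray(ee)
            print(f"x{i + 1}: mu* = {np.abs(ee).mean():.2f}, sigma = {ee.std():.2f}")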

  4. Phenotypic and genetic analyses of the Varroa Sensitive Hygienic trait in Russian Honey Bee (Hymenoptera: Apidae) colonies

    Science.gov (United States)

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene and the more specific Varroa Sensitive Hygiene (VSH) provide resistance toward the Varroa mite in a number of stocks. In this study, Russian (RHB) and Italian honey bees were assessed for the VSH trait. Two...

  5. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test. The measurements obtained from the plaster models using the caliper and from the digital models using the O3d software were identical.

  6. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    Science.gov (United States)

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  7. Sensitivity Analysis of the ALMANAC Model's Input Variables

    Institute of Scientific and Technical Information of China (English)

    XIE Yun; James R. Kiniry; Jimmy R. Williams; CHEN You-min; LIN Er-da

    2002-01-01

    Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, and then solar radiation for both maize and sorghum, especially for the dryland condition. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables for maize was larger than for sorghum except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.

  8. A simple method for modeling dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Son, Min-Kyu [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of); Seo, Hyunwoong [Graduate School of Information Science and Electrical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan); Center of Plasma Nano-interface Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan); Lee, Kyoung-Jun; Kim, Soo-Kyoung; Kim, Byung-Man; Park, Songyi; Prabakar, Kandasamy [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of); Kim, Hee-Je, E-mail: heeje@pusan.ac.kr [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of)

    2014-03-03

    Dye-sensitized solar cells (DSCs) are photoelectrochemical photovoltaics based on complicated electrochemical reactions. The modeling and simulation of DSCs are powerful tools for evaluating the performance of DSCs according to a range of factors. Many theoretical methods are used to simulate DSCs. On the other hand, these methods are quite complicated because they are based on a difficult mathematical formula. Therefore, this paper suggests a simple and accurate method for the modeling and simulation of DSCs without complications. The suggested simulation method is based on extracting the coefficient from representative cells and a simple interpolation method. This simulation method was implemented using the power electronic simulation program and C-programming language. The performance of DSCs according to the TiO2 thickness was simulated, and the simulated results were compared with the experimental data to confirm the accuracy of this simulation method. The suggested modeling strategy derived the accurate current–voltage characteristics of the DSCs according to the TiO2 thickness with good agreement between the simulation and the experimental results. - Highlights: • Simple modeling and simulation method for dye-sensitized solar cells (DSCs). • Modeling done using a power electronic simulation program and C-programming language. • The performance of DSC according to the TiO2 thickness was simulated. • Simulation and experimental performance of DSCs were compared. • This method is suitable for accurate simulation of DSCs.

  9. Quantifying sensitivity to droughts – an experimental modeling approach

    Directory of Open Access Journals (Sweden)

    M. Staudinger

    2014-07-01

    Full Text Available Meteorological droughts like those in summer 2003 or spring 2011 in Europe are expected to become more frequent in the future. Although the spatial extent of these drought events was large, not all regions were affected in the same way. Many catchments reacted strongly to the meteorological droughts showing low levels of streamflow and groundwater, while others hardly reacted. The extent of the hydrological drought for specific catchments was also different between these two historical events due to different initial conditions and drought propagation processes. This leads to the important question of how to detect and quantify the sensitivity of a catchment to meteorological droughts. To assess this question we designed hydrological model experiments using a conceptual rainfall–runoff model. Two drought scenarios were constructed by selecting precipitation and temperature observations based on certain criteria: one scenario was a modest but constant progression of drying based on sorting the years of observations according to annual precipitation amounts. The other scenario was a more extreme progression of drying based on selecting months from different years, forming a year with the wettest months through to a year with the driest months. Both scenarios retained the typical intra-annual seasonality for the region. The sensitivity of 24 Swiss catchments to these scenarios was evaluated by analyzing the simulated discharge time series and modeled storages. Mean catchment elevation, slope and size were found to be the main controls on the sensitivity of catchment discharge to precipitation. Generally, catchments at higher elevation and with steeper slopes seemed to be less sensitive to meteorological droughts than catchments at lower elevations with less steep slopes.
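
    The first scenario construction can be illustrated directly: reorder observed years from wettest to driest by annual precipitation total, producing a progressively drying forcing while keeping each year's internal structure. The data below are synthetic stand-ins, not the Swiss records.

        # Progressive-drying scenario by sorting years on annual precipitation.
        import numpy as np

        rng = np.random.default_rng(6)
        n_years = 15
        # Synthetic daily precipitation, one row per observed year.
        precip = rng.gamma(shape=0.4, scale=6.0, size=(n_years, 365))

        annual = precip.sum(axis=1)
        order = np.argsort(annual)[::-1]  # wettest year first, driest last
        drying_scenario = precip[order]   # forcing series fed to the model

        print("annual totals along scenario:", np.round(annual[order]).astype(int))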

  10. A sensitivity analysis of the WIPP disposal room model: Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Labreche, D.A.; Beikmann, M.A. [RE/SPEC, Inc., Albuquerque, NM (United States); Osnes, J.D. [RE/SPEC, Inc., Rapid City, SD (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States)

    1995-07-01

    The WIPP Disposal Room Model (DRM) is a numerical model with three major components -- constitutive models of TRU waste, crushed salt backfill, and intact halite -- and several secondary components, including air gap elements, slidelines, and assumptions on symmetry and geometry. A sensitivity analysis of the Disposal Room Model was initiated on two of the three major components (waste and backfill models) and on several secondary components as a group. The immediate goal of this component sensitivity analysis (Phase I) was to sort (rank) model parameters in terms of their relative importance to model response so that a Monte Carlo analysis on a reduced set of DRM parameters could be performed under Phase II. The goal of the Phase II analysis will be to develop a probabilistic definition of a disposal room porosity surface (porosity, gas volume, time) that could be used in WIPP Performance Assessment analyses. This report documents a literature survey which quantifies the relative importance of the secondary room components to room closure, a differential analysis of the creep consolidation model and definition of a follow-up Monte Carlo analysis of the model, and an analysis and refitting of the waste component data on which a volumetric plasticity model of TRU drum waste is based. A summary, evaluation of progress, and recommendations for future work conclude the report.

  11. Analyses and simulations in income frame regulation model for the network sector from 2007; Analyser og simuleringer i inntektsrammereguleringsmodellen for nettbransjen fra 2007

    Energy Technology Data Exchange (ETDEWEB)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-07-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the efficiency analyses carried out by a so-called Data Envelopment Analysis (DEA) model. It is argued that an age bias may exist in the data set, and that this should and can be corrected for in the efficiency analyses; the proposed correction introduces an age parameter into the data set. Analyses have also been made of how the calibration effects in the regulation model affect the sector's total income frame as well as each network company's income frame. It is argued that the calibration, in the way it is presented, does not work according to its intention and should be adjusted in order to provide the sector with the reference rate of return.

  12. Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet

    Directory of Open Access Journals (Sweden)

    Mariona Camps-Bossacoma

    2017-01-01

    Full Text Available Increasing evidence suggests a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or a 10% cocoa diet. Faecal microbiota was analysed through a metagenomic study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in the intestinal microbiota, but significant modifications appeared in those rats fed a cocoa diet. Decreased proportions of bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in the microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.

  13. Considerations for parameter optimization and sensitivity in climate models.

    Science.gov (United States)

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models.
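
    The metamodel idea can be shown in miniature: fit a low-order polynomial to a few expensive model evaluations, then optimize the cheap surrogate. In the sketch below the "climate model" is a stand-in scalar function and the parameter and objective are illustrative; the paper's actual fits are quadratic in the model fields and fourth-order in the objective function.

        # Quadratic metamodel of an expensive objective in one parameter.
        import numpy as np

        def expensive_model(theta):
            # Pretend GCM: rms error of some field as a function of one parameter.
            return 1.0 + (theta - 0.6) ** 2 + 0.05 * np.sin(8 * theta)

        theta_samples = np.linspace(0.0, 1.0, 5)  # few affordable runs
        errors = np.array([expensive_model(t) for t in theta_samples])

        a2, a1, a0 = np.polyfit(theta_samples, errors, deg=2)  # surrogate
        theta_opt = -a1 / (2 * a2)  # vertex of the fitted parabola
        print(f"surrogate optimum at theta = {theta_opt:.3f}")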

  14. Semantic-Sensitive Web Information Retrieval Model for HTML Documents

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    With the advent of the Internet, a new era of digital information exchange has begun. Currently, the Internet encompasses more than five billion online sites and this number is exponentially increasing every day. Fundamentally, Information Retrieval (IR) is the science and practice of storing documents and retrieving information from within these documents. Mathematically, IR systems are at the core based on a feature vector model coupled with a term weighting scheme that weights terms in a document according to their significance with respect to the context in which they appear. Practically, the Vector Space Model (VSM), Term Frequency (TF), and Inverse Document Frequency (IDF) are among other long-established techniques employed in mainstream IR systems. However, present IR models only target generic-type text documents, in that they do not consider specific formats of files such as HTML web documents. This paper proposes a new semantic-sensitive web information retrieval model for HTML documents. It consists of a...
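
    The vector-space backbone the model extends is easy to sketch. Below is a tiny TF-IDF plus cosine-similarity example in plain Python; the documents are illustrative and the paper's HTML-specific semantic weighting is not reproduced.

        # TF-IDF weighting and cosine similarity in a vector space model.
        import math
        from collections import Counter

        docs = ["plume height model", "height of volcanic plume", "solar cell model"]
        tokenized = [d.split() for d in docs]
        vocab = sorted({w for d in tokenized for w in d})
        N = len(docs)
        idf = {w: math.log(N / sum(w in d for d in tokenized)) for w in vocab}

        def tfidf(doc):
            tf = Counter(doc)
            return [tf[w] / len(doc) * idf[w] for w in vocab]

        def cosine(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
            return dot / norm if norm else 0.0

        vecs = [tfidf(d) for d in tokenized]
        print(f"sim(doc0, doc1) = {cosine(vecs[0], vecs[1]):.2f}")
        print(f"sim(doc0, doc2) = {cosine(vecs[0], vecs[2]):.2f}")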

  15. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.

  16. Knee model sensitivity to cruciate ligaments parameters: a stability simulation study for a living subject.

    Science.gov (United States)

    Bertozzi, Luigi; Stagni, Rita; Fantozzi, Silvia; Cappello, Angelo

    2007-01-01

    If the biomechanical function of the different anatomical sub-structures of the knee joint is to be estimated under physiological conditions, the only possible way is a modelling approach. Subject-specific geometries and kinematic data, acquired from the same living subject, were the foundations of the 3D quasi-static knee model developed. Each cruciate ligament was modelled by means of 25 elastic springs, paying attention to the anatomical twisting of the fibres. The sensitivity of the model to the cross-sectional area was assessed during anterior/posterior tibial translations, and the sensitivity to all the cruciate ligament parameters was assessed during internal/external rotations. The model reproduced very well the mechanical behaviour reported in the literature during anterior/posterior translations, in particular when considering 30% of the mean insertional area. During the internal/external tibial rotations, similar behaviour of the axial torques was obtained in the three sensitivity analyses. The overlapping of the ligaments was assessed at about 25 degrees of internal axial rotation. The presented model combined a good level of accuracy with a low computational cost, and it could provide an in vivo estimation of the role of the cruciate ligaments during the execution of daily living activities.
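
    The spring-bundle representation can be sketched in a few lines: one ligament as 25 tension-only linear springs between scattered insertion points. Geometry, stiffness and rest lengths below are hypothetical, not the subject-specific values of the study.

        # A cruciate ligament as a bundle of tension-only linear springs.
        import numpy as np

        rng = np.random.default_rng(7)
        n_fibres, k = 25, 120.0  # fibres per ligament, stiffness (N/mm)
        femur_pts = rng.normal([0.0, 0.0, 30.0], 2.0, (n_fibres, 3))  # insertions (mm)
        tibia_pts = rng.normal([5.0, 0.0, 0.0], 2.0, (n_fibres, 3))
        rest_len = np.linalg.norm(femur_pts - tibia_pts, axis=1) * 0.95

        def ligament_force(tibia_offset):
            d = femur_pts - (tibia_pts + tibia_offset)
            length = np.linalg.norm(d, axis=1)
            stretch = np.maximum(length - rest_len, 0.0)  # fibres carry tension only
            return (k * stretch * d.T / length).sum(axis=1)

        print("net force at 5 mm tibial draw:", np.round(ligament_force([0.0, 5.0, 0.0]), 1))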

  17. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex and plant modeling can help analyse their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as well as the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species, and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  18. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and the concepts of homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
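
    To make the toolbox concrete, here is a minimal sketch of fitting a simple linear regression by ordinary least squares and crudely inspecting the residuals for heteroskedasticity; the data and variable names are invented for illustration.

        import numpy as np

        # toy macroeconomic data: GDP growth (y) vs. investment rate (x)
        rng = np.random.default_rng(1)
        x = rng.uniform(10, 30, 100)
        y = 0.8 + 0.12 * x + rng.normal(0, 0.5, 100)

        # OLS estimates of intercept a and slope b in y = a + b*x + e
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta

        # crude heteroskedasticity check: does residual variance grow with x?
        lo, hi = resid[x < np.median(x)], resid[x >= np.median(x)]
        print("a, b =", beta, "variance ratio =", hi.var() / lo.var())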

  19. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J. [all: Pacific Northwest National Laboratory, Atmospheric Science and Global Change Division, Richland, Washington, USA]

    2016-07-10

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrative examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity and computational efficiency of the constrained simulations depend strongly on three factors: the detailed implementation of nudging, the mechanism through which the perturbed parameter affects precipitation and clouds, and the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6-hour relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while a 1-year free-running simulation can satisfactorily capture the annual mean precipitation sensitivity in terms of both global average and geographical distribution. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by noise associated with internal variability, while nudging winds effectively reduces the noise and reasonably reproduces the response of precipitation and cloud forcing to the parameter perturbation. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
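
    For readers unfamiliar with nudging, the sketch below shows the basic Newtonian relaxation update used to constrain a model state toward analysis data; the 6-hour relaxation time scale echoes the paper, while the field values and the forward-Euler step are illustrative.

        import numpy as np

        def nudge(state, analysis, dt, tau):
            # Newtonian relaxation: add a tendency that pulls the model state
            # toward the analysis with relaxation time scale tau
            return state + dt * (analysis - state) / tau

        # toy example: nudge a temperature field with tau = 6 h, dt = 30 min
        T_model = np.array([285.0, 290.0, 295.0])
        T_analysis = np.array([284.0, 291.0, 294.5])
        print(nudge(T_model, T_analysis, dt=1800.0, tau=6 * 3600.0))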

  20. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
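
    A minimal sketch of the DELSA idea: compute derivative-based first-order sensitivity measures at many points sampled across the parameter space, so the distribution of local sensitivities can be examined. The model function, parameter ranges and prior variances below are placeholders, not the hydrologic models of the study.

        import numpy as np

        def model(theta):
            # placeholder for a hydrologic model's scalar output metric
            k, s = theta
            return np.exp(-k) * s + 0.1 * k * s ** 2

        def delsa(f, samples, prior_var, h=1e-6):
            # first-order DELSA measure at each sample point:
            # S_i = (dy/dtheta_i)^2 * var_i / sum_j (dy/dtheta_j)^2 * var_j
            out = []
            for theta in samples:
                g = np.empty(len(theta))
                for i in range(len(theta)):       # one-sided finite differences
                    d = theta.copy()
                    d[i] += h
                    g[i] = (f(d) - f(theta)) / h
                contrib = g ** 2 * prior_var
                out.append(contrib / contrib.sum())
            return np.array(out)

        rng = np.random.default_rng(2)
        samples = rng.uniform([0.1, 0.1], [2.0, 1.0], size=(100, 2))
        S = delsa(model, samples, prior_var=np.array([0.25, 0.05]))
        print("median first-order sensitivities:", np.median(S, axis=0))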

  1. A Fast, Accurate and Sensitive GC-FID Method for the Analyses of Glycols in Water and Urine

    Science.gov (United States)

    Kuo, C. Mike; Alverson, James T.; Gazda, Daniel B.

    2017-01-01

    Glycols, specifically ethylene glycol and 1,2-propanediol, are among the major organic compounds found in the humidity condensate samples collected on the International Space Station. The current analytical method for glycols is a GC/MS method with direct sample injection. This method is simple and fast, but it is not very sensitive: reporting limits for ethylene glycol and 1,2-propanediol are only 1 ppm. A much more sensitive GC/FID method was developed, in which glycols are derivatized with benzoyl chloride for 10 minutes before being extracted with hexane. Using 1,3-propanediol as an internal standard, the detection limit of the GC/FID method was determined to be 50 ppb, and the analysis takes only 7 minutes. Data from the GC/MS and the new GC/FID methods show excellent agreement with each other. Factors affecting the sensitivity, including sample volume, NaOH concentration and volume, volume of benzoyl chloride, and reaction time and temperature, were investigated. Interferences during derivatization and possible methods to reduce them were also investigated.

  2. Sensitivity in forward modeled hyperspectral reflectance due to phytoplankton groups

    Science.gov (United States)

    Manzo, Ciro; Bassani, Cristiana; Pinardi, Monica; Giardino, Claudia; Bresciani, Mariano

    2016-04-01

    Phytoplankton is an integral part of the ecosystem, affecting trophic dynamics, nutrient cycling, habitat condition and fisheries resources. The types of phytoplankton and their concentrations are used to describe the status of water bodies and the processes within them. This study investigates bio-optical modelling of phytoplankton functional types (PFTs) in terms of pigment composition, demonstrating the capability of remote sensing to recognize freshwater phytoplankton. In particular, a sensitivity analysis of simulated hyperspectral water reflectance (with the band settings of HICO, APEX, EnMAP, PRISMA and Sentinel-3) of the productive eutrophic waters of the Mantua lakes (Italy) environment is presented. The bio-optical model adopted for simulating the hyperspectral water reflectance takes into account the dependency of reflectance on the geometric conditions of the light field, on the inherent optical properties (backscattering and absorption coefficients) and on the concentrations of water quality parameters (WQPs). The model works in the 400-750 nm wavelength range, while the model parametrization is based on a comprehensive dataset of WQP concentrations and specific inherent optical properties of the study area, collected in field surveys carried out from May to September of 2011 and 2014. The following phytoplankton groups, with their specific absorption coefficients a*Φi(λ), were used during the simulation: Chlorophyta, Cyanobacteria with phycocyanin, Cyanobacteria and Cryptophytes with phycoerythrin, Diatoms with carotenoids, and mixed phytoplankton. The phytoplankton absorption coefficient aΦ(λ) is modelled by multiplying the weighted sum of the PFT contributions, Σi pi a*Φi(λ), by the chlorophyll-a concentration (Chl-a). To highlight the variability of water reflectance due to variation in phytoplankton pigments, the sensitivity analysis was performed by keeping the WQPs constant (i.e. Chl-a = 80 mg/l, total suspended matter = 12.58 g/l and yellow substances = 0.27 m-1). The sensitivity analysis was...
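
    The phytoplankton absorption term described above is simply a Chl-a-scaled weighted sum of group-specific absorption spectra; a toy version, with made-up Gaussian spectra and weights rather than the measured coefficients of the study, might look like:

        import numpy as np

        wl = np.arange(400, 751, 10)          # wavelengths, nm

        # made-up specific absorption spectra a*_i(lambda) for three groups
        a_star = {
            "chlorophyta":   0.020 * np.exp(-((wl - 440) / 40.0) ** 2),
            "cyanobacteria": 0.015 * np.exp(-((wl - 620) / 30.0) ** 2),
            "diatoms":       0.018 * np.exp(-((wl - 470) / 50.0) ** 2),
        }

        def phytoplankton_absorption(chl_a, weights):
            # a_phi(lambda) = Chl-a * sum_i p_i * a*_i(lambda), with sum_i p_i = 1
            return chl_a * sum(p * a_star[g] for g, p in weights.items())

        a_phi = phytoplankton_absorption(80.0, {"chlorophyta": 0.5,
                                                "cyanobacteria": 0.3,
                                                "diatoms": 0.2})
        print("peak absorption:", a_phi.max())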

  3. A Workflow for Global Sensitivity Analysis of PBPK Models

    Directory of Open Access Journals (Sweden)

    Kevin McNally

    2011-06-01

    Full Text Available Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of PBPK models provides an ideal framework into which disparate in vitro and in vivo data can be integrated, translating information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we define a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart with a cumulative sum line (a Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
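
    A Lowry plot is essentially a ranked bar chart of each parameter's main and interaction contributions with a cumulative-sum line; given main-effect and total-effect indices from any variance-based method, assembling the plot data is straightforward. The parameter names and index values below are invented for illustration.

        import numpy as np

        params = ["Vmax", "Km", "organ_mass", "body_mass", "blood_flow"]
        main  = np.array([0.42, 0.25, 0.12, 0.06, 0.03])   # first-order indices
        total = np.array([0.55, 0.33, 0.18, 0.09, 0.05])   # total-effect indices

        order = np.argsort(main)[::-1]                     # rank by main effect
        interaction = total - main                         # interaction share
        cum_low = np.cumsum(main[order])                   # lower bound of the line
        cum_high = np.minimum(cum_low + np.cumsum(interaction[order]), 1.0)
        for rank, i in enumerate(order):
            print(f"{params[i]:>11}: main={main[i]:.2f} "
                  f"interaction={interaction[i]:.2f} "
                  f"cumulative=[{cum_low[rank]:.2f}, {cum_high[rank]:.2f}]")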

  4. Temperature sensitivity of a numerical pollen forecast model

    Science.gov (United States)

    Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone

    2016-04-01

    Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warning before an increase in the atmospheric pollen concentration provides substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means of supporting the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical to the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the onset of flowering and during flowering. Phenological models are sensitive to a bias in the input temperature: a mean bias of -1°C can shift the entry date of a phenological phase by about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
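
    The temperature sensitivity described here can be reproduced with a toy thermal-time model: accumulate degree-days above a base temperature, trigger flowering at a fixed threshold, then repeat with a biased temperature series. The base temperature, threshold and synthetic warming trend are all invented.

        import numpy as np

        def flowering_day(tmean, base=5.0, threshold=150.0):
            # day on which degree-days accumulated above `base` reach `threshold`
            gdd = np.cumsum(np.maximum(tmean - base, 0.0))
            return int(np.argmax(gdd >= threshold))

        days = np.arange(120)
        tmean = 2.0 + 0.15 * days              # synthetic spring warming trend
        print("onset:", flowering_day(tmean))                  # unbiased forcing
        print("onset with -1 C bias:", flowering_day(tmean - 1.0))  # ~1 week later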

  5. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of the parameters used in a simplified fire model for estimating the temperature of the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed ... are the most significant in each case. We apply the Sobol method, a quantitative method that gives the percentage of the total output variance that each parameter accounts for. The most important parameter is found to be the energy release rate, which explains 92% of the uncertainty in the calculated results for the period before thermal penetration (tp) has occurred. The analysis is also done for all combinations of two parameters in order to find the combination with the largest effect. The Sobol total for pairs had the highest value for the combination of energy release rate and area of opening...
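
    For reference, first-order and total-order Sobol indices can be estimated with the standard Saltelli pick-and-freeze scheme; the sketch below applies it to a stand-in upper-layer temperature correlation, not the paper's actual fire model, and all bounds are invented.

        import numpy as np

        def sobol_indices(f, bounds, n=2 ** 12, seed=0):
            # Saltelli/Jansen pick-and-freeze estimators for first-order (S)
            # and total-order (ST) indices over independent uniform inputs
            rng = np.random.default_rng(seed)
            d = len(bounds)
            lo, hi = np.array(bounds).T
            A = rng.uniform(lo, hi, (n, d))
            B = rng.uniform(lo, hi, (n, d))
            fA, fB = f(A), f(B)
            var = np.var(np.concatenate([fA, fB]))
            S, ST = np.empty(d), np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]              # A with column i taken from B
                fABi = f(ABi)
                S[i] = np.mean(fB * (fABi - fA)) / var
                ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var
            return S, ST

        # stand-in for the fire model: layer temperature rise vs. inputs
        def g(x):
            q, area, height = x.T   # energy release rate, opening area, height
            return 6.85 * (q ** 2 / (area * np.sqrt(height))) ** (1.0 / 3.0)

        S, ST = sobol_indices(g, bounds=[(500, 5000), (1, 4), (2, 3)])
        print("first-order:", S.round(2), "total:", ST.round(2))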

  6. Local and Nonlocal Impacts of Soil Moisture Initialization on AGCM Seasonal Forecasts: A Model Sensitivity Study.

    Science.gov (United States)

    Zhang, H.; Frederiksen, C. S.

    2003-07-01

    Using a version of the Australian Bureau of Meteorology Research Centre (BMRC) atmospheric general circulation model, this study investigates the model's sensitivity to different soil moisture initial conditions in its dynamically extended seasonal forecasts of June-August 1998 climate anomalies, with focus on the south and northeast China regions where severe floods occurred. The authors' primary aim is to understand the model's responses to different soil moisture initial conditions in terms of the physical and dynamical processes involved. Due to a lack of observed global soil moisture data, the efficacy of using soil moisture anomalies derived from the NCEP-NCAR reanalysis is assessed. Results show that by imposing soil moisture percentile anomalies derived from the reanalysis data into the BMRC model initial condition, the regional features of the model's simulation of seasonal precipitation and temperature anomalies are modulated. Further analyses reveal that the impacts of soil moisture conditions on the model's surface temperature forecasts are mainly from localized interactions between land surface and the overlying atmosphere. In contrast, the model's sensitivity in its forecasts of rainfall anomalies is mainly due to the nonlocal impacts of the soil moisture conditions. Over the monsoon-dominated east Asian region, the contribution from local water recycling, through surface evaporation, to the model simulation of precipitation is limited. Rather, it is the horizontal moisture transport by the regional atmospheric circulation that is the dominant factor in controlling the model rainfall. The influence of different soil moisture conditions on the model forecasts of rainfall anomalies is the result of the response of regional circulation to the anomalous soil moisture condition imposed. Results from the BMRC model sensitivity study support similar findings from other model studies that have appeared in recent years and emphasize the importance of improving

  7. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models

    OpenAIRE

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-01-01

    Bioequivalence and interaction trials are commonly conducted in a crossover design and can be analysed with nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix for nonlinear mixed effects models to the design of crossover pharmacokinetic trials, using a linearisation of the model around the expectation of the random effects, including within-subject variability and discrete covariates fixed or chan...

  8. Analysing outsourcing policies in an asset management context: a six-stage model

    OpenAIRE

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, which confuses the discussion on the degree of outsourcing. This paper presents a six-stage model, based on the cyclic nature of maintenance, that facilitates a top-down discussion of the degree to which maintenance is outsourced. The six-stage model can: (1) give clear statements about the pre...

  9. Integrative mRNA-microRNA analyses reveal novel interactions related to insulin sensitivity in human adipose tissue.

    Science.gov (United States)

    Kirby, Tyler J; Walton, R Grace; Finlin, Brian; Zhu, Beibei; Unal, Resat; Rasouli, Neda; Peterson, Charlotte A; Kern, Philip A

    2016-02-01

    Adipose tissue has profound effects on whole-body insulin sensitivity. However, the underlying biological processes are quite complex and likely multifactorial. For instance, the adipose transcriptome is posttranscriptionally modulated by microRNAs, but the relationship between microRNAs and insulin sensitivity in humans remains to be determined. To this end, we utilized an integrative mRNA-microRNA microarray approach to identify putative molecular interactions that regulate the transcriptome in subcutaneous adipose tissue of insulin-sensitive (IS) and insulin-resistant (IR) individuals. Using the NanoString nCounter Human v1 microRNA Expression Assay, we show that 17 microRNAs are differentially expressed in IR vs. IS. Of these, 16 microRNAs (94%) are downregulated in IR vs. IS, including miR-26b, miR-30b, and miR-145. Using Agilent Human Whole Genome arrays, we identified genes that were predicted targets of miR-26b, miR-30b, and miR-145 and were upregulated in IR subjects. This analysis produced ADAM22, MYO5A, LOX, and GM2A as predicted gene targets of these microRNAs. We then validated that miR-145 and miR-30b regulate these mRNAs in differentiated human adipose stem cells. We suggest that use of bioinformatic integration of mRNA and microRNA arrays yields verifiable mRNA-microRNA pairs that are associated with insulin resistance and can be validated in vitro. Copyright © 2016 the American Physiological Society.

  10. A sensitive venous bleeding model in haemophilia A mice

    DEFF Research Database (Denmark)

    Pastoft, Anne Engedahl; Lykkesfeldt, Jens; Ezban, M.

    2012-01-01

    The haemostatic effect of compounds for treating haemophilia can be evaluated in various bleeding models in haemophilic mice. However, the doses of factor VIII (FVIII) needed to normalize bleeding in some of these models are reported to be relatively high. The aim of this study was to establish a sensitive venous bleeding model in FVIII knock-out (F8-KO) mice, with the ability to detect an effect on bleeding at low plasma FVIII concentrations. We studied the effect of two recombinant FVIII products, N8 and Advate(®), after injury to the saphenous vein. We found that F8-KO mice treated with increasing doses of either N8 or Advate(®) showed a dose-dependent increase in the number of clot formations and a reduction in both average and maximum bleeding time, as well as in average blood loss. For both compounds, a significant effect was found at doses as low as 5 IU kg(-1) when compared with vehicle...

  11. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
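
    A compact illustration of the response-surface approach: draw Latin hypercube samples, run a stand-in "thermodynamic model", fit a least-squares surface and read off elasticities. SciPy's qmc Latin hypercube sampler is assumed to be available; the model function and parameter ranges are invented, not the SCTEG model of the paper.

        import numpy as np
        from scipy.stats import qmc

        # stand-in model: TEG power vs. solar flux, load ratio m, heat transfer h
        def teg_power(x):
            flux, m, h = x.T
            return 1e-4 * flux * m / (1 + m) ** 2 * (1 - 50.0 / h)

        lo, hi = [200.0, 0.5, 60.0], [1000.0, 2.0, 200.0]
        X = qmc.scale(qmc.LatinHypercube(d=3, seed=3).random(512), lo, hi)
        y = teg_power(X)

        # log-log least-squares fit: exponents approximate the elasticities
        # d(ln y)/d(ln x) used to rank parameter influence
        A = np.column_stack([np.ones(len(X)), np.log(X)])
        coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
        print("elasticities (flux, load ratio, h):", coef[1:].round(2))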

  12. Geographical variation of sporadic Legionnaires' disease analysed in a grid model

    DEFF Research Database (Denmark)

    Rudbeck, M.; Jepsen, Martin Rudbeck; Sonne, I.B.;

    2010-01-01

    The aim was to analyse variation in the incidence of sporadic Legionnaires' disease in a geographical information system over three time periods (1990-2005) by the application of a grid model, and to assess the model's validity by analysing variation according to grid position. ... Four cells had excess incidence in all three time periods. The analysis in 25 different grid positions indicated a low risk of overlooking cells with excess incidence in a random grid. The coefficient of variation ranged from 0.08 to 0.11, independent of the threshold. ...

  13. Isoprene emissions modelling for West Africa: MEGAN model evaluation and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Ferreira

    2010-09-01

    Full Text Available Isoprene emissions are the largest source of reactive carbon to the atmosphere, with the tropics being a major source region. These natural emissions are expected to change with changing climate and human impact on land use. As part of the African Monsoon Multidisciplinary Analyses (AMMA) project, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) has been used to estimate the spatial and temporal distribution of isoprene emissions over the West African region. During the AMMA field campaign, carried out in July and August 2006, isoprene mixing ratios were measured on board the FAAM BAe-146 aircraft. These data have been used to make a qualitative evaluation of the model performance.

    MEGAN was first applied to a large area covering much of West Africa, from the Gulf of Guinea in the south to the desert in the north, and was able to capture the large-scale spatial distribution of isoprene emissions as inferred from the observed isoprene mixing ratios. In particular, the model captures the transition from the forested area in the south to the bare soils in the north, but some discrepancies have been identified over the bare soil, mainly due to the emission factors used. Sensitivity analyses were performed to assess the model response to changes in the driving parameters, namely Leaf Area Index (LAI), Emission Factors (EF), temperature and solar radiation.

    A high-resolution simulation was made of a limited area south of Niamey, Niger, where the highest concentrations of isoprene were observed. This simulation is used to evaluate the model's ability to reproduce smaller-scale spatial features and to examine the influence of the driving parameters on an hourly basis through a case study of a flight on 17 August 2006.

    This study highlights the complex interactions between land surface processes and the meteorological dynamics and chemical composition of the PBL. This has implications for quantifying the impact of biogenic emissions

  14. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber's law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.

  15. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    Full Text Available The paper presents a sensitivity analysis of the two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analysed, the comparison between measured and computed river widths shows a quite good capability of the model to represent the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
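
    The two parameters under test map naturally onto a logistic growth law; a toy version, assuming a carrying capacity normalized to 1 (the paper's actual formulation is more elaborate), shows why the growth rate dominates the outcome while the initial density mostly shifts the timing.

        import numpy as np

        def vegetation_density(p0, r, t):
            # closed-form solution of logistic growth toward a carrying
            # capacity normalized to 1:  dP/dt = r * P * (1 - P)
            return 1.0 / (1.0 + (1.0 / p0 - 1.0) * np.exp(-r * t))

        t = np.linspace(0, 20, 5)
        for p0 in (0.01, 0.1):          # initial plant population density
            for r in (0.3, 0.6):        # growth rate
                print(f"p0={p0}, r={r}:", vegetation_density(p0, r, t).round(2))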

  16. A Sensitivity Study of the Validation of Three Regulatory Dispersion Models

    Directory of Open Access Journals (Sweden)

    Keith D. Harsham

    2008-01-01

    Full Text Available Lidar measurements were made of the dispersion of the plume from a coastal industrial plant over three weeks between September 1996 and May 1998. 67 experimental runs were obtained, mostly of 30 min duration, and these were analysed to provide plume parameters (i.e. height, vertical and lateral spreads). These measurements were supplemented by local meteorological measurements at two portable meteorological stations and also by radiosonde measurements of wind, temperature and pressure profiles. The dispersion was modelled using three commercial regulatory models: ISC3 (EPA, Trinity Consultants and Lakes Environmental), UK-ADMS (CERC) and AERMOD (EPA, Lakes Environmental). Where possible, each model was run with every combination of the available choices: urban or rural surface characteristics; wind speed measured at 10 m or 100 m; and surface corrected for topography or for topography plus buildings. We have compared the range of outputs from each model with the Lidar measurements. In the main, the models underestimated dispersion in the near field and overestimated it beyond a few hundred metres. ISC tended to show the smallest dispersion, while AERMOD gave the largest values for the lateral spread and ADMS gave the largest values of the vertical spread. Buoyant plume rise was modelled well in neutral conditions but rather erratically in unstable conditions. The models are quite sensitive to the reasonable input choices listed above: the full range of sensitivity is comparable to the difference between the median modelled value and the measured value.
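
    All three regulatory models are, at heart, Gaussian plume formulations; a minimal ground-level version, with power-law spread curves whose coefficients here are merely illustrative stand-ins for Pasquill-Gifford values, looks like:

        import numpy as np

        def ground_concentration(q, u, x, y, h, a=0.08, b=0.06):
            # Gaussian plume, ground-level receptor with full ground reflection:
            # C = Q/(2*pi*u*sy*sz) * exp(-y^2/(2 sy^2)) * 2*exp(-h^2/(2 sz^2))
            sy = a * x / np.sqrt(1 + 1e-4 * x)        # lateral spread, m
            sz = b * x / np.sqrt(1 + 1.5e-3 * x)      # vertical spread, m
            return (q / (2 * np.pi * u * sy * sz)
                    * np.exp(-y ** 2 / (2 * sy ** 2))
                    * 2 * np.exp(-h ** 2 / (2 * sz ** 2)))

        # 10 g/s release from a 50 m stack in a 5 m/s wind, centreline values
        for x in (200.0, 500.0, 1000.0, 2000.0):
            print(x, ground_concentration(10.0, 5.0, x, 0.0, 50.0))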

  17. X-ray CT analyses, models and numerical simulations: a comparison with petrophysical analyses in an experimental CO2 study

    Science.gov (United States)

    Henkel, Steven; Pudlo, Dieter; Enzmann, Frieder; Reitenbach, Viktor; Albrecht, Daniel; Ganzer, Leonhard; Gaupp, Reinhard

    2016-06-01

    An essential part of the collaborative research project H2STORE (hydrogen to store), which is funded by the German government, was a comparison of various analytical methods for characterizing reservoir sandstones from different stratigraphic units. In this context, Permian, Triassic and Tertiary reservoir sandstones were analysed. Rock core materials, provided by RWE Gasspeicher GmbH (Dortmund, Germany), GDF Suez E&P Deutschland GmbH (Lingen, Germany), E.ON Gas Storage GmbH (Essen, Germany) and RAG Rohöl-Aufsuchungs Aktiengesellschaft (Vienna, Austria), were processed by different laboratory techniques: thin sections were prepared, rock fragments were crushed, and cubes of 1 cm edge length and plugs 3 to 5 cm in length with a diameter of about 2.5 cm were sawn from macroscopically homogeneous cores. With this prepared sample material, polarized light microscopy and scanning electron microscopy coupled with image analyses, specific surface area measurements (after Brunauer, Emmett and Teller, 1938; BET), He-porosity and N2-permeability measurements, and high-resolution micro-computer tomography (μ-CT), which was used for numerical simulations, were applied. All these methods were applied to mostly the same sample material, before and, for selected Permian sandstones, also after static CO2 experiments under reservoir conditions. A major concern in comparing the results of these methods is an appraisal of the reliability of the resulting porosity, permeability and mineral-specific reactive (inner) surface area data. The CO2 experiments modified the petrophysical as well as the mineralogical/geochemical rock properties. These changes are detectable by all the applied analytical methods. Nevertheless, a major outcome of the high-resolution μ-CT analyses and the subsequent numerical simulations was that quite similar data sets and data interpretations were obtained from the different petrophysical standard methods. Moreover, the μ-CT analyses are not only time saving, but also non-destructive.

  18. Understanding earth system models: how Global Sensitivity Analysis can help

    Science.gov (United States)

    Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Computer models are an essential element of the earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interaction between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour, and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we use a range of examples across the earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts for uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.

  19. Models for patients' recruitment in clinical trials and sensitivity analysis.

    Science.gov (United States)

    Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas

    2012-07-20

    Taking a decision on the feasibility, and estimating the duration, of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who model the enrolment period using Gamma-Poisson processes; this allows the development of statistical tools that can help the manager of the clinical trial answer these questions and thus plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0, t(1)] allow the parameters of the model to be calibrated, and these are then used to make predictions on what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and to suggest corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of the fit to the data, and the sensitivity to parameter errors. We also discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question in the setting of our data set, because these dates are not known; for this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
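
    The Gamma-Poisson set-up is easy to simulate: each centre gets a recruitment rate drawn from a Gamma distribution and enrols patients as a Poisson process, and repeating the simulation gives an empirical probability of finishing on time. The shape/rate values, centre count and trial targets below are invented for illustration.

        import numpy as np

        def p_on_time(n_centres, target, deadline_days, shape, rate,
                      n_sim=10_000, seed=4):
            rng = np.random.default_rng(seed)
            # centre-specific rates lambda_c ~ Gamma(shape, rate), patients/day
            lam = rng.gamma(shape, 1.0 / rate, (n_sim, n_centres))
            # total enrolment by the deadline: Poisson, mean sum_c lambda_c * T
            enrolled = rng.poisson(lam.sum(axis=1) * deadline_days)
            return (enrolled >= target).mean()

        print(p_on_time(n_centres=40, target=600, deadline_days=365,
                        shape=2.0, rate=50.0))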

  1. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have in some way added knowledge, either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
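
    For orientation, the sketch below integrates one common form of the Balloon model (the Friston-style haemodynamic variant) with a forward-Euler step; the parameter values are typical literature choices, not those of this study, and the simple integrator is an illustrative assumption.

        import numpy as np

        def bold_response(u, dt=0.1, eps=0.5, ts=0.8, tf=0.4,
                          tau=1.0, alpha=0.32, E0=0.4):
            # states: vasodilatory signal s, blood inflow f, volume v, deoxy-Hb q
            s, f, v, q = 0.0, 1.0, 1.0, 1.0
            V0, k1, k2, k3 = 0.04, 7 * E0, 2.0, 2 * E0 - 0.2
            out = []
            for ut in u:
                ds = eps * ut - s / ts - (f - 1) / tf
                df = s
                dv = (f - v ** (1 / alpha)) / tau
                dq = (f * (1 - (1 - E0) ** (1 / f)) / E0
                      - v ** (1 / alpha) * q / v) / tau
                s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
                out.append(V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v)))
            return np.array(out)

        stim = np.zeros(600)          # 60 s at dt = 0.1 s
        stim[50:250] = 1.0            # one 20 s block of stimulation
        y = bold_response(stim)
        print("peak BOLD change:", y.max().round(4))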

  2. A sensitive and robust HPLC assay with fluorescence detection for the quantification of pomalidomide in human plasma for pharmacokinetic analyses.

    Science.gov (United States)

    Shahbazi, Shandiz; Peer, Cody J; Polizzotto, Mark N; Uldrick, Thomas S; Roth, Jeffrey; Wyvill, Kathleen M; Aleman, Karen; Zeldis, Jerome B; Yarchoan, Robert; Figg, William D

    2014-04-01

    Pomalidomide is a second-generation IMiD (immunomodulatory agent) that has recently been granted approval by the Food and Drug Administration for the treatment of relapsed multiple myeloma after prior treatment with two antimyeloma agents, including lenalidomide and bortezomib. A simple and robust HPLC assay with fluorescence detection for pomalidomide over the range of 1-500 ng/mL has been developed for application to pharmacokinetic studies in ongoing clinical trials in various other malignancies. A liquid-liquid extraction from human plasma, either alone or pre-stabilized with 0.1% HCl, was performed, using propyl paraben as the internal standard. From plasma either pre-stabilized with 0.1% HCl or not, the assay was shown to be selective, sensitive, accurate and precise, and to have minimal matrix effects. The HPLC-FL assay allows a broader range of laboratories to measure pomalidomide for application to clinical pharmacokinetics.

  3. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL) and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other, although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  4. High-resolution linkage analyses to identify genes that influence Varroa sensitive hygiene behavior in honey bees.

    Science.gov (United States)

    Tsuruda, Jennifer M; Harris, Jeffrey W; Bourgeois, Lanie; Danka, Robert G; Hunt, Greg J

    2012-01-01

    Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons, including the lack of chemical residues and a decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of the two behaviors identified as most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map, and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has previously been shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.

  5. Sensitivity and Cost-benefit Analyses of Emission-constrained Technological Growth Under Uncertainty in Natural Emissions

    OpenAIRE

    Rovenskaya, E.

    2005-01-01

    The paper addresses the issue of control of world technological development under prescribed constraints on the emission of greenhouse gases. We use a stylized mathematical model of the world GDP, whose growth leads to an increase in industrial emission; investment in "cleaning" technology acts as a control parameter in the model. The optimal control maximizing a standard economic utility index is described. Two components of total emission are distinguished: industrial emission a...

  6. Photosynthesis sensitivity to climate change in land surface models

    Science.gov (United States)

    Manrique-Sunen, Andrea; Black, Emily; Verhoef, Anne; Balsamo, Gianpaolo

    2016-04-01

    Accurate representation of vegetation processes within land surface models is key to reproducing surface carbon, water and energy fluxes. Photosynthesis determines the amount of CO2 fixed by plants as well as the water lost through transpiration via the stomata. Photosynthesis is calculated in land surface models using empirical equations based on plant physiological research. It is assumed that CO2 assimilation is either CO2-limited or radiation-limited, and in some models export-limited (limited by the speed at which the products of photosynthesis are used by the plant). Increased levels of atmospheric CO2 concentration tend to enhance photosynthetic activity, but the effectiveness of this fertilization effect is regulated by environmental conditions and by the limiting factor in the photosynthesis reaction. The leaf-level photosynthesis schemes used by the land surface models JULES and CTESSEL have been evaluated against field photosynthesis observations. The response of photosynthesis to radiation, atmospheric CO2 and temperature has also been analysed for each model, as this is key to understanding the vegetation response that climate models using these schemes are able to reproduce. Particular emphasis is put on the limiting factor as conditions vary. It is found that while at present-day CO2 concentrations export limitation is only relevant at low temperatures, it becomes an increasingly important restriction on photosynthesis as CO2 levels rise.
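
    The limiting-factor logic is the familiar Collatz-style co-limitation rule: gross assimilation is (roughly) the minimum of a carboxylation-limited, a light-limited and an export-limited rate. A deliberately simplified sketch, with invented coefficients and no smoothing between limits, shows the switch to export limitation at high CO2:

        import numpy as np

        def assimilation(ci, par, vcmax, gamma=40.0, km=700.0):
            wc = vcmax * (ci - gamma) / (ci + km)                 # Rubisco (CO2) limited
            wl = 0.08 * par * (ci - gamma) / (ci + 2 * gamma)     # light limited
            we = 0.5 * vcmax                                      # export limited
            rates = np.stack([wc, wl, we * np.ones_like(wc)])
            return rates.min(axis=0), rates.argmin(axis=0)        # rate, limiting factor

        ci = np.array([280.0, 400.0, 800.0])   # internal CO2, umol/mol
        A, lim = assimilation(ci, par=800.0, vcmax=60.0)
        labels = np.array(["CO2", "light", "export"])
        print(A.round(1), labels[lim])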

  7. Open-circuit sensitivity model based on empirical parameters for a capacitive-type MEMS acoustic sensor

    Science.gov (United States)

    Lee, Jaewoo; Jeon, J. H.; Je, C. H.; Lee, S. Q.; Yang, W. S.; Lee, S.-G.

    2016-03-01

    An empirical-based open-circuit sensitivity model for a capacitive-type MEMS acoustic sensor is presented. To evaluate the characteristics of the open-circuit sensitivity intuitively, the empirical-based model is proposed and analysed using a lumped spring-mass model and a pad test sample without a parallel-plate capacitor for the parasitic capacitance. The model is composed of three different parameter groups: empirical, theoretical and mixed data. From the measured pull-in voltage of 16.7 V and the measured surface topology of the diaphragm, the empirical residual stress was extracted as +13 MPa, resulting in an effective spring constant of 110.9 N/m. The parasitic capacitance for the two probing pads, including the substrate part, was 0.25 pF. Furthermore, to verify the proposed model, the modelled open-circuit sensitivity was compared with the measured value. The MEMS acoustic sensor had an open-circuit sensitivity of -43.0 dBV/Pa at 1 kHz with a bias of 10 V, while the modelled open-circuit sensitivity was -42.9 dBV/Pa, showing good agreement in the range from 100 Hz to 18 kHz. This validates the empirical-based open-circuit sensitivity model for designing capacitive-type MEMS acoustic sensors.

   8. The Civitavecchia Coastal Environment Monitoring System (C-CEMS): a new tool to analyse the conflicts between coastal pressures and sensitivity areas

    Directory of Open Access Journals (Sweden)

    S. Bonamano

    2015-07-01

    Full Text Available The understanding of the coastal environment is fundamental for facing pollution phenomena efficiently and effectively, as required by the Marine Strategy Framework Directive, which is focused on the achievement of Good Environmental Status (GES) by all Member States by 2020. To address this, the Laboratory of Experimental Oceanology and Marine Ecology developed a multi-platform observing network that has been in operation since 2005 in the coastal marine area of Civitavecchia, where multiple uses and high ecological values closely coexist. The Civitavecchia Coastal Environment Monitoring System (C-CEMS), implemented in the current configuration, includes various modules that provide integrated information to be used in different fields of environmental research. The long-term observations acquired by the fixed stations are complemented by in situ surveys, carried out periodically to monitor the physical, chemical and biological characteristics of the water column and marine sediments, as well as of the benthic biota. The in situ data, integrated with satellite observations (e.g. temperature, chlorophyll a and TSM), are used to feed and validate the numerical models, which allow analysis and forecasting of the dynamics of conservative and non-conservative particles under different conditions. As examples of C-CEMS applications, two case studies are reported in this work: (1) the analysis of faecal bacteria dispersion for bathing water quality assessment, and (2) the evaluation of the effects of dredging activities on Posidonia meadows, which make up most of the two sites of community importance located along the Civitavecchia coastal zone. The simulation results are combined with the distribution of Posidonia oceanica and the locations of bathing areas in order to resolve the conflicts between coastal uses (in terms of stress produced by anthropic activities) and sensitive areas.

  9. A Bayesian model of context-sensitive value attribution.

    Science.gov (United States)

    Rigoli, Francesco; Friston, Karl J; Martinelli, Cristina; Selaković, Mirjana; Shergill, Sukhwinder S; Dolan, Raymond J

    2016-06-22

    Substantial evidence indicates that incentive value depends on an anticipation of rewards within a given context. However, the computations underlying this context sensitivity remain unknown. To address this question, we introduce a normative (Bayesian) account of how rewards map to incentive values, which assumes that the brain inverts a model of how rewards are generated. Key features of our account include (i) an influence of prior beliefs about the context in which rewards are delivered (weighted by their reliability in a Bayes-optimal fashion), (ii) the notion that incentive values correspond to precision-weighted prediction errors, and (iii) contextual information unfolding at different hierarchical levels. This formulation implies that incentive value is intrinsically context-dependent. We provide empirical support for this model by showing that incentive value is influenced by context variability and by hierarchically nested contexts. The perspective we introduce generates new empirical predictions that might help explain psychopathologies such as addiction.
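
    A toy rendering of features (i) and (ii): combine a prior belief about the context with observed rewards in a Bayes-optimal (precision-weighted) way, then score an outcome by its precision-weighted prediction error. All numbers, and the conjugate-normal simplification itself, are illustrative assumptions rather than the authors' hierarchical model.

        import numpy as np

        def posterior_context(mu0, prec0, rewards, prec_r):
            # Bayes-optimal fusion of the prior belief about the context mean
            # with observed rewards (conjugate normal update)
            prec_post = prec0 + len(rewards) * prec_r
            mu_post = (prec0 * mu0 + prec_r * np.sum(rewards)) / prec_post
            return mu_post, prec_post

        def incentive_value(reward, mu_ctx, prec_ctx):
            # precision-weighted prediction error relative to context expectation
            return prec_ctx * (reward - mu_ctx)

        mu, prec = posterior_context(mu0=5.0, prec0=1.0,
                                     rewards=np.array([2.0, 3.0, 2.5]), prec_r=2.0)
        print("same reward, two contexts:",
              incentive_value(4.0, mu, prec), incentive_value(4.0, 5.0, 1.0))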

  10. Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions

    CERN Document Server

    Bab, Sebastian; 10.4204/EPTCS.83.2

    2012-01-01

    The concept of dynamic coalitions (also virtual organizations) describes the temporary interconnection of autonomous agents who share information or resources in order to achieve a common goal. Through modern technologies these coalitions may form across company, organization and system borders. Therefore, questions of access control and security are of vital significance for the architectures supporting these coalitions. In this paper, we present our first steps towards a formal framework for modeling and verifying the design of privacy-sensitive dynamic coalition infrastructures and their processes. In order to do so, we extend existing dynamic coalition modeling approaches with an access-control concept, which manages access to information through policies. Furthermore, we consider the processes underlying these coalitions and present first work on formalizing these processes. As a result, we illustrate the usefulness of the Abstract State Machine (ASM) method for this task. We demonstrate...

  11. Smart licensing and environmental flows: Modeling framework and sensitivity testing

    Science.gov (United States)

    Wilby, R. L.; Fenn, C. R.; Wood, P. J.; Timlett, R.; Lequesne, T.

    2011-12-01

    Adapting to climate change is just one among many challenges facing river managers. The response will involve balancing the long-term water demands of society with the changing needs of the environment in sustainable and cost effective ways. This paper describes a modeling framework for evaluating the sensitivity of low river flows to different configurations of abstraction licensing under both historical climate variability and expected climate change. A rainfall-runoff model is used to quantify trade-offs among environmental flow (e-flow) requirements, potential surface and groundwater abstraction volumes, and the frequency of harmful low-flow conditions. Using the River Itchen in southern England as a case study it is shown that the abstraction volume is more sensitive to uncertainty in the regional climate change projection than to the e-flow target. It is also found that "smarter" licensing arrangements (involving a mix of hands off flows and "rising block" abstraction rules) could achieve e-flow targets more frequently than conventional seasonal abstraction limits, with only modest reductions in average annual yield, even under a hotter, drier climate change scenario.
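
    A sketch of how a "smart" licence might be encoded: no abstraction below a hands-off flow, then a rising-block schedule that allows a growing fraction of the flow above each threshold. All thresholds, fractions and the example flows are invented, not the River Itchen rules.

        import numpy as np

        def allowed_abstraction(q, hof=2.0,
                                blocks=((2.0, 0.1), (5.0, 0.2), (10.0, 0.3))):
            # hands-off flow: nothing may be taken while q <= hof; above it,
            # each rising block allows a fraction of the flow within its band
            uppers = [b[0] for b in blocks[1:]] + [np.inf]
            take = 0.0
            for (lo, frac), hi in zip(blocks, uppers):
                take += frac * np.clip(q - lo, 0.0, hi - lo)
            return take if q > hof else 0.0

        for q in (1.5, 3.0, 8.0, 15.0):       # river flow, m3/s
            print(q, round(allowed_abstraction(q), 2))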

  12. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    Science.gov (United States)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for the visualisation of results. The goal of the project is to complement existing analyses by means of their combination and accurate statistical studies.

  13. Combined Task and Physical Demands Analyses towards a Comprehensive Human Work Model

    Science.gov (United States)

    2014-09-01

    Griffon Helicopter aircrew (Pilots and Flight Engineers) reported neck pain, particularly when wearing Night Vision Goggles (NVGs) (Forde et al., 2011). ... velocities, and accelerations over time for each postural sequence. Neck strain measures derived from biomechanical analyses of these postural ... and whole missions. The result is a comprehensive model of tasks and associated physical demands from which one can estimate the accumulative neck ...

  14. Dutch AG-MEMOD model; A tool to analyse the agri-food sector

    NARCIS (Netherlands)

    Leeuwen, van M.G.A.; Tabeau, A.A.

    2005-01-01

    Agricultural policies in the European Union (EU) have a history of continuous reform. AG-MEMOD, an acronym for 'Agricultural sector in the Member states and EU: econometric modelling for projections and analysis of EU policies on agriculture, forestry and the environment', provides a system for analysing

  15. Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses

    Science.gov (United States)

    2015-04-01

    Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses, IDA Document D-5379 (D. Sean Barnett and Jerome Bracken, Institute for Defense Analyses, Alexandria, Virginia). Subject terms: supply chain, model, fluorspar, hydrofluoric acid, shortfall, substitution, Defense Logistics Agency, National Defense...

  16. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  19. Wavelet-based spatial comparison technique for analysing and evaluating two-dimensional geophysical model fields

    Directory of Open Access Journals (Sweden)

    S. Saux Picart

    2011-11-01

    Full Text Available Complex numerical models of the Earth's environment, based around 3-D or 4-D time and space domains are routinely used for applications including climate predictions, weather forecasts, fishery management and environmental impact assessments. Quantitatively assessing the ability of these models to accurately reproduce geographical patterns at a range of spatial and temporal scales has always been a difficult problem to address. However, this is crucial if we are to rely on these models for decision making. Satellite data are potentially the only observational dataset able to cover the large spatial domains analysed by many types of geophysical models. Consequently optical wavelength satellite data is beginning to be used to evaluate model hindcast fields of terrestrial and marine environments. However, these satellite data invariably contain regions of occluded or missing data due to clouds, further complicating or impacting on any comparisons with the model. A methodology has recently been developed to evaluate precipitation forecasts using radar observations. It allows model skill to be evaluated at a range of spatial scales and rain intensities. Here we extend the original method to allow its generic application to a range of continuous and discontinuous geophysical data fields, and therefore allowing its use with optical satellite data. This is achieved through two major improvements to the original method: (i all thresholds are determined based on the statistical distribution of the input data, so no a priori knowledge about the model fields being analysed is required and (ii occluded data can be analysed without impacting on the metric results. The method can be used to assess a model's ability to simulate geographical patterns over a range of spatial scales. We illustrate how the method provides a compact and concise way of visualising the degree of agreement between spatial features in two datasets. The application of the new method, its

  20. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    Science.gov (United States)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
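
    As a concrete counterpart to the variance-based (Sobol-type) end of the spectrum mentioned above, the following sketch estimates first-order and total-order sensitivity indices with the standard Saltelli/Jansen estimators in plain NumPy. The toy response surface and sample size are invented for illustration.

        # Variance-based (Sobol-type) sensitivity indices on a toy response.
        import numpy as np

        def model(x):                      # toy stand-in for a model response
            return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

        rng = np.random.default_rng(1)
        n, k = 20000, 3
        A = rng.uniform(-np.pi, np.pi, (n, k))
        B = rng.uniform(-np.pi, np.pi, (n, k))
        fA, fB = model(A), model(B)
        var_y = np.var(np.concatenate([fA, fB]))

        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]            # resample only factor i
            fABi = model(ABi)
            s1 = np.mean(fB * (fABi - fA)) / var_y          # first-order index
            st = 0.5 * np.mean((fA - fABi) ** 2) / var_y    # total-order (Jansen)
            print(f"x{i}: S1 = {s1:5.2f}, ST = {st:5.2f}")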

  1. A Model for Integrating Fixed-, Random-, and Mixed-Effects Meta-Analyses into Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W.-L.

    2008-01-01

    Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an…

  2. Stellar abundance analyses in the light of 3D hydrodynamical model atmospheres

    CERN Document Server

    Asplund, M

    2003-01-01

    I describe recent progress in terms of 3D hydrodynamical model atmospheres and 3D line formation and their applications to stellar abundance analyses of late-type stars. Such 3D studies remove the free parameters inherent in classical 1D investigations (mixing length parameters, macro- and microturbulence) yet are highly successful in reproducing a large arsenal of observational constraints such as detailed line shapes and asymmetries. Their potential for abundance analyses is illustrated by discussing the derived oxygen abundances in the Sun and in metal-poor stars, where they seem to resolve long-standing problems as well as significantly alter the inferred conclusions.

  3. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html).
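
    WOMBAT itself is a compiled package driven by parameter files; purely as an illustration of the model class it fits (a linear mixed model whose variance components are estimated by REML), here is a sketch using Python's statsmodels. The data, variable names and single random effect are invented.

        # REML fit of a simple linear mixed model with one random effect.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n_groups, n_per = 50, 10
        group = np.repeat(np.arange(n_groups), n_per)
        u = rng.normal(0.0, 2.0, n_groups)            # random (e.g. genetic) effects
        x = rng.normal(size=n_groups * n_per)         # a fixed covariate
        y = 1.0 + 0.5 * x + u[group] + rng.normal(0.0, 1.0, group.size)
        df = pd.DataFrame({"y": y, "x": x, "group": group})

        fit = smf.mixedlm("y ~ x", df, groups=df["group"]).fit(reml=True)
        print(fit.summary())
        print("random-effect variance:", float(fit.cov_re.iloc[0, 0]))
        print("residual variance     :", fit.scale)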

  4. Application of an approximate vectorial diffraction model to analysing diffractive micro-optical elements

    Institute of Scientific and Technical Information of China (English)

    Niu Chun-Hui; Li Zhi-Yuan; Ye Jia-Sheng; Gu Ben-Yuan

    2005-01-01

    Scalar diffraction theory, although simple and efficient, is too rough for analysing diffractive micro-optical elements. Rigorous vectorial diffraction theory requires extensive numerical efforts, and is not a convenient design tool. In this paper we employ a simple approximate vectorial diffraction model which combines the principle of the scalar diffraction theory with an approximate local field model to analyse the diffraction of optical waves by some typical two-dimensional diffractive micro-optical elements. The TE and TM polarization modes are both considered. We have found that the approximate vectorial diffraction model can agree much better with the rigorous electromagnetic simulation results than the scalar diffraction theory for these micro-optical elements.

  5. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e. measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  6. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
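
    To make the cautionary point concrete: the code below runs the standard atemporal mediation analysis, bootstrapping the indirect effect a*b from cross-sectional data. The data are synthetic, and a confidence interval excluding zero here demonstrates statistical mediation only; it carries no information about temporal or causal ordering.

        # Bootstrap of the indirect effect a*b in a cross-sectional X -> M -> Y setup.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(size=n)          # "mediator"
        y = 0.4 * m + 0.2 * x + rng.normal(size=n)

        def slope(u, v):
            # OLS slope of v on u (with intercept)
            U = np.column_stack([np.ones_like(u), u])
            return np.linalg.lstsq(U, v, rcond=None)[0][1]

        boots = []
        for _ in range(2000):
            idx = rng.integers(0, n, n)
            a = slope(x[idx], m[idx])             # path X -> M
            # path M -> Y, controlling for X:
            U = np.column_stack([np.ones(n), x[idx], m[idx]])
            b = np.linalg.lstsq(U, y[idx], rcond=None)[0][2]
            boots.append(a * b)

        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"bootstrap 95% CI for indirect effect a*b: [{lo:.3f}, {hi:.3f}]")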

  7. A fluid dynamics multidimensional model of biofilm growth: stability, influence of environment and sensitivity

    CERN Document Server

    Clarelli, Fabrizio; Natalini, Roberto; Ribot, Magali

    2014-01-01

    In this article, we study in detail the fluid dynamics system proposed in Clarelli et al. (2013) to model the formation of cyanobacteria biofilms. After analyzing the linear stability of the unique non-trivial equilibrium of the system, we introduce into the model the influence of light and temperature, which are two important factors for the development of cyanobacteria biofilms. Since the values of the coefficients we use for our simulations are estimated through information found in the literature, some sensitivity and robustness analyses on these parameters are performed. All these elements enable us to control and to validate the model we have already derived and to present some numerical simulations in the 2D and 3D cases.

  8. Parameter sensitivity in satellite-gravity-constrained geothermal modelling

    Science.gov (United States)

    Pastorutti, Alberto; Braitenberg, Carla

    2017-04-01

    The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in ground water and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of them have the potential to distort the gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect, relative to other parameter uncertainties, on the modelled temperature-depth variation, surface heat flow and thermal lithosphere thickness. We conclude that global gravity is useful for geothermal studies.

  9. Dense Molecular Gas: A Sensitive Probe of Stellar Feedback Models

    CERN Document Server

    Hopkins, Philip F; Murray, Norman; Quataert, Eliot

    2012-01-01

    We show that the mass fraction of GMC gas (n>100 cm^-3) in dense (n>>10^4 cm^-3) star-forming clumps, observable in dense molecular tracers (L_HCN/L_CO(1-0)), is a sensitive probe of the strength and mechanism(s) of stellar feedback. Using high-resolution galaxy-scale simulations with pc-scale resolution and explicit models for feedback from radiation pressure, photoionization heating, stellar winds, and supernovae (SNe), we make predictions for the dense molecular gas tracers as a function of GMC and galaxy properties and the efficiency of stellar feedback. In models with weak/no feedback, much of the mass in GMCs collapses into dense sub-units, predicting L_HCN/L_CO(1-0) ratios order-of-magnitude larger than observed. By contrast, models with feedback properties taken directly from stellar evolution calculations predict dense gas tracers in good agreement with observations. Changing the strength or timing of SNe tends to move systems along, rather than off, the L_HCN-L_CO relation (because SNe heat lower-de...

  10. Global sensitivity analysis of the GEOS-Chem chemical transport model: ozone and hydrogen oxides during ARCTAS (2008)

    Science.gov (United States)

    Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu

    2017-03-01

    Developing predictive capability for future atmospheric oxidation capacity requires a detailed analysis of model uncertainties and of the sensitivity of the modeled oxidation capacity to model input variables. Using oxidant mixing ratios modeled by the GEOS-Chem chemical transport model and measured on the NASA DC-8 aircraft, uncertainty and global sensitivity analyses were performed on the GEOS-Chem chemical transport model for the modeled oxidants hydroxyl (OH), hydroperoxyl (HO2), and ozone (O3). The sensitivity of modeled OH, HO2, and ozone to model inputs perturbed simultaneously within their respective uncertainties was found for the flight tracks of NASA's Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) A and B campaigns (2008) in the North American Arctic. For the spring deployment (ARCTAS-A), ozone was most sensitive to the photolysis rate of NO2, the NO2 + OH reaction rate, and various emissions, including bromoform (CHBr3). OH and HO2 were overwhelmingly sensitive to aerosol particle uptake of HO2, with this one factor contributing upwards of 75 % of the uncertainty in HO2. For the summer deployment (ARCTAS-B), ozone was most sensitive to emission factors, such as soil NOx and isoprene. OH and HO2 were most sensitive to biomass emissions and aerosol particle uptake of HO2. With modeled HO2 showing a factor of 2 underestimation compared to measurements in the lowest 2 km of the troposphere, lower uptake rates (γHO2 < 0.055), regardless of whether the product of the uptake is H2O or H2O2, produced better agreement between modeled and measured HO2.

  11. Climate forcings and climate sensitivities diagnosed from atmospheric global circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Bruce T. [Boston University, Department of Geography and Environment, Boston, MA (United States); Knight, Jeff R.; Ringer, Mark A. [Met Office Hadley Centre, Exeter (United Kingdom); Deser, Clara; Phillips, Adam S. [National Center for Atmospheric Research, Boulder, CO (United States); Yoon, Jin-Ho [University of Maryland, Cooperative Institute for Climate and Satellites, Earth System Science Interdisciplinary Center, College Park, MD (United States); Cherchi, Annalisa [Centro Euro-Mediterraneo per i Cambiamenti Climatici, and Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy)

    2010-12-15

    Understanding the historical and future response of the global climate system to anthropogenic emissions of radiatively active atmospheric constituents has become a timely and compelling concern. At present, however, there are uncertainties in: the total radiative forcing associated with changes in the chemical composition of the atmosphere; the effective forcing applied to the climate system resulting from a (temporary) reduction via ocean-heat uptake; and the strength of the climate feedbacks that subsequently modify this forcing. Here a set of analyses derived from atmospheric general circulation model simulations is used to estimate the effective and total radiative forcing of the observed climate system due to anthropogenic emissions over the last 50 years of the twentieth century. They are also used to estimate the sensitivity of the observed climate system to these emissions, as well as the expected change in global surface temperatures once the climate system returns to radiative equilibrium. Results indicate that estimates of the effective radiative forcing and total radiative forcing associated with historical anthropogenic emissions differ across models. In addition, estimates of the historical sensitivity of the climate to these emissions differ across models. However, results suggest that the variations in climate sensitivity and total climate forcing are not independent, and that the two vary inversely with respect to one another. As such, expected equilibrium temperature changes, which are given by the product of the total radiative forcing and the climate sensitivity, are relatively constant between models, particularly in comparison to results in which the total radiative forcing is assumed constant. Implications of these results for projected future climate forcings and subsequent responses are also discussed. (orig.)
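
    The compensation described above is simple arithmetic: the equilibrium temperature change is the product of total forcing and sensitivity, so models whose two factors vary inversely can still agree on the product. The values below are invented purely to illustrate the effect.

        # Inversely varying (forcing, sensitivity) pairs give a near-constant product.
        pairs = [(3.8, 0.79), (3.2, 0.94), (2.7, 1.11)]   # (W/m2, K per W/m2), invented
        for forcing, sensitivity in pairs:
            print(f"F = {forcing:.1f} W/m2, S = {sensitivity:.2f} K/(W/m2) "
                  f"-> dT_eq = {forcing * sensitivity:.1f} K")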

  12. Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH

    DEFF Research Database (Denmark)

    Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert

    2013-01-01

    In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach for sensitivity and uncertainty assessment for waste management studies has been implemented. First, general contribution analysis is done through a regular interpretation of inventory and impact...

  13. FluxExplorer: A general platform for modeling and analyses of metabolic networks based on stoichiometry

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Stoichiometry-based analyses of metabolic networks have aroused significant interest among systems biology researchers in recent years. It is necessary to develop a more convenient modeling platform on which users can reconstruct their network models using completely graphical operations, and explore them with powerful analyzing modules to get a better understanding of the properties of metabolic systems. Herein, an in silico platform, FluxExplorer, for metabolic modeling and analyses based on stoichiometry has been developed as a publicly available tool for systems biology research. This platform integrates various analytic approaches, including flux balance analysis, minimization of metabolic adjustment, extreme pathways analysis, shadow prices analysis, and singular value decomposition, providing a thorough characterization of the metabolic system. Using a graphic modeling process, metabolic networks can be reconstructed and modified intuitively and conveniently. The inconsistencies of a model with respect to the FBA principles can be proved automatically. In addition, this platform supports the systems biology markup language (SBML). FluxExplorer has been applied to rebuild a metabolic network in mammalian mitochondria, producing meaningful results. Generally, it is a powerful and very convenient tool for metabolic network modeling and analysis.
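
    Flux balance analysis, the first technique listed above, reduces to a linear program: maximise a target flux subject to the steady-state stoichiometric constraint S v = 0 and flux bounds. The sketch below solves a tiny invented network with SciPy; it illustrates the mathematics, not FluxExplorer's interface.

        # Flux balance analysis of a toy 3-metabolite, 4-reaction network.
        import numpy as np
        from scipy.optimize import linprog

        # Reactions: v0 uptake -> A, v1 A -> B, v2 B -> P, v3 P -> (biomass)
        S = np.array([[ 1, -1,  0,  0],     # A balance
                      [ 0,  1, -1,  0],     # B balance
                      [ 0,  0,  1, -1]])    # P balance
        bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10
        c = np.zeros(4); c[3] = -1.0        # maximise v3  <=>  minimise -v3

        res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
        print("optimal flux distribution:", res.x)
        print("maximal biomass flux     :", -res.fun)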

  14. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with

  16. On the sensitivity of urban hydrodynamic modelling to rainfall spatial and temporal resolution

    Directory of Open Access Journals (Sweden)

    G. Bruni

    2014-06-01

    Cities are increasingly vulnerable to floods generated by intense rainfall, because of their high degree of imperviousness, implementation of infrastructures, and changes in precipitation patterns due to climate change. Accurate information on convective storm characteristics at high spatial and temporal resolution is a crucial input for urban hydrological models to be able to simulate fast runoff processes and enhance flood prediction. In this paper, a detailed study of the sensitivity of urban hydrological response to high resolution radar rainfall was conducted. Rainfall rates derived from X-band dual polarimetric weather radar for four rainstorms were used as input into a detailed hydrodynamic sewer model for an urban catchment in Rotterdam, the Netherlands. Dimensionless parameters were derived to compare results between different storm conditions and to describe the effect of rainfall spatial resolution in relation to storm and hydrodynamic model properties: rainfall sampling number (rainfall resolution vs. storm size), catchment sampling number (rainfall resolution vs. catchment size), and runoff and sewer sampling numbers (rainfall resolution vs. runoff and sewer model resolution, respectively). Results show a catchment smearing effect for rainfall resolution approaching half the catchment size: for catchment sampling numbers greater than 0.5, averaged rainfall volumes decrease by about 20%. Moreover, deviations in maximum water depths, from 10 to 30% depending on the storm, occur for rainfall resolution close to storm size, reflecting a storm smearing effect due to rainfall coarsening. Model results also show the sensitivity of modelled runoff peaks and maximum water depths to the resolution of the runoff areas and sewer density, respectively. Sensitivity to the temporal resolution of rainfall input seems low compared to spatial resolution, for the storms analysed in this study. Findings are in agreement with previous studies on natural catchments
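
    The dimensionless sampling numbers above are simple ratios of rainfall input resolution to a characteristic scale of the storm, catchment or model. A minimal sketch with invented scales:

        # Sampling numbers: rainfall resolution relative to characteristic scales.
        def sampling_number(rainfall_resolution_m, characteristic_scale_m):
            """Ratio of rainfall input resolution to a characteristic scale."""
            return rainfall_resolution_m / characteristic_scale_m

        storm_size_m, catchment_size_m = 8000.0, 2000.0   # invented example scales
        for res in (100.0, 500.0, 1000.0):
            print(f"resolution {res:6.0f} m: "
                  f"rainfall sampling no. = {sampling_number(res, storm_size_m):.3f}, "
                  f"catchment sampling no. = {sampling_number(res, catchment_size_m):.3f}")
        # Per the study, catchment sampling numbers above ~0.5 smear rainfall
        # volumes (roughly a 20% decrease in averaged rainfall volume).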

  17. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Science.gov (United States)

    Zavala, M.; Lei, W.; Molina, M. J.; Molina, L. T.

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio. This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with NOx emission reductions and decrease linearly with VOC emission reductions only up to 30% from the
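
    The Brute Force Method referred to in this record simply reruns the model with perturbed emissions and differences the outputs. The sketch below applies a central-difference BFM to an invented ozone response surface; it is not the CAMx chemistry, and all numbers are illustrative only.

        # Brute Force Method: finite-difference sensitivity of a toy ozone response.
        import numpy as np

        def ozone(e_nox, e_voc):
            # invented nonlinear response, VOC-sensitive regime for illustration
            return 120.0 * e_voc / (1.0 + 0.8 * e_nox) + 10.0 * np.sqrt(e_nox)

        base_nox, base_voc = 1.0, 1.0
        delta = 0.10                                   # 10% perturbation
        s_nox = (ozone(base_nox * (1 + delta), base_voc)
                 - ozone(base_nox * (1 - delta), base_voc)) / (2 * delta)
        s_voc = (ozone(base_nox, base_voc * (1 + delta))
                 - ozone(base_nox, base_voc * (1 - delta))) / (2 * delta)
        print(f"BFM sensitivity to NOx emissions: {s_nox:+.1f} per unit fractional change")
        print(f"BFM sensitivity to VOC emissions: {s_voc:+.1f} per unit fractional change")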

  18. Calibration of back-analysed model parameters for landslides using classification statistics

    Science.gov (United States)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an updating of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Considering the fact that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. This research activity has been
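
    As an illustration of placing candidate parameter sets in ROC space, the sketch below scores simulated affected-area masks against an observed landslide extent and summarises each with its distance to the point of perfect classification, (FPR, TPR) = (0, 1). This distance is a common summary and stands in for, but is not necessarily identical to, the two normalised parameters proposed by the authors; the masks are synthetic.

        # ROC-space scoring of simulated vs. observed binary extent maps.
        import numpy as np

        rng = np.random.default_rng(4)
        observed = rng.random((100, 100)) < 0.2           # observed runout mask

        def roc_point(simulated, observed):
            tp = np.sum(simulated & observed)
            fp = np.sum(simulated & ~observed)
            fn = np.sum(~simulated & observed)
            tn = np.sum(~simulated & ~observed)
            tpr = tp / (tp + fn)                          # hit rate
            fpr = fp / (fp + tn)                          # false-alarm rate
            return fpr, tpr

        for noise in (0.05, 0.15, 0.30):                  # three "parameter sets"
            flip = rng.random(observed.shape) < noise
            simulated = observed ^ flip                   # degrade the match
            fpr, tpr = roc_point(simulated, observed)
            dist = np.hypot(fpr - 0.0, tpr - 1.0)         # distance to (0, 1)
            print(f"noise {noise:.2f}: FPR={fpr:.3f}, TPR={tpr:.3f}, "
                  f"distance to perfect classification = {dist:.3f}")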

  19. Volvo Logistics Corporation Returnable Packaging System : a model for analysing cost savings when switching packaging system

    OpenAIRE

    2008-01-01

    This thesis is a study analysing the costs affected by packaging in a producing industry. The purpose is to develop a model that calculates and presents possible cost savings for the customer from using Volvo Logistics Corporation's (VLC's) returnable packaging instead of other packaging solutions. The thesis is based on qualitative data gained from both theoretical and empirical studies. The methodology for gaining information has been to study theoretical sources such as course literature a...

  20. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  1. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems which use vibration-based methods. Thus, a new shell finite element model was proposed and implemented in the commercial package ABAQUS™ via a User ELement subroutine (UEL). This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing a good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potentials and limitations of the proposed computational model for supporting SHM system design are discussed.
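
    Vibration-based damage metrics of the kind counted above are typically normalised comparisons of baseline and current frequency response functions (FRFs). The sketch below computes one widely used example, the Frequency Response Assurance Criterion (FRAC), for a synthetic single-degree-of-freedom FRF; it is illustrative and is not one of the paper's seven specific metrics.

        # FRAC between a "healthy" and a "damaged" FRF; values below 1 flag change.
        import numpy as np

        def frac(h_ref, h_dam):
            """FRAC between two complex FRF vectors sampled at the same frequencies."""
            num = np.abs(np.vdot(h_ref, h_dam)) ** 2
            den = np.vdot(h_ref, h_ref).real * np.vdot(h_dam, h_dam).real
            return num / den

        w = np.linspace(0.0, 200.0, 2000)                 # rad/s
        def sdof_frf(wn, zeta):                           # single-DOF receptance
            return 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

        h_healthy = sdof_frf(wn=100.0, zeta=0.02)
        h_damaged = sdof_frf(wn=95.0, zeta=0.03)          # stiffness loss shifts wn
        print(f"FRAC healthy vs healthy: {frac(h_healthy, h_healthy):.3f}")
        print(f"FRAC healthy vs damaged: {frac(h_healthy, h_damaged):.3f}")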

  2. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the modelled ice cover, to assess its sensitivity to some model parameters, and to characterise atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness by 0.05-0.1 m, although at individual spatial points the thickness decreases by up to 0.5 m. Decreasing the albedo of clear sea ice and of snow by 0.05 relative to the base variant reduces the seasonally varying average ice thickness by 0.2-0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean the ice thickness is reduced by up to 1 m, although there is also an area of some thickening, mostly up to 0.2 m (Beaufort Sea). A 0.05 decrease of the sea ice snow albedo alone reduces the average ice thickness by approximately 0.2 m, a value that depends only slightly on season. In a further experiment the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m relative to the base variant; the analysis demonstrates that the average ice thickness is reduced by 0.2-0.35 m, with small seasonal changes in this value. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters

  3. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms'), necessary to reduce the computational burden of a model simulation, contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects among the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which makes it possible to simulate the effects of assimilating observations (e.g. CO, NO2, O3) into these simulations. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms like MOZART, use BEATBOX to homogenize the chemical environment, and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.

  4. Modeling and performance analyses of evaporators in frozen-food supermarket display cabinets at low temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Getu, H.M.; Bansal, P.K. [Department of Mechanical Engineering, The University of Auckland, Private Bag 92019, Auckland (New Zealand)

    2007-11-15

    This paper presents modeling and experimental analyses of evaporators in 'in situ' frozen-food display cabinets at low temperatures in the supermarket industry. Extensive experiments were conducted to measure store and display cabinet relative humidities and temperatures, as well as pressures, temperatures and mass flow rates of the refrigerant. The mathematical model adopts various empirical correlations for heat transfer coefficients and frost properties in a fin-tube heat exchanger in order to investigate the influence of indoor conditions on the performance of the display cabinets. The model is validated against the experimental data from the 'in situ' cabinets. The model would be a useful guide for design engineers evaluating the performance of supermarket display cabinet heat exchangers under various store conditions. (author)

  5. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  6. Sensitivity analysis for models of greenhouse gas emissions at farm level. Case study of N{sub 2}O emissions simulated by the CERES-EGC model

    Energy Technology Data Exchange (ETDEWEB)

    Drouet, J.-L., E-mail: Jean-Louis.Drouet@grignon.inra.fr [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Capian, N. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Fiorelli, J.-L. [INRA, UR 0055 Agro-Systemes Territoires Ressources (ASTER), F-88500 Mirecourt (France); Blanfort, V. [INRA, UR 0874 Unite de Recherche sur l' Ecosysteme Prairial (UREP), F-63100 Clermont-Ferrand (France); CIRAD, Systemes d' Elevage, F-97387 Kourou (France); Capitaine, M. [ENITA, Agronomie et Fertilite Organique des Sols (AFOS), F-63370 Lempdes (France); Duretz, S.; Gabrielle, B. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Martin, R.; Lardy, R. [INRA, UR 0874 Unite de Recherche sur l' Ecosysteme Prairial (UREP), F-63100 Clermont-Ferrand (France); Cellier, P. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Soussana, J.-F. [INRA, UR 0874 Unite de Recherche sur l' Ecosysteme Prairial (UREP), F-63100 Clermont-Ferrand (France)

    2011-11-15

    Modelling complex systems such as farms often requires quantification of a large number of input factors. Sensitivity analyses are useful to reduce the number of input factors that must be measured or estimated accurately. Three methods of sensitivity analysis (the Morris method, the rank regression and correlation method, and the Extended Fourier Amplitude Sensitivity Test (EFAST) method) were compared in the case of the CERES-EGC model applied to the crops of a dairy farm. The qualitative Morris method provided a screening of the input factors. The two other quantitative methods were used to investigate more thoroughly the effects of input factors on output variables. Despite differences in terms of concepts and assumptions, the three methods provided similar results. Among the 44 factors under study, N2O emissions were mainly sensitive to the fraction of N2O emitted during denitrification, the maximum rate of nitrification, the soil bulk density and the cropland area.
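
    The Morris screening mentioned above perturbs one factor at a time along random trajectories and summarises each factor by the mean absolute elementary effect (mu*) and its standard deviation (sigma). The following is a minimal NumPy implementation on a toy response; the toy function stands in for the soil-crop model and is not CERES-EGC.

        # Morris elementary-effects screening on a toy 4-factor response.
        import numpy as np

        def morris_trajectory(k, p, rng):
            """One Morris trajectory on a p-level grid in [0, 1]^k (even p)."""
            delta = p / (2.0 * (p - 1))
            # base levels chosen so base + delta stays inside [0, 1]
            base = rng.integers(0, p // 2, size=k) / (p - 1)
            order = rng.permutation(k)
            points, x = [base.copy()], base.copy()
            for i in order:
                x = x.copy()
                x[i] += delta                 # step factor i by +delta
                points.append(x)
            return np.array(points), order, delta

        def morris_screen(f, k, p=4, r=20, seed=0):
            rng = np.random.default_rng(seed)
            ee = [[] for _ in range(k)]
            for _ in range(r):
                pts, order, delta = morris_trajectory(k, p, rng)
                y = f(pts)
                for step, i in enumerate(order):
                    ee[i].append((y[step + 1] - y[step]) / delta)
            ee = np.array(ee)
            return np.abs(ee).mean(axis=1), ee.std(axis=1)   # mu*, sigma

        def toy_model(x):                     # invented stand-in response
            return 3.0 * x[:, 0] + x[:, 1] ** 2 + x[:, 0] * x[:, 2] + 0.1 * x[:, 3]

        mu_star, sigma = morris_screen(toy_model, k=4)
        for i, (m, s) in enumerate(zip(mu_star, sigma)):
            print(f"factor {i}: mu* = {m:.2f}, sigma = {s:.2f}")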

  7. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  8. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Science.gov (United States)

    Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
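
    The adjoint machinery itself is model-specific, but the SVD analysis step described above is generic. As a small stand-in, the sketch below assembles the Jacobian of a toy distributed-parameter model by finite differences (one run per parameter, rather than the single adjoint run the paper advocates) and inspects its singular values; all names and numbers are invented.

        # Finite-difference Jacobian of a toy distributed-parameter model + SVD.
        import numpy as np

        def toy_hydrograph(params):
            """Toy map from 20 distributed roughness values to 50 discharge ordinates."""
            t = np.linspace(0.0, 5.0, 50)
            lag = 0.1 * np.arange(params.size)
            return np.sum([p * np.exp(-(t - l) ** 2) for p, l in zip(params, lag)], axis=0)

        p0 = np.full(20, 1.0)
        eps = 1e-6
        J = np.empty((50, p0.size))
        for j in range(p0.size):                 # one model run per column
            dp = p0.copy(); dp[j] += eps
            J[:, j] = (toy_hydrograph(dp) - toy_hydrograph(p0)) / eps

        s = np.linalg.svd(J, compute_uv=False)
        explained = np.cumsum(s**2) / np.sum(s**2)
        print("leading singular values:", np.round(s[:5], 3))
        print("variance captured by first 3 directions:", f"{explained[2]:.1%}")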

  9. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in return affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
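
    As a flavour of the second method, the sketch below puts a naive residual-bootstrap prediction interval around a corrected prediction. The values are invented and the resampling is deliberately simplified; the study's actual approach couples Monte Carlo for MODFLOW with bootstrapping of the ANN.

        # Naive residual-bootstrap prediction interval for a corrected prediction.
        import numpy as np

        rng = np.random.default_rng(5)
        physical_pred = 10.0                     # e.g. a physically-based head prediction
        ann_correction = -0.6                    # error-correcting model output
        residuals = rng.normal(0.0, 0.3, 200)    # stand-in for ANN training residuals

        boot = physical_pred + ann_correction + rng.choice(residuals, 5000, replace=True)
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"corrected prediction: {physical_pred + ann_correction:.2f}")
        print(f"bootstrap 95% prediction interval: [{lo:.2f}, {hi:.2f}]")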

  10. Analysing adverse events by time-to-event models: the CLEOPATRA study.

    Science.gov (United States)

    Proctor, Tanja; Schumacher, Martin

    2016-07-01

    When analysing primary and secondary endpoints in a clinical trial with patients suffering from a chronic disease, statistical models for time-to-event data are commonly used and accepted. This is in contrast to the analysis of data on adverse events where often only a table with observed frequencies and corresponding test statistics is reported. An example is the recently published CLEOPATRA study where a three-drug regimen is compared with a two-drug regimen in patients with HER2-positive first-line metastatic breast cancer. Here, as described earlier, primary and secondary endpoints (progression-free and overall survival) are analysed using time-to-event models, whereas adverse events are summarized in a simple frequency table, although the duration of study treatment differs substantially. In this paper, we demonstrate the application of time-to-event models to first serious adverse events using the data of the CLEOPATRA study. This will cover the broad range between a simple incidence rate approach over survival and competing risks models (with death as a competing event) to multi-state models. We illustrate all approaches by means of graphical displays highlighting the temporal dynamics and compare the obtained results. For the CLEOPATRA study, the resulting hazard ratios are all in the same order of magnitude. But the use of time-to-event models provides valuable and additional information that would potentially be overlooked by only presenting incidence proportions. These models adequately address the temporal dynamics of serious adverse events as well as death of patients. Copyright © 2016 John Wiley & Sons, Ltd.
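
    The contrast drawn above can be reproduced on synthetic data: a crude frequency summary ignores differing treatment durations, whereas an incidence rate and a Kaplan-Meier estimate of time to first serious adverse event account for exposure and censoring. The sketch below (not the CLEOPATRA data) computes all three.

        # Frequency summary vs. time-to-event summaries of adverse events.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 300
        followup = rng.uniform(1.0, 30.0, n)              # months on treatment
        event_time = rng.exponential(40.0, n)             # time to first serious AE
        observed = np.minimum(event_time, followup)
        event = event_time <= followup                    # False = censored

        print(f"crude proportion with event: {event.mean():.2%}")
        print(f"incidence rate: {event.sum() / observed.sum():.4f} events/person-month")

        # Kaplan-Meier estimate of being event-free at 12 months
        order = np.argsort(observed)
        t_sorted, e_sorted = observed[order], event[order]
        at_risk = n - np.arange(n)
        km = np.cumprod(1.0 - e_sorted / at_risk)
        print(f"K-M event-free probability at 12 months: "
              f"{km[t_sorted <= 12.0][-1]:.3f}")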

  11. Models and analyses for inertial-confinement fusion-reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report.

  12. Sensitivity Analysis Of Hydrological Parameters In Modeling Flow And Transport In The Unsaturated Zone Of Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Keni; Wu, Yu-Shu; Houseworth, James E

    2006-02-01

    The unsaturated fractured volcanic deposits at Yucca Mountain in Nevada, USA, have been intensively investigated as a possible repository site for storing high-level radioactive waste. Field studies at the site have revealed that there exist large variabilities in hydrological parameters over the spatial domain of the mountain. Systematic analyses of hydrological parameters using a site-scale three-dimensional unsaturated zone (UZ) flow model have been undertaken. The main objective of the sensitivity analyses was to evaluate the effects of uncertainties in hydrologic parameters on modeled UZ flow and contaminant transport results. Sensitivity analyses were carried out relative to fracture and matrix permeability and capillary strength (van Genuchten {alpha}) through variation of these parameter values by one standard deviation from the base-case values. The parameter variation resulted in eight parameter sets. Modeling results for the eight UZ flow sensitivity cases have been compared with field observed data and simulation results from the base-case model. The effects of parameter uncertainties on the flow fields were evaluated through comparison of results for flow and transport. In general, this study shows that uncertainties in matrix parameters cause larger uncertainty in simulated moisture flux than corresponding uncertainties in fracture properties for unsaturated flow through heterogeneous fractured rock.

  13. Feedbacks, climate sensitivity, and the limits of linear models

    Science.gov (United States)

    Rugenstein, M.; Knutti, R.

    2015-12-01

    The term "feedback" is used ubiquitously in climate research, but implies varied meanings in different contexts. From a specific process that locally affects a quantity, to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120 member) ensemble GCM and EMIC step forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method overcoming internal variability and initial condition problems we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing feedback framework and a better quantification of feedbacks on various timescales and spatial scales remains a high priority in order to better understand past and predict future changes in the climate system.

  14. Clinical exchange: one model to achieve culturally sensitive care.

    Science.gov (United States)

    Scholes, J; Moore, D

    2000-03-01

    This paper reports on a clinical exchange programme that formed part of a pre-registration European nursing degree run by three collaborating institutions in England, Holland and Spain. The course included common and shared learning, two summer schools, and the development of a second language before the students went on a three-month clinical placement in one of the other base institutions' clinical environments. The aim of the course was to enable students to become culturally sensitive carers. This was achieved by developing a programme based on transcultural nursing principles in theory and practice. Data were gathered by interview, focus groups and questionnaires from 79 exchange students, following the strategies of illuminative evaluation. The paper examines how the aims of the course were met, the factors that inhibited the attainment of certain goals, and how the acquisition of a second language influenced the students' learning about nursing. A model is presented to illustrate the process of transformative learning from the exchange experience.

  15. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina (E-mail: chammock@milkyway.gsfc.nasa.gov); Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.

  16. Beyond sensitivity

    DEFF Research Database (Denmark)

    Stott, Iain; Hodgson, David James; Townley, Stuart

    2012-01-01

    1. Perturbation analyses of population models are integral to population management: such analyses evaluate how changes in vital rates of members of the population translate to changes in population dynamics. Sensitivity and elasticity analyses of long-term (asymptotic) growth are popular … formulae for the transfer function of population inertia, which describes nonlinear perturbation curves of transient population dynamics. The method comfortably fits into wider frameworks for analytical study of transient dynamics, and for perturbation analyses that use the transfer function approach. 3. … We use case studies to illustrate how the transfer function of population inertia may be used in population management. These show that strategies based solely on asymptotic perturbation analyses can cause undesirable transient dynamics and/or fail to exploit desirable transient dynamics …

  17. Dynamics and spatial structure of ENSO from re-analyses versus CMIP5 models

    Science.gov (United States)

    Serykh, Ilya; Sonechkin, Dmitry

    2016-04-01

    Based on a mathematical idea about the so-called strange nonchaotic attractor (SNA) in quasi-periodically forced dynamical systems, the currently available re-analysis data are considered. It is found that the El Niño - Southern Oscillation (ENSO) is driven not only by the seasonal heating, but also by three more external periodicities (incommensurate with the annual period) associated with the ~18.6-year lunar-solar nutation of the Earth's rotation axis, the ~11-year sunspot activity cycle and the ~14-month Chandler wobble in the Earth's pole motion. Because their periods are incommensurate, the four forcings never recur at the same relative phases. As a result, the ENSO time series look very complex (strange in mathematical terms) but nonchaotic. The power spectra of ENSO indices reveal numerous peaks located at periods that are multiples of the above periodicities, as well as at their sub- and super-harmonics. In spite of this complexity, a mutual order seems to be inherent in the ENSO time series and their spectra. This order reveals itself in the existence of a scaling of the power spectrum peaks and respective rhythms in the ENSO dynamics that resemble the power spectrum and dynamics of an SNA. In principle, this means there is no limit to ENSO predictability; in practice, it opens the possibility of forecasting ENSO several years ahead. Global spatial structures of anomalies during El Niño and power spectra of ENSO indices from re-analyses are compared with the respective output quantities of the CMIP5 climate models (the Historical experiment). It is found that the models reproduce global spatial structures of the near-surface temperature and sea level pressure anomalies during El Niño very similar to those fields in the re-analyses considered. But the power spectra of the ENSO indices from the CMIP5 models show no peaks at the same periods as the re-analysis power spectra. We suppose that it is possible to improve modeled
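
    The spectral signature described here is easy to reproduce on toy data. The sketch below builds a synthetic quasi-periodically forced signal with the four periodicities named in the abstract (the amplitudes and the weak nonlinear interaction are arbitrary assumptions) and lists the strongest spectral peaks, including combination tones.

```python
# Toy quasi-periodically forced signal with the four periodicities named above
# (1-yr seasonal, 18.6-yr nutation, 11-yr solar, ~14-month Chandler).
import numpy as np

dt = 1.0 / 12.0                          # monthly sampling, in years
t = np.arange(0.0, 512.0, dt)            # a long synthetic record
periods = [1.0, 18.6, 11.0, 14.0 / 12.0]
signal = sum(np.cos(2 * np.pi * t / p) for p in periods)
# a weak quadratic interaction generates sum/difference combination tones
signal += 0.3 * np.cos(2 * np.pi * t) * np.cos(2 * np.pi * t / 11.0)

power = np.abs(np.fft.rfft(signal)) ** 2
freq = np.fft.rfftfreq(t.size, d=dt)     # cycles per year

for i in np.argsort(power)[-8:][::-1]:   # strongest spectral peaks
    if freq[i] > 0:
        print(f"period ~ {1.0 / freq[i]:6.2f} yr, relative power {power[i]:.3g}")
```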

  18. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models.

    Science.gov (United States)

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-05-20

    Bioequivalence or interaction trials are commonly run as crossover designs and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or changing between periods. We use the expected standard errors of the treatment effect to compute the power for the Wald test of comparison or equivalence and the number of subjects needed for a given power. We perform various simulations mimicking crossover two-period trials to show the relevance of these developments. We then apply these developments to the design of a crossover pharmacokinetic study of amoxicillin in piglets and implement them in the new version 3.2 of the R function PFIM.
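
    A minimal sketch of the power and sample-size step described above, assuming the expected standard error of the treatment effect is already available (e.g., from a population Fisher information matrix evaluation such as PFIM provides). The effect size, standard error and 1/√N scaling below are illustrative assumptions, not values from the paper.

```python
# Power of the two-sided Wald comparison test given an expected SE of the
# treatment effect, and the N needed for a target power.
from math import sqrt
from scipy.stats import norm

def wald_power(beta, se, alpha=0.05):
    """Approximate two-sided power for testing beta = 0 with a Wald test."""
    z = norm.ppf(1 - alpha / 2)
    x = abs(beta) / se
    return norm.cdf(x - z) + norm.cdf(-x - z)

def n_for_power(beta, se_n0, n0, target=0.9, alpha=0.05):
    """Smallest N reaching `target` power, assuming SE scales as 1/sqrt(N)."""
    n = n0
    while wald_power(beta, se_n0 * sqrt(n0 / n), alpha) < target:
        n += 1
    return n

print(wald_power(0.1, 0.06))         # power of the evaluated design
print(n_for_power(0.1, 0.06, 24))    # subjects needed for 90% power
```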

  19. An age-dependent model to analyse the evolutionary stability of bacterial quorum sensing.

    Science.gov (United States)

    Mund, A; Kuttler, C; Pérez-Velázquez, J; Hense, B A

    2016-09-21

    Bacterial communication is enabled through the collective release and sensing of signalling molecules in a process called quorum sensing. Cooperative processes can easily be destabilized by the appearance of cheaters, who contribute little or nothing to the production of common goods. This applies especially to planktonic cultures. In this study, we analyse the dynamics of bacterial quorum sensing and its evolutionary stability under two levels of cooperation, namely signal and enzyme production. The model accounts for mutation rates and for switches between the planktonic and biofilm states of growth. We present a mathematical approach to model these dynamics using age-dependent colony models. We explore the conditions under which cooperation is stable and find that spatial structuring can lead to long-term scenarios such as coexistence or bistability, depending on the non-linear combination of different parameters like death rates and production costs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge about kinetic, technical and economic parameters, are discussed. Several approaches for multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.

  1. Assessing Cognitive Processes with Diffusion Model Analyses: A Tutorial based on fast-dm-30

    Directory of Open Access Journals (Sweden)

    Andreas eVoss

    2015-03-01

    Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become increasingly popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about the different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modelling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss & Voss, 2007) for diffusion model data analysis. The most important improvement of the new fast-dm version is the possibility of choosing between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in their applicability to different data sets.
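
    Not fast-dm itself, but a minimal Euler-Maruyama simulation of the diffusion process it fits, using the standard Ratcliff parameter names (drift v, threshold separation a, relative start point z, non-decision time t0); all parameter values are illustrative.

```python
# Simulate single diffusion-model trials by Euler-Maruyama integration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(v=1.0, a=1.2, z=0.5, t0=0.3, dt=0.001, s=1.0):
    """Return (response, reaction_time) for one simulated binary decision."""
    x, t = z * a, 0.0                 # start between the two thresholds
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1, t + t0) if x >= a else (0, t + t0)

trials = [simulate_trial() for _ in range(2000)]
rts_upper = [rt for resp, rt in trials if resp == 1]
print(f"upper-boundary rate: {len(rts_upper) / len(trials):.3f}")
print(f"mean RT at the upper boundary: {np.mean(rts_upper):.3f} s")
```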

  2. Evaluating a Skin Sensitization Model and Examining Common Assumptions of Skin Sensitizers (QSAR conference)

    Science.gov (United States)

    Skin sensitization is an adverse outcome that has been well studied over many decades. Knowledge of the mechanism of action was recently summarized using the Adverse Outcome Pathway (AOP) framework as part of the OECD work programme (OECD, 2012). Currently there is a strong focus...

  3. Evaluating a Skin Sensitization Model and Examining Common Assumptions of Skin Sensitizers (ASCCT meeting)

    Science.gov (United States)

    Skin sensitization is an adverse outcome that has been well studied over many decades. It was summarized using the adverse outcome pathway (AOP) framework as part of the OECD work programme (OECD, 2012). Currently there is a strong focus on how AOPs can be applied for different r...

  4. Models of population-based analyses for data collected from large extended families.

    Science.gov (United States)

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses, provided the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure, such as the independent, exchangeable and unstructured covariance structures. However, these simple covariance structures may not be appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated and share an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models that incorporate marginal-model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and for estimating population characteristics in epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for, and capable of, achieving the aim.
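
    A minimal sketch of a marginal-model analysis with an exchangeable within-family covariance, using the GEE implementation in statsmodels. The data frame and variable names are synthetic and hypothetical; this simple exchangeable structure is exactly the kind the abstract argues may be too crude for large extended families.

```python
# GEE marginal model with an exchangeable within-family covariance structure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_fam, fam_size = 100, 6
fam = np.repeat(np.arange(n_fam), fam_size)
shared = rng.normal(0.0, 1.0, n_fam)[fam]          # shared family effect
df = pd.DataFrame({
    "family_id": fam,
    "age": rng.uniform(20, 70, fam.size),
    "sex": rng.integers(0, 2, fam.size),
})
df["bmi"] = (22 + 0.05 * df["age"] + 0.5 * df["sex"]
             + shared + rng.normal(0.0, 2.0, fam.size))

model = smf.gee("bmi ~ age + sex", groups="family_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())
```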

  5. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
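
    A sketch of the conversion-model idea on synthetic paired samples; the paper's exact regression specification is not reproduced here, so a log-log linear fit is used as one plausible choice.

```python
# Regression-based conversion between Sigma-PCB values from two congener sets.
import numpy as np

rng = np.random.default_rng(2)
sum209 = rng.lognormal(mean=3.0, sigma=1.0, size=40)    # full 209-congener sum
sum119 = 0.93 * sum209 * rng.lognormal(0.0, 0.05, 40)   # 119-congener subset

b1, b0 = np.polyfit(np.log(sum119), np.log(sum209), deg=1)
predicted = np.exp(b0) * sum119 ** b1                   # converted Sigma-PCB

rpd = 100 * np.abs(predicted - sum209) / ((predicted + sum209) / 2)
print(f"slope = {b1:.3f}, intercept = {b0:.3f}, mean RPD = {rpd.mean():.1f}%")
```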

  6. Numerical daemons in hydrological modeling: Effects on uncertainty assessment, sensitivity analysis and model predictions

    Science.gov (United States)

    Kavetski, D.; Clark, M. P.; Fenicia, F.

    2011-12-01

    Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc, are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
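
    A toy illustration (not any specific hydrological model) of how the start-of-step versus end-of-step flux choice alone changes results, using explicit versus implicit Euler on a single linear reservoir.

```python
# Start-of-step vs end-of-step fluxes on a linear reservoir dS/dt = p - k*S.
import numpy as np

k, p, S0, dt, nsteps = 0.5, 1.0, 10.0, 1.0, 5

def step_explicit(S):   # flux evaluated at the start of the step
    return S + dt * (p - k * S)

def step_implicit(S):   # flux evaluated at the end of the step
    return (S + dt * p) / (1.0 + dt * k)

Se = Si = S0
for _ in range(nsteps):
    Se, Si = step_explicit(Se), step_implicit(Si)

exact = p / k + (S0 - p / k) * np.exp(-k * dt * nsteps)
print(f"explicit {Se:.3f}, implicit {Si:.3f}, exact {exact:.3f}")
# With dt*k not small, two seemingly trivial implementation choices give
# clearly different storages -- the kind of "numerical daemon" described above.
```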

  7. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    Science.gov (United States)

    Caraballo, R.

    2016-11-01

    According to conventional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid under severe magnetic storm conditions. GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid response to changes in ground conductivity and resistances shows similar results, to a lesser extent. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
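
    A minimal sketch of the standard engineering GIC model, in which the quasi-DC current at a node is a linear function of the horizontal geoelectric field, GIC = aEx + bEy. The coefficients and field values below are invented, not values for the Uruguayan grid.

```python
# Linear node-level GIC model: GIC = a*Ex + b*Ey (all numbers assumed).
substations = {            # (a, b) network coefficients, A per (V/km)
    "SS_A": (12.0, -5.0),
    "SS_B": (-3.5, 20.0),
    "SS_C": (8.0, 7.5),
}

Ex, Ey = 0.8, -1.5         # assumed storm-time geoelectric field, V/km

for name, (a, b) in substations.items():
    gic = a * Ex + b * Ey  # current through the substation grounding point
    print(f"{name}: GIC = {gic:+.1f} A")
```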

  8. Analysing animal social network dynamics: the potential of stochastic actor-oriented models.

    Science.gov (United States)

    Fisher, David N; Ilany, Amiyaal; Silk, Matthew J; Tregenza, Tom

    2017-03-01

    Animals are embedded in dynamically changing networks of relationships with conspecifics. These dynamic networks are fundamental aspects of their environment, creating selection on behaviours and other traits. However, most social network-based approaches in ecology are constrained to considering networks as static, despite several calls for such analyses to become more dynamic. There are a number of statistical analyses developed in the social sciences that are increasingly being applied to animal networks, of which stochastic actor-oriented models (SAOMs) are a principal example. SAOMs are a class of individual-based models designed to model transitions in networks between discrete time points, as influenced by network structure and covariates. It is not clear, however, how useful such techniques are to ecologists, and whether they are suited to animal social networks. We review the recent applications of SAOMs to animal networks, outlining findings and assessing the strengths and weaknesses of SAOMs when applied to animal rather than human networks. We go on to highlight the types of ecological and evolutionary processes that SAOMs can be used to study. SAOMs can include effects and covariates for individuals, dyads and populations, which can be constant or variable. This allows for the examination of a wide range of questions of interest to ecologists. However, high-resolution data are required, meaning SAOMs will not be usable in all study systems. It remains unclear how robust SAOMs are to missing data and uncertainty around social relationships. Ultimately, we encourage the careful application of SAOMs in appropriate systems, with dynamic network analyses likely to prove highly informative. Researchers can then extend the basic method to tackle a range of existing questions in ecology and explore novel lines of questioning. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.

  9. Power analyses for negative binomial models with application to multiple sclerosis clinical trials.

    Science.gov (United States)

    Rettiganti, Mallik; Nagaraja, H N

    2012-01-01

    We use negative binomial (NB) models for the magnetic resonance imaging (MRI)-based brain lesion count data from parallel group (PG) and baseline versus treatment (BVT) trials for relapsing-remitting multiple sclerosis (RRMS) patients, and describe the associated likelihood ratio (LR), score, and Wald tests. We perform power analyses and sample size estimation using the simulated percentiles of the exact distribution of the test statistics for the PG and BVT trials. When compared to the corresponding nonparametric test, the LR test results in a 30-45% reduction in sample sizes for the PG trials and a 25-60% reduction for the BVT trials.
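
    A simulation sketch of power for a two-group comparison of NB-distributed lesion counts. It uses a GLM Wald test as a stand-in for the exact-distribution approach of the abstract, and all means, dispersions and group sizes are illustrative assumptions.

```python
# Simulated power for a two-arm negative binomial count comparison.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

def nb_draw(mean, alpha, size):
    """NB counts with variance mean + alpha * mean^2 (gamma-Poisson mixture)."""
    shape = 1.0 / alpha
    return rng.poisson(rng.gamma(shape, mean / shape, size))

def power(n_per_arm, mu0=5.0, ratio=0.5, alpha_disp=1.0, nsim=500):
    hits = 0
    for _ in range(nsim):
        y = np.concatenate([nb_draw(mu0, alpha_disp, n_per_arm),
                            nb_draw(mu0 * ratio, alpha_disp, n_per_arm)])
        X = sm.add_constant(np.repeat([0.0, 1.0], n_per_arm))
        fit = sm.GLM(y, X,
                     family=sm.families.NegativeBinomial(alpha=alpha_disp)).fit()
        hits += fit.pvalues[1] < 0.05
    return hits / nsim

print(power(30))    # estimated power with 30 patients per arm
```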

  10. Analysing and modelling battery drain of 3G terminals due to port scan attacks

    OpenAIRE

    Pascual Trigos, Mar

    2010-01-01

    This thesis identifies a threat to 3G mobile phones: the eventual draining of the terminal's battery by undesired data traffic. The objectives of the thesis are to analyse the battery drain of 3G mobile phones caused by uplink and downlink traffic, and to model the battery drain. First, we describe how a mobile phone can be made to increase its consumption, and therefore to shorten its battery lifetime. Concretely, we focus on data traffic. This traffic ca...

  11. Wind climate estimation using WRF model output: method and model sensitivities over the sea

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Vincent, Claire Louise; Peña, Alfredo

    2015-01-01

    High-quality tall mast and wind lidar measurements over the North and Baltic Seas are used to validate the wind climatology produced from winds simulated by the Weather Research and Forecasting (WRF) model in analysis mode. Biases in annual mean wind speed between model and observations at heights around 100 m are smaller than 3.2% at offshore sites, except for those that are affected by the wake of a wind farm or the coastline. These biases are smaller than those obtained by using winds directly from the reanalysis. We study the sensitivity of the WRF-simulated wind climatology to various model setup parameters. The results of the year-long sensitivity simulations show that the long-term mean wind speed simulated by the WRF model offshore in the region studied is quite insensitive to the global reanalysis, the number of vertical levels, and the horizontal resolution of the sea surface …

  12. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches for analysing and integrating such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT, and to analyse the area-related reduction of flood risks using geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction, including a selection and combination of appropriate FReT, as a basis for deriving synthetic depth-damage functions. Depending on the real exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is demonstrated in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.

  13. Modeling of high homologous temperature deformation behavior for stress and life-time analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krempl, E. [Rensselaer Polytechnic Institute, Troy, NY (United States)

    1997-12-31

    Stress and lifetime analyses need realistic and accurate constitutive models for the inelastic deformation behavior of engineering alloys at low and high temperatures. Conventional creep and plasticity models have fundamental difficulties in reproducing high homologous temperature behavior. To improve the modeling capabilities, "unified" state variable theories were conceived. They consider all inelastic deformation to be rate-dependent and do not have separate repositories for creep and plasticity. The viscoplasticity theory based on overstress (VBO), one of the unified theories, is introduced and its properties are delineated. At high homologous temperature, where secondary and tertiary creep are observed, modeling is accomplished primarily by a static recovery term and a softening isotropic stress. At low temperatures, creep is merely a manifestation of rate dependence. The primary creep modeled at low homologous temperature is due to the rate dependence of the flow law. The model is unaltered in the transition from low to high temperature, except that the softening of the isotropic stress and the influence of the static recovery term increase with temperature.

  14. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    We evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive …

  15. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacement values and one of high values.
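
    A compact sketch of the "basis set expansion + meta-model" chain described above, with a toy time-dependent model standing in for the long-running landslide simulator and a Gaussian-process surrogate per retained component; Sobol' index estimation on the cheap surrogate is only indicated, not implemented.

```python
# Reduce functional outputs with PCA, then emulate each component score.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)

def slow_model(theta, t):                 # placeholder for the real simulator
    a, b = theta
    return a * np.log1p(t) + b * np.sin(0.1 * t)

t = np.linspace(0.0, 100.0, 200)
thetas = rng.uniform([0.5, 0.0], [2.0, 1.0], size=(40, 2))  # a few tens of runs
Y = np.array([slow_model(th, t) for th in thetas])          # 40 x 200 outputs

pca = PCA(n_components=2).fit(Y)          # dominant modes of temporal variation
scores = pca.transform(Y)                 # 40 x 2 component scores

surrogates = [GaussianProcessRegressor().fit(thetas, scores[:, j])
              for j in range(scores.shape[1])]
# Each surrogate is now cheap enough to sample thousands of times, which is
# what a Sobol' index estimator needs; the full model is never re-run.
print([s.predict(np.array([[1.0, 0.5]]))[0] for s in surrogates])
```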

  16. Sensitivity studies of unsaturated groundwater flow modeling for groundwater travel time calculations at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Altman, S.J.; Ho, C.K.; Arnold, B.W.; McKenna, S.A.

    1995-12-31

    Unsaturated flow has been modeled through four cross-sections at Yucca Mountain, Nevada, for the purpose of determining groundwater particle travel times from the potential repository to the water table. This work will be combined with the results of flow modeling in the saturated zone for the purpose of evaluating the suitability of the potential repository under the criteria of 10CFR960. One criterion states, in part, that the groundwater travel time (GWTT) from the repository to the accessible environment must exceed 1,000 years along the fastest path of likely and significant radionuclide travel. Sensitivity analyses have been conducted for one geostatistical realization of one cross-section for the purpose of (1) evaluating the importance of hydrological parameters having some uncertainty and (2) examining conceptual models of flow by altering the numerical implementation of the conceptual model (dual permeability (DK) and the equivalent continuum model (ECM)). Results of comparisons of the ECM and DK model are also presented in Ho et al.

  17. Sensitivity studies of unsaturated groundwater flow modeling for groundwater travel time calculations at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Altman, S.J.; Ho, C.K.; Arnold, B.W.; McKenna, S.A. [Sandia National Labs., Albuquerque, NM (United States)

    1996-12-01

    Unsaturated flow has been modeled through four cross-sections at Yucca Mountain, Nevada, for the purpose of determining groundwater particle travel times from the potential repository to the water table. This work will be combined with the results of flow modeling in the saturated zone for the purpose of evaluating the suitability of the potential repository under the criteria of 10CFR960. One criterion states, in part, that the groundwater travel time (GWTT) from the repository to the accessible environment must exceed 1,000 years along the fastest path of likely and significant radionuclide travel. Sensitivity analyses have been conducted for one geostatistical realization of one cross-section for the purpose of (1) evaluating the importance of hydrological parameters having some uncertainty (infiltration, fracture-matrix connectivity, fracture frequency, and matrix air entry pressure or van Genuchten α); and (2) examining conceptual models of flow by altering the numerical implementation of the conceptual model (dual permeability (DK) and the equivalent continuum model (ECM)). Results of comparisons of the ECM and DK model are also presented in Ho et al.

  18. Test and Sensitivity Analysis of Hydrological Modeling in the Coupled WRF-Urban Modeling System

    Science.gov (United States)

    Wang, Z.; yang, J.

    2013-12-01

    Rapid urbanization has emerged as the source of many adverse effects that challenge the environmental sustainability of cities under changing climatic patterns. One essential key to addressing these challenges is to physically resolve the dynamics of urban-land-atmosphere interactions. To investigate the impact of urbanization on regional climate, a physically based single-layer urban canopy model (SLUCM) has been developed and implemented into the Weather Research and Forecasting (WRF) platform. However, due to the lack of a realistic representation of urban hydrological processes, simulation of urban climatology by the current coupled WRF-SLUCM is inevitably inadequate. Aiming at improving the accuracy of simulations, we recently implemented urban hydrological processes into the model, including (1) anthropogenic latent heat, (2) urban irrigation, (3) evaporation over impervious surfaces, and (4) the urban oasis effect. In addition, we couple the green roof system into the model to verify its capacity to alleviate the urban heat island effect at regional scale. Offline tests driven by different meteorological forcings show that the enhanced model is more accurate in predicting turbulent fluxes arising from built terrain. Though the coupled WRF-SLUCM has been extensively tested against various field measurement datasets, an accurate input parameter space needs to be specified for good model performance. As realistic measurements of all input parameters to the modeling framework are rarely possible, understanding the model sensitivity to individual parameters is essential to determine the relative importance of parameter uncertainty to model performance. Thus we further use an advanced Monte Carlo approach to quantify the relative sensitivity of the input parameters of the hydrological model. In particular, the performance of two widely used soil hydraulic models, namely the van Genuchten model (based on generic soil physics) and an empirical model (viz. the CHC model currently adopted in WRF

  19. Computational model of an infant brain subjected to periodic motion: simplified modelling and Bayesian sensitivity analysis.

    Science.gov (United States)

    Batterbee, D C; Sims, N D; Becker, W; Worden, K; Rowson, J

    2011-11-01

    Non-accidental head injury in infants, or shaken baby syndrome, is a highly controversial and disputed topic. Biomechanical studies often suggest that shaking alone cannot cause the classical symptoms, yet many medical experts believe the contrary. Researchers have turned to finite element modelling for a more detailed understanding of the interactions between the brain, skull, cerebrospinal fluid (CSF), and surrounding tissues. However, the uncertainties in such models are significant; these can arise from theoretical approximations, lack of information, and inherent variability. Consequently, this study presents an uncertainty analysis of a finite element model of a human head subject to shaking. Although the model geometry was greatly simplified, fluid-structure-interaction techniques were used to model the brain, skull, and CSF using a Eulerian mesh formulation with penalty-based coupling. Uncertainty and sensitivity measurements were obtained using Bayesian sensitivity analysis, which is a technique that is relatively new to the engineering community. Uncertainty in nine different model parameters was investigated for two different shaking excitations: sinusoidal translation only, and sinusoidal translation plus rotation about the base of the head. The level and type of sensitivity in the results was found to be highly dependent on the excitation type.

  20. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA

    2017-05-01

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of the model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty sources for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally, driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed parameters.

  1. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses.

    Science.gov (United States)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany; Thabane, Lehana; Gluud, Christian

    2011-12-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses, depending on whether a normal distribution or a t-distribution of the intervention effect estimates was assumed. Large discrepancies were observed for the measures of degree of heterogeneity when comparing DL with each of the four alternative estimators. Estimating the degree (percentage) of heterogeneity on the basis of less biased between-trial variance estimators seems preferable to current practice. Disclosing the inferential sensitivity of p-values and CIs may also be necessary when borderline significant results have substantial impact on the conclusion. Copyright © 2012 John Wiley & Sons, Ltd.
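
    For reference, a worked numeric sketch of the DerSimonian-Laird moment estimator of the between-trial variance that the abstract compares against alternatives; the trial effects and standard errors are made-up inputs.

```python
# DerSimonian-Laird estimate of tau^2 and the random-effects pooled estimate.
import numpy as np

y  = np.array([0.80, 0.10, 0.60, -0.20, 0.40])   # trial effect estimates
se = np.array([0.12, 0.20, 0.15, 0.10, 0.25])    # their standard errors

w = 1.0 / se**2                                   # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed)**2)                  # Cochran's Q
k = y.size

tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
y_random = np.sum(w_re * y) / np.sum(w_re)
print(f"tau^2 = {tau2:.3f}, pooled effect (random-effects) = {y_random:.3f}")
```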

  2. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) from a dynamic systems perspective. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status, would have a higher probability of transitioning into the higher ability group.

  3. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available.

  4. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  5. A MURINE MODEL FOR LOW MOLECULAR WEIGHT CHEMICALS: DIFFERENTIATION OF RESPIRATORY SENSITIZERS (TMA) FROM CONTACT SENSITIZERS (DNFB)

    Science.gov (United States)

    Exposure to low molecular weight (LMW) chemicals contributes to both dermal and respiratory sensitization and is an important occupational health problem. Our goal was to establish an in vivo murine model for hazard identification of LMW chemicals that have the potential to indu...

  6. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the coupled DEM-CFD approach are compared to those published in the literature and to those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock Mechanics, 2014) in a companion paper, obtained through an ALE-FEM method. Analyses performed along two cross-sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross-sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for modelling the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  7. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;

    2009-01-01

    … an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between … and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random …
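
    A self-contained numeric sketch of the diversity measure D2 as defined above (the relative variance reduction when the pooled-estimate model changes from random-effects to fixed-effect), shown next to I2 on made-up trial data; with the DL estimate of tau2 the inequality D2 ≥ I2 holds, as the abstract states.

```python
# Diversity D2 alongside inconsistency I2 on synthetic trial data.
import numpy as np

y  = np.array([0.80, 0.10, 0.60, -0.20, 0.40])
se = np.array([0.12, 0.20, 0.15, 0.10, 0.25])

w = 1.0 / se**2
Q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
k = y.size
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL

v_fixed  = 1.0 / np.sum(w)                        # variance, fixed-effect pool
v_random = 1.0 / np.sum(1.0 / (se**2 + tau2))     # variance, random-effects pool

D2 = (v_random - v_fixed) / v_random
I2 = max(0.0, (Q - (k - 1)) / Q)
print(f"D2 = {D2:.3f} >= I2 = {I2:.3f}")          # D2 >= I2, as stated above
```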

  8. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough system (MSPT) uses molten salt as heat transfer fluid (HTF) instead of synthetic oil. A demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, together with detailed analyses of the relation between flow rate and temperature difference on the receiver's metal tube and of the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are shown.

  9. IATA-Bayesian Network Model for Skin Sensitization Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Since the publication of the Adverse Outcome Pathway (AOP) for skin sensitization, there have been many efforts to develop systematic approaches to integrate the...

  10. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station blackout (SBO) accident analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  11. Evaluation of hydrological models for scenario analyses: signal-to-noise-ratio between scenario effects and model uncertainty

    Directory of Open Access Journals (Sweden)

    H. Bormann

    2005-01-01

    Many model applications suffer from the fact that, although it is well known that model application implies different sources of uncertainty, there is no objective criterion to decide whether a model is suitable for a particular application or not. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations, which enables the modeller to decide objectively whether a model is suitable for scenario analysis studies. The index is called the "signal-to-noise-ratio", and it is applied to an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, and comparing water availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable for predicting long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
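
    A minimal sketch of the signal-to-noise idea: compare the scenario-induced change in a water-availability indicator against the spread produced by model and parameter uncertainty. The numbers are invented, and the paper's exact definition of the ratio may differ in detail.

```python
# Scenario effect (signal) vs model-uncertainty spread (noise).
import numpy as np

baseline = np.array([410.0, 395.0, 430.0, 405.0, 420.0])  # mm/yr, uncertainty runs
scenario = np.array([360.0, 345.0, 380.0, 350.0, 372.0])  # same ensemble, scenario

signal = abs(scenario.mean() - baseline.mean())           # scenario effect
noise = np.std(np.concatenate([baseline - baseline.mean(),
                               scenario - scenario.mean()]))

print(f"signal = {signal:.1f} mm/yr, noise = {noise:.1f} mm/yr, "
      f"SNR = {signal / noise:.2f}")
# SNR well above 1 suggests the model is usable for this scenario analysis;
# SNR near or below 1 means scenario effects drown in model uncertainty.
```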

  12. Electromechanical model of a resonating nano-cantilever-based sensor for high-resolution and high-sensitivity mass detection

    DEFF Research Database (Denmark)

    Abadal, G.; Davis, Zachary James; Helbo, Bjarne;

    2001-01-01

    A simple linear electromechanical model for an electrostatically driven resonating cantilever is derived. The model has been developed in order to determine dynamic quantities such as the capacitive current flowing through the cantilever-driver system at the resonance frequency, and it allows us to calculate static magnitudes such as the position and voltage of collapse or the voltage-versus-deflection characteristic. The model is used to demonstrate the theoretical sensitivity on the attogram scale of a mass sensor based on a nanometre-scale cantilever, and to analyse the effect of an extra feedback loop …
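
    A back-of-envelope sketch of why a nanometre-scale cantilever reaches attogram-scale mass sensitivity, using the standard resonator relation Δm ≈ 2·m_eff·Δf/f0; all dimensions and the resolvable frequency shift are assumed values, not taken from the paper.

```python
# Resonant mass sensing estimate: delta_m ~ 2 * m_eff * delta_f / f0.
rho = 2330.0                        # silicon density, kg/m^3
L, w, t = 2e-6, 200e-9, 100e-9      # hypothetical cantilever dimensions, m
m = rho * L * w * t                 # cantilever mass, kg
m_eff = 0.24 * m                    # effective mass of the fundamental mode

f0 = 10e6                           # assumed resonance frequency, Hz
df_min = 1.0                        # assumed resolvable frequency shift, Hz

dm_min = 2.0 * m_eff * df_min / f0  # smallest detectable added mass, kg
print(f"cantilever mass = {m * 1e21:.0f} ag, "
      f"minimum detectable mass ~ {dm_min * 1e21:.4f} ag")
```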

  13. Integrative "omic" analysis for tamoxifen sensitivity through cell based models.

    Directory of Open Access Journals (Sweden)

    Liming Weng

    It has long been observed that tamoxifen sensitivity varies among breast cancer patients. Further, ethnic differences in tamoxifen therapy between Caucasian and African American patients have also been reported. Since most studies have focused on Caucasian people, we sought to comprehensively evaluate genetic variants related to tamoxifen therapy in African-derived samples. An integrative "omic" approach developed by our group was used to investigate relationships among sensitivity to endoxifen (an active metabolite of tamoxifen), SNP genotype, and mRNA and microRNA expression in 58 HapMap YRI lymphoblastoid cell lines. We identified 50 SNPs that associate with cellular sensitivity to endoxifen through their effects on the expression of 34 genes and 30 microRNAs. Some of these findings are shared in both Caucasian and African samples, while others are unique to the African samples. Among the genes/microRNAs identified in both ethnic groups, the expression of TRAF1 is also correlated with tamoxifen sensitivity in a collection of 44 breast cancer cell lines. Further, knock-down of TRAF1 and over-expression of hsa-let-7i confirmed the roles of hsa-let-7i and TRAF1 in increasing tamoxifen sensitivity in the ZR-75-1 breast cancer cell line. Our integrative omic analysis facilitated the discovery of pharmacogenomic biomarkers that potentially affect tamoxifen sensitivity.

  14. Sensitivity Analysis of a Spatio-Temporal Avalanche Forecasting Model Based on Support Vector Machines

    Science.gov (United States)

    Matasci, G.; Pozdnoukhov, A.; Kanevski, M.

    2009-04-01

    … sensitivity analysis is to shed light on the particular abilities of the model in assessing the likelihood of avalanche releases under evolving meteorological/snowpack conditions. Both the spatial resolution (the ability to produce reliable forecasts for individual avalanche paths) and the temporal behaviour of the model are explored in detail. Based on the sensitivity analysis, the uncertainty estimation for the provided forecasts is discussed. In particular, ensembles of prediction models are run and analysed in order to estimate the variability of the provided forecast and to assess the uncertainty coming from a variety of sources: imprecise input data, uncertainty in the weather forecast, sub-optimal parameters of the prediction model, and variability in the choice of the training dataset.

  15. Sensitivity Analysis and Uncertainty Characterization of Subnational Building Energy Demand in an Integrated Assessment Model

    Science.gov (United States)

    Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.

    2012-12-01

    For example, regional stakeholders have identified a need to understand the cost and effectiveness of potential regional policies to upgrade building energy codes and equipment standards to reduce carbon emissions and save energy. This presentation discusses the application and results of fractional factorial analyses and related methods that we have used to determine the sensitivity of key benefits and costs of regional building codes and equipment efficiency standards at the state level, while also reducing the dimensionality of the downstream uncertainty characterization and propagation problem. The presentation analyzes alternative policies for regional building standards in the context of uncertain population and economic growth, carbon scenarios that represent both future atmospheric carbon loading and national emissions policies, and regional climate changes projected by a range of climate models.

  16. A model intercomparison analysing the link between column ozone and geopotential height anomalies in January

    Directory of Open Access Journals (Sweden)

    P. Braesicke

    2008-05-01

    A statistical framework to evaluate the performance of chemistry-climate models with respect to the interaction between meteorology and column ozone during northern hemisphere mid-winter, in particular January, is used. Different statistical diagnostics from four chemistry-climate models (E39C, ME4C, UMUCAM, ULAQ) are compared with the ERA-40 re-analysis. First, we analyse vertical coherence in geopotential height anomalies, as described by linear correlations between two different pressure levels (30 and 200 hPa) of the atmosphere. In addition, linear correlations between column ozone and geopotential height anomalies at 200 hPa are discussed to motivate a simple picture of the meteorological impacts on column ozone on interannual timescales. Second, we discuss characteristic spatial structures in geopotential height and column ozone anomalies, as given by their first two empirical orthogonal functions. Finally, we describe the covariance patterns between reconstructed anomalies of geopotential height and column ozone. In general we find good agreement between the models with higher horizontal resolution (E39C, ME4C, UMUCAM) and ERA-40. The Pacific-North American (PNA) pattern emerges as a useful qualitative benchmark for model performance. Models with higher horizontal resolution and a high upper boundary (ME4C and UMUCAM) show good agreement with the PNA tripole derived from ERA-40 data, including the column ozone modulation over the Pacific sector. The model with the lowest horizontal resolution (ULAQ) does not show a classic PNA pattern, and the model with the lowest upper boundary (E39C) does not capture the PNA-related column ozone variations over the Pacific sector. These discrepancies have to be taken into account when providing confidence intervals for climate change integrations.

  17. Deterministic sensitivity analysis for the numerical simulation of contaminant transport

    Energy Technology Data Exchange (ETDEWEB)

    Marchand, E

    2007-12-15

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of the probabilistic and deterministic approaches to the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms need only provide the application-specific part in a language of their choice. We also present a study of two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
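
    A minimal sketch of the SVD-of-the-derivative idea, using finite differences in place of the manual/automatic differentiation discussed in the thesis; the model and parameter values are hypothetical stand-ins.

```python
# Sketch: deterministic sensitivity analysis via SVD of the model Jacobian.
# The model here is a hypothetical map from input parameters (e.g. subsoil
# properties) to safety indicators (e.g. contaminant fluxes at outlets).
import numpy as np

def model(p):
    # Hypothetical nonlinear model: 2 outputs from 3 parameters
    return np.array([p[0]**2 + 0.5*p[1], np.exp(0.1*p[2]) - p[1]])

def jacobian(f, p, h=1e-6):
    # Forward finite differences as a stand-in for automatic differentiation
    f0 = f(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += h
        J[:, j] = (f(dp) - f0) / h
    return J

p_ref = np.array([1.0, 2.0, 3.0])
J = jacobian(model, p_ref)

# Right singular vectors give the most influential parameter directions
U, s, Vt = np.linalg.svd(J)
print("singular values:", s)
print("most influential input direction:", Vt[0])
```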

  18. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user functions to handle time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs the computation on the server side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  19. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses against the previously sequenced genomes of NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an allantoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU) while MGH 78578 is relatively avirulent.

  20. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems.

    Science.gov (United States)

    Vredenberg, Wim

    2011-02-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves are presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with multi-turnover pulses (MTFs) of variable intensity and duration. These analyses have led to the definition and formulation of rate equations that describe the sequence of primary linear electron transfer (LET) steps in photosystem II (PSII) and of cyclic electron transport (CET) in PSI. The model considers heterogeneity in PSII reaction centers (RCs) associated with the S-states of the OEC and incorporates in the dark-adapted state the presence of a 15-35% fraction of Q(B)-nonreducing RCs that probably is identical with the S₀ fraction. The fluorescence induction algorithm (FIA) in the 10 μs-1 s excitation time range considers a photochemical O-J-D phase, a photo-electrochemical J-I phase and an I-P phase reflecting the response of the variable fluorescence to the electric trans-thylakoid potential generated by the proton pump fuelled by CET in PSI. The photochemical phase incorporates the kinetics associated with the double reduction of the acceptor pair of pheophytin (Phe) and plastoquinone Q(A) [PheQ(A)] in Q(B)-nonreducing RCs and the associated doubling of the variable fluorescence, in agreement with the three-state trapping model (TSTM) of PSII. The decline in fluorescence emission during the so-called SMT in the 1-100 s excitation time range, known as the Kautsky curve, is shown to be associated with a substantial decrease of CET-powered proton efflux from the stroma into the chloroplast lumen through the ATP synthase of the photosynthetic machinery.

  1. D Recording for 2d Delivering - the Employment of 3d Models for Studies and Analyses -

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In recent years, thanks to advances in surveying sensors and techniques, many heritage sites have been accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often, the produced models are not viewable on a normal computer, and the only solution for visualizing them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data: without adequate knowledge, this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different durations and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  2. Hierarchical Bass model: a product diffusion model considering a diversity of sensitivity to fashion

    Science.gov (United States)

    Tashiro, Tohru

    2016-11-01

    We propose a new product diffusion model that incorporates the number of adopters or advertisements a non-adopter encounters before he/she adopts the product, where (non-)adopters mean people (not) possessing it. Through this effect, which is not considered in the Bass model, we can depict a diversity of sensitivity to fashion. As an application, we utilize the model to fit the iPod and iPhone unit sales data, obtaining better agreement than the Bass model for the iPod data. We also present a new method to estimate the number of advertisements in a society from the fitting parameters of the Bass model and this new model.
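
    For reference, a minimal sketch of fitting the standard Bass model that the hierarchical variant extends; the data are synthetic and the fitting setup is an assumption, not the authors' procedure.

```python
# Sketch: fitting the classic Bass diffusion model to unit-sales data with
# scipy. The hierarchical variant in the paper adds a sensitivity-to-fashion
# stage; only the standard Bass baseline it extends is shown here.
import numpy as np
from scipy.optimize import curve_fit

def bass_sales(t, m, p, q):
    # Noncumulative Bass sales m * f(t), with innovation p and imitation q
    e = np.exp(-(p + q) * t)
    return m * ((p + q)**2 / p) * e / (1 + (q / p) * e)**2

t = np.arange(1, 21, dtype=float)
sales = bass_sales(t, 1000.0, 0.03, 0.4)   # synthetic "observed" data
sales += np.random.default_rng(1).normal(0, 5, t.size)

(m, p, q), _ = curve_fit(bass_sales, t, sales, p0=(800.0, 0.01, 0.3),
                         bounds=(1e-6, [1e5, 1.0, 2.0]))
print(f"m={m:.0f}, p={p:.4f}, q={q:.4f}")
```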

  3. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the NormFinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
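
    A minimal sketch of the geNorm-style stability ranking mentioned above; the expression matrix is randomly generated, so the resulting ranking is illustrative only.

```python
# Sketch of the geNorm stability measure used to rank candidate reference
# genes: for each gene, M is the mean standard deviation of its log2
# expression ratios against every other candidate across conditions.
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical expression matrix: rows = 21 culture conditions, cols = genes
genes = ["EF1a", "tubulin_a", "actin", "ubiquitin"]
expr = rng.lognormal(mean=5.0, sigma=0.3, size=(21, len(genes)))

log_expr = np.log2(expr)
M = []
for j in range(len(genes)):
    others = [k for k in range(len(genes)) if k != j]
    # SD of the log-ratio gene_j / gene_k over conditions, averaged over k
    M.append(np.mean([np.std(log_expr[:, j] - log_expr[:, k]) for k in others]))

for g, m in sorted(zip(genes, M), key=lambda x: x[1]):
    print(f"{g}: M = {m:.3f}  (lower = more stable)")
```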

  4. Plot-scale testing and sensitivity analysis of Be7 based soil erosion conversion models

    Science.gov (United States)

    Taylor, Alex; Abdelli, Wahid; Barri, Bashar Al; Iurian, Andra; Gaspar, Leticia; Mabit, Lionel; Millward, Geoff; Ryken, Nick; Blake, Will

    2016-04-01

    Over the past 2 decades, a growing number of studies have recognised the potential for short-lived cosmogenic Be-7 (half-life 53 days) to be used as a tracer to evaluate soil erosion from short-term inter-rill erosion to hillslope sediment budgets. While conversion modelling approaches are now established for event-scale and extended-time-series applications, there is a lack of validation and sensitivity analysis to underpin confidence in their use across a full range of agro-climatic zones. This contribution aims to close this gap in the context of the maritime temperate climate of southwest UK. Two plots of 4 x 35 m were ploughed and tilled at the beginning of winter 2013/2014 in southwest UK to create (1) a bare, sloped soil surface and (2) a bare, flat reference site. The bounded lower edge of the plot fed into a collection bin for overland flow and associated sediment. The tilled surface had a low bulk density and high permeability at the start of the experiment (ksat > 100 mm/hr). Hence, despite high rainfall in December (200 mm), notable overland flow was observed only after intense rain storms during late 2013 and early January 2014 when the soil profile was saturated, i.e. driven by saturation overland flow (SOF). At the time of SOF initiation, ca. 70% of the final Be-7 inventory had been delivered to the site. Subsequent to a series of SOF events across a 1-month period, the plot soil surface was intensively sampled to quantify Be-7 inventory patterns and develop a tracer budget. Captured eroded sediment was dried, weighed and analysed for Be-7. All samples were analysed for particle size by laser granulometry. Be-7 inventory data were converted to soil erosion estimates using (1) the standard profile distribution model, (2) the extended time series distribution model and (3) a new 'antecedent rainfall' extended time series model to account for the lack of soil erosion prior to soil saturation. Results were scaled up to deliver a plot-scale sediment budget to include
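
    A minimal sketch of the standard profile distribution conversion (model 1 above), assuming an exponential depth distribution of the Be-7 reference inventory; all numbers are placeholders, not the plot data.

```python
# Sketch of the standard profile-distribution conversion for Be-7: assuming
# the reference inventory declines exponentially with mass depth (relaxation
# mass depth h0), the eroded mass depth at a sampling point follows from the
# ratio of the measured to the reference inventory.
import numpy as np

h0 = 4.0       # relaxation mass depth, kg m^-2 (site-specific, assumed)
A_ref = 450.0  # reference inventory, Bq m^-2 (from the stable flat site)

def eroded_mass_depth(A):
    # Valid for eroding points (A < A_ref); deposition needs a separate model
    return h0 * np.log(A_ref / A)

measured = np.array([430.0, 380.0, 300.0, 220.0])  # hypothetical inventories
for A, h in zip(measured, eroded_mass_depth(measured)):
    print(f"inventory {A:6.1f} Bq/m2 -> erosion {h:5.2f} kg/m2")
```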

  5. Immunomodulatory effects of Lactobacillus casei administration in a mouse model of gliadin-sensitive enteropathy.

    Science.gov (United States)

    D'Arienzo, R; Stefanile, R; Maurano, F; Mazzarella, G; Ricca, E; Troncone, R; Auricchio, S; Rossi, M

    2011-10-01

    Coeliac disease (CD) is a very common food-sensitive enteropathy, which is triggered by gluten ingestion and is mediated by CD4(+) T cells. In addition, alterations in the intestinal microbiota that is normally involved in the homeostasis of GALT (gut-associated lymphoid tissue) seem to play a role in CD. In accordance with these findings, we previously reported that Lactobacillus casei can induce a strong enhancement of the T cell-mediated response to gliadin without inducing enteropathy. In this study, we analysed the effects of L. casei administration in a recently developed mouse model of gliadin-induced villous damage, which involves the inhibition of cyclo-oxygenase (COX) activities in gliadin-sensitized HLA-DQ8 transgenic mice. To address this issue, we assessed weight loss, the intestinal cytokine pattern, the density of CD25(+) cells and the morphometry of the gut mucosa. We confirmed that COX inhibition in sensitized mice caused villus blunting, dysregulated expression of tumour necrosis factor (TNF)-α and reduced gliadin-specific IL-2 production. Notably, the administration of the probiotic strain induced a complete recovery of villus blunting. This finding was associated with a delay in weight decrease and a recovery of basal TNF-α levels, whereas the numbers of CD25(+) cells and the levels of IL-2 remained unchanged. In conclusion, our data suggest that the administration of L. casei can be effective in rescuing the normal mucosal architecture and GALT homeostasis in a mouse model of gliadin-induced enteropathy. © 2011 The Authors. Scandinavian Journal of Immunology © 2011 Blackwell Publishing Ltd.

  6. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remedial action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  7. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  8. Assessing the hydrodynamic boundary conditions for risk analyses in coastal areas: a stochastic storm surge model

    Directory of Open Access Journals (Sweden)

    T. Wahl

    2011-11-01

    Full Text Available This paper describes a methodology to stochastically simulate a large number of storm surge scenarios (here: 10 million). The applied model is computationally very cheap and will contribute to improving the overall results of integrated risk analyses in coastal areas. Initially, the observed storm surge events from the tide gauges of Cuxhaven (located in the Elbe estuary) and Hörnum (located in the southeast of Sylt Island) are parameterised by taking into account 25 parameters (19 sea level parameters and 6 time parameters). Throughout the paper, total water levels are considered. The astronomical tides are semidiurnal in the investigation area, with a tidal range >2 m. The second step of the stochastic simulation consists in fitting parametric distribution functions to the data sets resulting from the parameterisation. The distribution functions are then used to run Monte Carlo simulations. Based on the simulation results, a large number of storm surge scenarios are reconstructed. Parameter interdependencies are considered and different filter functions are applied to avoid inconsistencies. Storm surge scenarios which are of interest for risk analyses can easily be extracted from the results.
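
    A minimal sketch of the fit-then-simulate core of such a stochastic model, showing two of the 25 parameters with assumed distribution families; parameter dependence and the filter functions are only hinted at.

```python
# Sketch: fit parametric distributions to observed surge parameters, then
# Monte-Carlo-sample synthetic scenarios. Distribution families and all
# numbers are illustrative assumptions; the paper simulates 10 million
# scenarios, a smaller n is used here to keep memory modest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical observed event parameters at a tide gauge
peak_levels = rng.gumbel(loc=2.0, scale=0.4, size=200)  # m above datum
durations = rng.gamma(shape=3.0, scale=8.0, size=200)   # hours

# Step 1: fit parametric distribution functions
gum = stats.gumbel_r.fit(peak_levels)
gam = stats.gamma.fit(durations, floc=0)

# Step 2: Monte Carlo simulation of synthetic scenarios
n = 1_000_000
sim_peaks = stats.gumbel_r.rvs(*gum, size=n, random_state=rng)
sim_durs = stats.gamma.rvs(*gam, size=n, random_state=rng)

# Step 3: simple consistency filter (dependence modelling omitted here)
ok = (sim_peaks > 0) & (sim_durs > 0)
print(f"kept {ok.sum()} of {n} scenarios")
```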

  9. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, a disturbance on the local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these indirect impacts, one has to gain insight into the directly hit economic structure before being able to calculate the side effects. Especially regarding the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it might provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available at the national statistical offices. However, when it comes to developing countries (e.g. South-East Asia) the data quality and availability is usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation brings up a literature review and a summary of potential models that seem to be useful for this specific task

  10. Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses

    Science.gov (United States)

    2011-12-30

    Report: Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses. Authors: M. Grujicic, B. Pandurangan, C.-F. Yen, B. A. Cheeseman (Clemson University). Subject terms: AA5083, friction stir welding, Johnson-Cook material model. Abstract: The Johnson-Cook strength material model is frequently used in finite-element
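
    Since only the title and keywords survive from this record, the following sketch shows the unmodified Johnson-Cook flow-stress law that the report modifies; the constants are commonly cited AA5083-type values used here as placeholders, not the report's calibration.

```python
# Sketch of the standard Johnson-Cook flow-stress law. Constants below are
# placeholder AA5083-type values, not the calibrated set from the report.
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=167e6, B=596e6, n=0.551, C=0.001, m=1.0,
                        eps0=1.0, T_room=293.0, T_melt=893.0):
    """Flow stress [Pa] vs plastic strain, strain rate [1/s], temperature [K]."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps_p**n)                                  # strain hardening
            * (1.0 + C * np.log(max(eps_rate / eps0, 1e-12)))   # rate sensitivity
            * (1.0 - np.clip(T_star, 0.0, 1.0)**m))             # thermal softening

print(f"{johnson_cook_stress(0.1, 1.0, 400.0) / 1e6:.1f} MPa")
```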

  11. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; De Beer, Thomas

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], which was extended by including the gas velocity as an extra input compared to our earlier work. beta(2) was found to be the most important factor for the single particle model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...
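
    A minimal sketch of a variance-based GSA (first-order Sobol indices via a Saltelli-style estimator) for a hypothetical stand-in drying-rate model; the factor names echo the paper's findings but the model itself is invented.

```python
# Sketch: first-order Sobol indices from two independent sample matrices
# (Saltelli scheme) for a toy drying-rate model with three factors.
import numpy as np

rng = np.random.default_rng(3)

def drying_rate(x):
    # Hypothetical model: a beta2-like factor dominates, mimicking the
    # paper's finding for the single-granule model
    beta2, radius, gas_T = x.T
    return np.exp(2.0 * beta2) + 0.3 * radius * gas_T

n, d = 10000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
yA, yB = drying_rate(A), drying_rate(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # replace column i of A by B's
    yABi = drying_rate(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y  # Saltelli (2010) estimator
    print(f"factor {i}: S1 = {S1:.3f}")
```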

  12. Sensitivity Analysis of Hydrological Parameters in Modeling Flow and Transport in the Unsaturated Zone of Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    K. Zhang; Y.S. Wu; J.E. Houseworth

    2006-03-21

    The unsaturated fractured volcanic deposits at Yucca Mountain have been intensively investigated as a possible repository site for storing high-level radioactive waste. Field studies at the site have revealed large variabilities in hydrological parameters over the spatial domain of the mountain. This paper reports on a systematic analysis of hydrological parameters using the site-scale 3-D unsaturated zone (UZ) flow model. The objective of the sensitivity analyses is to evaluate the effects of uncertainties in hydrologic parameters on modeled UZ flow and contaminant transport results. Sensitivity analyses are carried out relative to fracture and matrix permeability and capillary strength (van Genuchten alpha), through variation of these parameter values by one standard deviation from the base-case values. The parameter variation results in eight parameter sets. Modeling results for the eight UZ flow sensitivity cases have been compared with field-observed data and simulation results from the base-case model. The effects of parameter uncertainties on the flow fields are discussed and evaluated through comparison of results for flow and transport. In general, this study shows that uncertainties in matrix parameters cause larger uncertainty in simulated moisture flux than corresponding uncertainties in fracture properties for unsaturated flow through heterogeneous fractured rock.
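
    A minimal sketch of generating the eight one-at-a-time sensitivity cases by shifting each parameter one standard deviation from its base value; the parameter names and numbers are illustrative, not the calibrated Yucca Mountain values.

```python
# Sketch: build one-at-a-time sensitivity cases by shifting each
# log-transformed hydrologic parameter +/- 1 standard deviation from its
# base value (values below are illustrative placeholders).
base = {                        # base-case log10 values
    "fracture_perm": -11.0,     # log10 k [m^2]
    "matrix_perm": -17.0,
    "fracture_vG_alpha": -3.0,  # log10 alpha [1/Pa]
    "matrix_vG_alpha": -5.0,
}
sigma = {"fracture_perm": 0.8, "matrix_perm": 1.1,
         "fracture_vG_alpha": 0.5, "matrix_vG_alpha": 0.6}

cases = []
for name in base:
    for sign in (+1, -1):
        params = dict(base)
        params[name] = base[name] + sign * sigma[name]
        cases.append((f"{name}{'+' if sign > 0 else '-'}1sd", params))

print(f"{len(cases)} sensitivity cases")  # 8 cases, as in the study
```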

  13. Wheat seedlings as a model to understand desiccation tolerance and sensitivity.

    Science.gov (United States)

    Farrant, Jill M.; Bailly, Christophe; Leymarie, Juliette; Hamman, Brigitte; Côme, Daniel; Corbineau, Françoise

    2004-04-01

    The coleoptiles of wheat (Triticum aestivum L.) seedlings of cultivar Trémie are desiccation tolerant when 3 days old, although the roots are not. Cutting some of the coleoptiles open prior to dehydration rapidly increased the drying rate. This rendered the coleoptiles sensitive to desiccation, providing a useful model with which to study desiccation tolerance. Both sensitive and tolerant seedlings were dehydrated to 0.3 g H(2)O g(-1) dry mass (g.g) and thereafter rehydrated. Sensitive tissues accrued H(2)O(2) and the lipid peroxidation product MDA, and substantial subcellular damage was evident in dry tissues. H(2)O(2) and MDA accumulated only slightly in dry tolerant coleoptiles and no subcellular damage was evident. The activity of the antioxidant enzymes glutathione reductase (EC 1.6.4.2), superoxide dismutase (EC 1.15.1.1) and catalase (EC 1.11.1.6) increased on drying in both tolerant and sensitive tissues, but was sustained on rehydration only in the tolerant tissues. It is proposed that free radical damage sustained during rapid drying exceeded the ameliorating capacity of antioxidant systems, allowing the accrual of lethal subcellular damage. Slow drying enabled sufficient detoxification by antioxidants to minimize damage and allow tolerance of drying. Three LEA- (p11 and Asp 52) and dehydrin- (XV8) like proteins were detected by western blots in tolerant coleoptiles dried to 3.0 g.g and below. Only one (Asp 52) was induced at low water content in rapidly dried sensitive coleoptiles. None were present in root tissues. XV8 RNA (northern analyses) was induced on drying only in tolerant coleoptiles and correlated with protein expression. These putative stress-protectant proteins (and XV8 transcripts) appear to be down-regulated during germination, but wheat seedlings temporarily retain the ability to reproduce them if drying is slow. Sucrose accumulation during dehydration was similar for both sensitive and tolerant tissues, suggesting that this sugar has little

  14. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, of the relationship with the students and of the assistance provided for learning. The present paper aims at creating a combined model for evaluation, based on data mining statistical methods: starting from the findings revealed by the evaluations teachers performed of students, using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades, subjects which were subsequently subjected to an evaluation by students. The results of these analyses allowed the formulation of certain measures for enhancing the quality of the evaluation process.

  15. Kinetic modeling and sensitivity analysis of acetone-butanol-ethanol production.

    Science.gov (United States)

    Shinto, Hideaki; Tashiro, Yukihiro; Yamashita, Mayu; Kobayashi, Genta; Sekiguchi, Tatsuya; Hanai, Taizo; Kuriya, Yuki; Okamoto, Masahiro; Sonomoto, Kenji

    2007-08-01

    A kinetic simulation model of metabolic pathways that describes the dynamic behavior of metabolites in acetone-butanol-ethanol (ABE) production by Clostridium saccharoperbutylacetonicum N1-4 was proposed using the novel simulator WinBEST-KIT. This model was validated by comparison with experimental time-course data of metabolites in batch cultures over a wide range of initial glucose concentrations (36.1-295 mM). By introducing substrate inhibition, product inhibition by butanol and activation by butyrate, and by considering the cessation of metabolic reactions in the case of insufficient energy after glucose exhaustion, the revised model showed a squared correlation coefficient (r(2)) of 0.901 between the experimental time-courses of metabolites and the calculated ones. Thus, the final revised model is assumed to be one of the best candidates for kinetic simulation describing the dynamic behavior of metabolites in ABE production. Sensitivity analysis revealed that a 5% increase in the reaction of the reverse pathway of butyrate production (R(17)) and a 5% decrease in the reaction of CoA transferase for butyrate (R(15)) contribute strongly to high production of butanol. These system analyses should be effective in elucidating which pathway is the metabolic bottleneck for high production of butanol.

  16. A Probabilistic Model for Sequence Alignment with Context-Sensitive Indels

    Science.gov (United States)

    Hickey, Glenn; Blanchette, Mathieu

    Probabilistic approaches for sequence alignment are usually based on pair Hidden Markov Models (HMMs) or Stochastic Context Free Grammars (SCFGs). Recent studies have shown a significant correlation between the content of short indels and their flanking regions, which by definition cannot be modelled by the above two approaches. In this work, we present a context-sensitive indel model based on a pair Tree-Adjoining Grammar (TAG), along with accompanying algorithms for efficient alignment and parameter estimation. The increased precision and statistical power of this model is shown on simulated and real genomic data. As the cost of sequencing plummets, the usefulness of comparative analysis is becoming limited by alignment accuracy rather than data availability. Our results will therefore have an impact on any type of downstream comparative genomics analyses that rely on alignments. Fine-grained studies of small functional regions or disease markers, for example, could be significantly improved by our method. The implementation is available at http://www.mcb.mcgill.ca/~blanchem/software.html

  17. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM into the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  18. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input for the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field showed increasing stability. Both core journals broadly paid attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
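
    A minimal sketch of selecting the topic number by perplexity with scikit-learn's LDA; the toy corpus stands in for the Web of Science records used in the study.

```python
# Sketch: choose the number of topics by perplexity, as in the study
# (10 topics gave the minimum). The toy corpus is a stand-in for the
# Journal of Informetrics / Scientometrics abstracts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "citation impact factor journal ranking",
    "h index productivity researcher evaluation",
    "co-authorship network collaboration country",
    "download usage altmetrics social media",
] * 25  # 100 toy documents

X = CountVectorizer().fit_transform(docs)

for k in (5, 10, 15):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(f"{k} topics: perplexity = {lda.perplexity(X):.1f}")
```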

  19. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected from 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsal lateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventral medial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development.

  20. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of the last decades - at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and along the waterways leading from it (the river-reservoir system after the Chernobyl accident; rivers and coastal marine waters after the Fukushima accident) has in both cases been one of the main sources of public concern about the accident consequences. The higher weight given to water contamination in public perception of the accidents' consequences, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specificity of public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from the new monitoring studies in Japan of the Abukuma River (the largest in the region - watershed area 5400 km{sup 2}), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, as well as studies of the specifics of 'water-sediment' {sup 137}Cs exchange in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of the post-accident measures to diminish the contamination of the water bodies.

  1. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development.

    Science.gov (United States)

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-12-30

    The ever-growing wealth of biological information available through multiple comprehensive database repositories can be leveraged for advanced analysis of data. We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based on existing biological knowledge. Biofilter now has the Library of Knowledge Integration (LOKI) for accessing and integrating existing comprehensive database information, including more flexibility in how ambiguity of gene identifiers is handled. We have also updated the way importance scores for interaction models are generated. In addition, Biofilter 2.0 now works with a range of types and formats of data, including single nucleotide polymorphism (SNP) identifiers, rare variant identifiers, base pair positions, gene symbols, genetic regions, and copy number variant (CNV) location information. Biofilter provides a convenient single interface for accessing multiple publicly available human genetic data sources that have been compiled in the supporting database of LOKI. Information within LOKI includes genomic locations of SNPs and genes, as well as known relationships among genes and proteins such as interaction pairs, pathways and ontological categories. Via Biofilter 2.0 researchers can: • Annotate genomic location or region based data, such as results from association studies or CNV analyses, with relevant biological knowledge for deeper interpretation; • Filter genomic location or region based data on biological criteria, such as filtering a series of SNPs to retain only SNPs present in specific genes within specific pathways of interest; • Generate predictive models for gene-gene, SNP-SNP, or CNV-CNV interactions based on biological information, with priority for models to be tested based on biological relevance, thus narrowing the search space and reducing multiple hypothesis testing. Biofilter is a software

  2. Bayesian randomized item response modeling for sensitive measurements

    NARCIS (Netherlands)

    Avetisyan, M.

    2012-01-01

    In behavioral, health, and social sciences, any endeavor involving measurement is directed at accurate representation of the latent concept with the manifest observation. However, when sensitive topics, such as substance abuse, tax evasion, or felony, are inquired, substantial distortion of reported

  3. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Sensitivity studies showed a clear importance of an accurate parameterisation of within- and between-stand trait variability on the fidelity of model predictions. For example, when functional tree diversity was not included in the model (i.e. with just a single plant functional type with mean basin-wide trait values) the predictive ability of the model was reduced. This was also the case when basin-wide (as opposed to site-specific) trait distributions were applied within each stand. We conclude that models of tropical forest carbon, energy and water cycling should strive to accurately represent observed variations in functionally important traits across the range of relevant scales.

  4. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  5. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has

  6. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    Science.gov (United States)

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is key to the evaluation of mathematical models in ecology and evolution, especially for complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region of the parameter space.
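
    A minimal sketch of the sensitivity question using finite differences on a toy host-infection ODE; the paper's adjoint-based approach obtains the same derivative information more cheaply when parameters are numerous. The model and quantity of interest here are invented stand-ins, not the Keeling-Gilligan system.

```python
# Finite-difference sensitivity of a scalar quantity of interest from a toy
# SIR-style ODE; beta and gamma are hypothetical transmission/recovery rates.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta, gamma):
    S, I = y
    return [-beta * S * I, beta * S * I - gamma * I]

def quantity_of_interest(params):
    beta, gamma = params
    sol = solve_ivp(rhs, (0.0, 50.0), [0.99, 0.01], args=(beta, gamma),
                    max_step=0.5)
    return sol.y[1].max()  # proxy for the peak force of infection

p0 = np.array([0.5, 0.1])
q0 = quantity_of_interest(p0)
grad = np.empty_like(p0)
for j in range(p0.size):
    h = 1e-5 * abs(p0[j])
    pj = p0.copy()
    pj[j] += h
    grad[j] = (quantity_of_interest(pj) - q0) / h

print("dQ/d(beta, gamma) =", grad)
```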

  7. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    Science.gov (United States)

    Grouillet, Benjamin; Ruelland, Denis; Vaittinada Ayar, Pradeebane; Vrac, Mathieu

    2016-03-01

    This paper analyzes the sensitivity of a hydrological model to different methods of statistically downscaling precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20-year period (1986-2005) to capture different climatic conditions in the basins. The daily GR4j conceptual model was used to simulate streamflow, which was eventually evaluated at a 10-day time step. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled data sets to reproduce various aspects of the hydrograph. Three statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the cumulative distribution function-transform approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two general circulation models (GCMs) (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement in runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes and low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range
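
    A minimal sketch of empirical quantile mapping, the basic mechanism behind the CDF-transform (CDFt) approach; real CDFt additionally accounts for changes in the CDF between calibration and projection periods, which is omitted here, and the data are synthetic.

```python
# Sketch: empirical quantile mapping — send each large-scale value through
# the model CDF and the inverse of the observed CDF.
import numpy as np

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 3.0, 5000)  # high-resolution observed precipitation
gcm = rng.gamma(1.5, 4.0, 5000)  # biased large-scale model precipitation

def quantile_map(x, model_ref, obs_ref):
    # F_model(x) -> the same quantile in the observed distribution
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

downscaled = np.array([quantile_map(v, gcm, obs) for v in gcm[:1000]])
print("raw mean %.2f -> downscaled mean %.2f (obs mean %.2f)"
      % (gcm[:1000].mean(), downscaled.mean(), obs.mean()))
```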

  8. Sensitivity of a Barotropic Ocean Model to Perturbations of the Bottom Topography

    CERN Document Server

    Kazantsev, Eugene

    2008-01-01

    In this paper, we look for an operator that describes the relationship between small errors in the representation of the bottom topography in a barotropic ocean model and the model's solution. The study shows that the model's solution is very sensitive to topography perturbations in regions where the flow is turbulent. On the other hand, the flow exhibits low sensitivity in laminar regions. The quantitative measure of sensitivity is influenced essentially by the error growth time. At short time scales, the sensitivity exhibits a polynomial dependence on the error growth time, while in the long-time limit the dependence becomes exponential.

  9. Controls on Yardang Morphology: Insights from Field Measurements, Lidar Topographic Analyses, and Numerical Modeling

    Science.gov (United States)

    Pelletier, J. D.; Kapp, P. A.

    2014-12-01

    Yardangs are streamlined bedforms sculpted by the wind and wind-blown sand. They can form as relatively resistant exposed rocks erode more slowly than surrounding exposed rocks, thus causing the more resistant rocks to stand higher in the landscape and deflect the wind and wind-blown sand into adjacent troughs in a positive feedback. How this feedback gives rise to streamlined forms that locally have a consistent size is not well understood theoretically. In this study we combine field measurements in the yardangs of Ocotillo Wells SVRA with analyses of airborne and terrestrial lidar datasets and numerical modeling to quantify and understand the controls on yardang morphology. The classic model for yardang morphology is that they evolve to an ideal 4:1 length-to-width aspect ratio that minimizes aerodynamic drag. We show using computational fluid dynamics (CFD) modeling that this model is incorrect: the 4:1 aspect ratio is the value corresponding to minimum drag for free bodies, i.e. obstacles around which air flows on all sides. Yardangs, in contrast, are embedded in Earth's surface. For such rough streamlined half-bodies, the aspect ratio corresponding to minimum drag is larger than 20:1. As an alternative to the minimum-drag model, we propose that the aspect ratio of yardangs not significantly influenced by structural controls is controlled by the angle of dispersion of the aerodynamic jet created as deflected wind and wind-blown sand exits the troughs between incipient yardang noses. Aerodynamic jets have a universal dispersion angle of 11.8 degrees, thus predicting a yardang aspect ratio of ~5:1. We developed a landscape evolution model that combines the physics of boundary layer flow with aeolian saltation and bedrock erosion to form yardangs with a range of sizes and aspect ratios similar to those observed in nature. Yardangs with aspect ratios both larger and smaller than 5:1 occur in the model since the strike and dip of the resistant rock unit also exerts

  10. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimations is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
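
    A minimal numerical companion to this analysis: perturbing expected-return estimates and observing the spread of fully invested tangency-style weights; the three-asset inputs are invented.

```python
# Sketch: probe the sensitivity of efficient-portfolio weights to
# estimation errors in expected returns (illustrative three-asset case).
import numpy as np

mu = np.array([0.08, 0.10, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

def weights(mu, Sigma):
    # Unconstrained tangency-style weights, normalised to sum to one
    w = np.linalg.solve(Sigma, mu)
    return w / w.sum()

w0 = weights(mu, Sigma)
rng = np.random.default_rng(11)
perturbed = np.array([weights(mu + rng.normal(0, 0.01, 3), Sigma)
                      for _ in range(2000)])

print("base weights:", np.round(w0, 3))
print("weight std under 1% return error:", np.round(perturbed.std(axis=0), 3))
```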

  11. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA

    2017-04-01

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty of process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging quantifies both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance; the index includes the variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
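    The law-of-total-variance idea behind such an index can be sketched in a few lines (a toy stand-in: the two process models, their prior probabilities, and the output function below are all hypothetical, not the paper's groundwater models):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000

# Two hypothetical recharge process models (each with its own random parameter)
# and two hypothetical geology models -- placeholders for the paper's models.
def recharge(model, p):
    return 0.2 * p if model == 0 else 0.1 * np.sqrt(p)

def conductivity(model, k):
    return k if model == 0 else 0.5 * k + 1.0

def output(R, K):
    # Toy scalar output standing in for the reactive-transport prediction.
    return R / K

pm_R, pm_K = [0.5, 0.5], [0.5, 0.5]   # prior model probabilities

def sample_output(fix_recharge=None):
    """Draw one output; optionally fix the recharge process (model + parameter)."""
    mR, pR = (rng.choice(2, p=pm_R), rng.uniform(1, 3)) if fix_recharge is None else fix_recharge
    mK, kK = rng.choice(2, p=pm_K), rng.uniform(1, 5)
    return output(recharge(mR, pR), conductivity(mK, kK))

# Total output variance over process models AND parameters.
total = np.array([sample_output() for _ in range(N)])

# Variance of the conditional mean over the recharge process: the outer loop
# fixes the model choice *and* its parameter, making this a process (not just
# parameter) sensitivity, in the spirit of the index described above.
cond_means = []
for _ in range(500):
    fix = (rng.choice(2, p=pm_R), rng.uniform(1, 3))
    cond_means.append(np.mean([sample_output(fix) for _ in range(200)]))

PS_recharge = np.var(cond_means) / np.var(total)
print(f"process sensitivity index (recharge) ~ {PS_recharge:.2f}")
```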

  12. Multi-Scale Thermohydrologic Model Sensitivity-Study Calculations in Support of the SSPA

    Energy Technology Data Exchange (ETDEWEB)

    Glascoe, L G; Buscheck, T A; Loosmore, G A; Sun, Y

    2001-12-20

    The purpose of this calculation report is to document the thermohydrologic (TH) model calculations performed for the Supplemental Science and Performance Analysis (SSPA), Volume 1, Section 5 and Volume 2 (BSC 2001d [DIRS 155950], BSC 2001e [DIRS 154659]). The calculations are documented here in accordance with AP-3.12Q REV0 ICN4 [DIRS 154418]. The Technical Working Plan (TWP) for this document is TWP-NGRM-MD-000015 Real. These TH calculations were primarily conducted using three model types: (1) the Multiscale Thermohydrologic (MSTH) model, (2) the line-averaged-heat-source, drift-scale thermohydrologic (LDTH) model, and (3) the discrete-heat-source, drift-scale thermal (DDT) model. These TH-model calculations were conducted to improve the implementation of the scientific conceptual model, quantify previously unquantified uncertainties, and evaluate how a lower-temperature operating mode (LTOM) would affect the in-drift TH environment. Simulations for the higher-temperature operating mode (HTOM), which is similar to the base case analyzed for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) (CRWMS M&O 2000j [DIRS 153246]), were also conducted for comparison with the LTOM. This Calculation Report describes (1) the improvements to the MSTH model that were implemented to reduce model uncertainty and to facilitate model validation, and (2) the sensitivity analyses conducted to better understand the influence of parameter and process uncertainty. The METHOD Section (Section 2) describes the improvements to the MSTH-model methodology and submodels. The ASSUMPTIONS Section (Section 3) lists the assumptions made (e.g., boundaries, material properties) for this methodology. The USE OF SOFTWARE Section (Section 4) lists the software, routines and macros used for the MSTH model and submodels supporting the SSPA. The CALCULATION Section (Section 5) lists the data used in the model and the manner in which the MSTH model is prepared and executed. And

  13. A Hidden Markov model web application for analysing bacterial genomotyping DNA microarray experiments.

    Science.gov (United States)

    Newton, Richard; Hinds, Jason; Wernisch, Lorenz

    2006-01-01

    Whole genome DNA microarray genomotyping experiments compare the gene content of different species or strains of bacteria. A statistical approach to analysing the results of these experiments was developed, based on a Hidden Markov model (HMM), which takes adjacency of genes along the genome into account when calling genes present or absent. The model was implemented in the statistical language R and applied to three datasets. The method is numerically stable with good convergence properties. Error rates are reduced compared with approaches that ignore spatial information. Moreover, the HMM circumvents a problem encountered in a conventional analysis: determining the cut-off value to use to classify a gene as absent. An Apache Struts web interface for the R script was created for the benefit of users unfamiliar with R. The application may be found at http://hmmgd.cryst.bbk.ac.uk/hmmgd. The source code illustrating how to run R scripts from an Apache Struts-based web application is available from the corresponding author on request. The application is also available for local installation if required.
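    The spatial smoothing the HMM adds over adjacent genes can be illustrated with a two-state Viterbi decoder (a toy sketch; the paper's model, emission distributions, and parameters differ, and the values below are made up):

```python
import numpy as np

# States: 0 = gene absent, 1 = gene present. The transition matrix favours
# runs of adjacent genes sharing a state -- the spatial information the HMM adds.
log_T = np.log(np.array([[0.9, 0.1],
                         [0.1, 0.9]]))
log_pi = np.log(np.array([0.5, 0.5]))

def log_emission(ratio):
    """Toy Gaussian emission of log2 hybridisation ratios (hypothetical params)."""
    means, sds = np.array([-2.0, 0.0]), np.array([1.0, 0.5])
    return -0.5 * ((ratio - means) / sds) ** 2 - np.log(sds * np.sqrt(2 * np.pi))

def viterbi(ratios):
    n = len(ratios)
    dp = np.zeros((n, 2))
    back = np.zeros((n, 2), dtype=int)
    dp[0] = log_pi + log_emission(ratios[0])
    for t in range(1, n):
        scores = dp[t - 1][:, None] + log_T          # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emission(ratios[t])
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

ratios = np.array([-1.8, -2.2, -0.1, 0.2, -2.5, 0.1, 0.0])
print(viterbi(ratios))   # 0/1 present/absent calls per gene along the genome
```

Note how the isolated low ratio (-2.5) is smoothed toward "present" by its neighbours, which is exactly the adjacency effect that removes the need for a hard cut-off value.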

  14. An efficient finite-difference strategy for sensitivity analysis of stochastic models of biochemical systems.

    Science.gov (United States)

    Morshed, Monjur; Ingalls, Brian; Ilie, Silvana

    2017-01-01

    Sensitivity analysis characterizes the dependence of a model's behaviour on system parameters. It is a critical tool in the formulation, characterization, and verification of models of biochemical reaction networks, for which confident estimates of parameter values are often lacking. In this paper, we propose a novel method for sensitivity analysis of discrete stochastic models of biochemical reaction systems whose dynamics occur over a range of timescales. This method combines finite-difference approximations and adaptive tau-leaping strategies to efficiently estimate parametric sensitivities for stiff stochastic biochemical kinetics models, with negligible loss in accuracy compared with previously published approaches. We analyze several models of interest to illustrate the advantages of our method.
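    The core idea, coupling the perturbed and nominal simulations so the finite-difference estimator has low variance, can be sketched with a plain Gillespie birth-death model and common random numbers (our simplification; the paper pairs finite differences with adaptive tau-leaping for stiff systems):

```python
import numpy as np

def ssa_birth_death(k_birth, k_death, x0=0, t_end=10.0, seed=1):
    """Plain Gillespie SSA for X -> X+1 (rate k_birth) and X -> X-1 (rate k_death * X).
    Reusing the same seed across a perturbed/nominal pair gives common random
    numbers, the simplest variance-reduction analogue of the paper's coupling."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a = np.array([k_birth, k_death * x])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() < a[0] / a0 else -1

# Central finite-difference sensitivity of E[X(T)] w.r.t. k_birth, same seed per pair.
h, n = 0.5, 2000
diffs = [ssa_birth_death(10 + h, 1.0, seed=s) - ssa_birth_death(10 - h, 1.0, seed=s)
         for s in range(n)]
print("dE[X]/dk_birth ~", np.mean(diffs) / (2 * h))   # steady-state theory: 1/k_death = 1.0
```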

  15. Loss Performance Modeling for Hierarchical Heterogeneous Wireless Networks With Speed-Sensitive Call Admission Control

    DEFF Research Database (Denmark)

    Huang, Qian; Huang, Yue-Cai; Ko, King-Tim;

    2011-01-01

    dimensioning and planning. This paper investigates the computationally efficient loss performance modeling for multiservice in hierarchical heterogeneous wireless networks. A speed-sensitive call admission control (CAC) scheme is considered in our model to assign overflowed calls to appropriate tiers...

  16. Benchmarking sensitivity of biophysical processes to leaf area changes in land surface models

    Science.gov (United States)

    Forzieri, Giovanni; Duveiller, Gregory; Georgievski, Goran; Li, Wei; Robestson, Eddy; Kautz, Markus; Lawrence, Peter; Ciais, Philippe; Pongratz, Julia; Sitch, Stephen; Wiltshire, Andy; Arneth, Almut; Cescatti, Alessandro

    2017-04-01

    Land surface models (LSM) are widely applied as supporting tools for policy-relevant assessment of climate change and its impact on terrestrial ecosystems, yet knowledge of their skill in representing the sensitivity of biophysical processes to changes in vegetation density is still limited. This is particularly relevant in light of the substantial impacts on regional climate associated with the changes in leaf area index (LAI) following the observed global greening. Benchmarking LSMs on the sensitivity of the simulated processes to vegetation density is essential to reduce their uncertainty and improve the representation of these effects. Here we present a novel benchmark system to assess model capacity in reproducing land surface-atmosphere energy exchanges modulated by vegetation density. Through a collaborative effort of different modeling groups, a consistent set of land surface energy fluxes and LAI dynamics has been generated from multiple LSMs, including JSBACH, JULES, ORCHIDEE, CLM4.5 and LPJ-GUESS. Relationships of interannual variations of modeled surface fluxes to LAI changes have been analyzed at global scale across different climatological gradients and compared with satellite-based products. A set of scoring metrics has been used to assess the overall model performances, and a detailed analysis in the climate space has been provided to diagnose possible model errors associated with background conditions. Results have enabled us to identify model-specific strengths and deficiencies. An overall best-performing model does not emerge from the analyses. However, the comparison with other models that work better under certain metrics and conditions indicates that improvements are expected to be potentially achievable. A general amplification of the biophysical processes mediated by vegetation is found across the different land surface schemes. Grasslands are characterized by an underestimated year-to-year variability of LAI in cold climates

  17. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  18. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation based on a reduced hysteretic model for a reinforced concrete frame under earthquake excitation.

  19. The Constructive Marginal of "Moby-Dick": Ishmael and the Developmental Model of Intercultural Sensitivity

    Science.gov (United States)

    Morgan, Jeff

    2011-01-01

    Cultural sensitivity theory is the study of how individuals relate to cultural difference. Using literature to help students prepare for study abroad, instructors could analyze character and trace behavior through a model of cultural sensitivity. Milton J. Bennett has developed such an instrument, The Developmental Model of Intercultural…

  1. Modelling the climate sensitivity of Storbreen and Engabreen, Norway

    NARCIS (Netherlands)

    Andreassen, L.M.; Elvehøy, H.; Jóhannesson, T.; Oerlemans, J.|info:eu-repo/dai/nl/06833656X; Beldring, S.; van den Broeke, M.R.|info:eu-repo/dai/nl/073765643

    2006-01-01

    In this report we have modelled the mass balance of two Norwegian glaciers using two different approaches. At Storbreen, a continental glacier in southern Norway, a simplified energy balance model was used. At Engabreen, a maritime glacier in northern Norway, a degree day model was used. Both

  2. Global isoprene emissions estimated using MEGAN, ECMWF analyses and a detailed canopy environment model

    Directory of Open Access Journals (Sweden)

    J.-F. Müller

    2008-03-01

    The global emissions of isoprene are calculated at 0.5° resolution for each year between 1995 and 2006, based on the MEGAN (Model of Emissions of Gases and Aerosols from Nature, version 2) model (Guenther et al., 2006) and a detailed multi-layer canopy environment model for the calculation of leaf temperature and visible radiation fluxes. The calculation is driven by meteorological fields – air temperature, cloud cover, downward solar irradiance, windspeed, volumetric soil moisture in 4 soil layers – provided by analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). The estimated annual global isoprene emission ranges between 374 Tg (in 1996) and 449 Tg (in 1998 and 2005), for an average of ca. 410 Tg/year over the whole period, i.e. about 30% less than the standard MEGAN estimate (Guenther et al., 2006). This difference is due, to a large extent, to the impact of the soil moisture stress factor, which is found here to decrease the global emissions by more than 20%. In qualitative agreement with past studies, high annual emissions are found to be generally associated with El Niño events. The emission inventory is evaluated against flux measurement campaigns at Harvard forest (Massachusetts) and Tapajós in Amazonia, showing that the model can capture quite well the short-term variability of emissions, but that it fails to reproduce the observed seasonal variation at the tropical rainforest site, with largely overestimated wet season fluxes. The comparison of the HCHO vertical columns calculated by a chemistry and transport model (CTM) with HCHO distributions retrieved from space provides useful insights on tropical isoprene emissions. For example, the relatively low emissions calculated over Western Amazonia (compared to the corresponding estimates in the inventory of Guenther et al., 1995) are validated by the excellent agreement found between the CTM and HCHO data over this region. The parameterized impact of the soil moisture

  3. Sensitivity-Based Modeling of Evaluating Surface Runoff and Sediment Load using Digital and Analog Mechanisms

    Directory of Open Access Journals (Sweden)

    Olotu Yahaya

    2014-07-01

    Analyses of runoff-sediment measurement and evaluation using automated and conventional runoff-meters were carried out at the Meteorological and Hydrological Station of Auchi Polytechnic, Auchi, using two runoff plots (ABCDa and EFGHm) of area 2 m² each and depth 0.26 m, driven into the soil to a depth of 0.13 m. Runoff depths and intensities were measured from each of the positioned runoff plots. The automated runoff-meter has a measuring accuracy of ±0.001 l/±0.025 mm, and rainfall depth-intensity was measured using a tipping-bucket rain gauge over the 14-month experimentation period. Minimum and maximum rainfall depths of 1.2 and 190.3 mm correspond to measured runoff depths (MRo) of 0.0 mm for both measurement approaches, and 60.4 mm and 48.9 mm respectively. The automated runoff-meter provides precise, accurate and instantaneous results compared with conventional measurement of surface runoff. Runoff measuring accuracy for the automated runoff-meter in plot ABCDa produces R² = 0.99, while R² = 0.96 for manual evaluation in plot EFGHm. The WEPP and SWAT models were used to simulate the hydrological variables obtained from the applied measurement mechanisms. The outputs of the sensitivity simulation analysis indicate that data from automated measuring systems give a better modelling index, and such data could be used for running a robust runoff-sediment predictive modelling technique under different reservoir sedimentation and water management scenarios.
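    For reference, the R² statistic quoted above can be computed as follows (a generic coefficient-of-determination sketch with made-up depths; the study may define R² as a squared correlation instead):

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination between measured and simulated runoff depths."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical paired depths (mm); the study's own data are not reproduced here.
measured  = [0.0, 5.2, 12.4, 30.1, 60.4]
simulated = [0.1, 5.0, 12.9, 29.0, 59.5]
print(round(r_squared(measured, simulated), 3))
```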

  4. A sensitivity analysis using different spatial resolution terrain models and flood inundation models

    Science.gov (United States)

    Papaioannou, George; Aronica, Giuseppe T.; Loukas, Athanasios; Vasiliades, Lampros

    2014-05-01

    The spatial resolution and accuracy of the terrain representation used in hydraulic flood modeling propagate into the accuracy of simulated water depths and flood extents. Another significant factor affecting hydraulic flood modeling outputs is the choice of hydrodynamic model (1D, 2D, or coupled 1D/2D). Extreme flash floods in lowland suburban and urban areas can cause human mortality, ravaged infrastructure and other damage; such incidents make a detailed description of the terrain and the use of advanced hydraulic models essential for an accurate spatial distribution of the flooded areas. In this study, a sensitivity analysis was undertaken using Digital Elevation Models (DEMs) of different spatial resolution and several hydraulic modeling approaches (1D, 2D, 1D/2D), including their effect on the results of river flow modeling and floodplain mapping. Three digital terrain models (DTMs) were generated from different elevation sources: Terrestrial Laser Scanning (TLS) point cloud data, classic land surveying, and digitization of elevation contours from 1:5000-scale topographic maps. HEC-RAS and MIKE 11 are the one-dimensional hydraulic models used; MLFP-2D (Aronica et al., 1998) and MIKE 21 are the two-dimensional hydraulic models. The last case consists of the integration of MIKE 11 and MIKE 21, coupled through the MIKE FLOOD platform. Water depths and flood extents are validated against historical flood records: observed flood inundation areas, in terms of simulated maximum water depth and flood extent, were used to assess the validity of each application's results. The methodology has been applied to the suburban section of the Xerias River at Volos, Greece. Each dataset has been used to create a flood inundation map for different cross-section configurations using different hydraulic models. The comparison of resulting flood inundation maps indicates

  5. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S), and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding of speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating the feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  6. Finite element modelling of squirrel, guinea pig and rat skulls: using geometric morphometrics to assess sensitivity.

    Science.gov (United States)

    Cox, P G; Fagan, M J; Rayfield, E J; Jeffery, N

    2011-12-01

    Rodents are defined by a uniquely specialized dentition and a highly complex arrangement of jaw-closing muscles. Finite element analysis (FEA) is an ideal technique to investigate the biomechanical implications of these specializations, but it is essential to understand fully the degree of influence of the different input parameters of the FE model to have confidence in the model's predictions. This study evaluates the sensitivity of FE models of rodent crania to elastic properties of the materials, loading direction, and the location and orientation of the models' constraints. Three FE models were constructed of squirrel, guinea pig and rat skulls. Each was loaded to simulate biting on the incisors, and the first and the third molars, with the angle of the incisal bite varied over a range of 45°. The Young's moduli of the bone and teeth components were varied between limits defined by findings from our own and previously published tests of material properties. Geometric morphometrics (GMM) was used to analyse the resulting skull deformations. Bone stiffness was found to have the strongest influence on the results in all three rodents, followed by bite position, and then bite angle and muscle orientation. Tooth material properties were shown to have little effect on the deformation of the skull. The effect of bite position varied between species, with the mesiodistal position of the biting tooth being most important in squirrels and guinea pigs, whereas bilateral vs. unilateral biting had the greatest influence in rats. A GMM analysis of isolated incisor deformations showed that, for all rodents, bite angle is the most important parameter, followed by elastic properties of the tooth. The results here elucidate which input parameters are most important when defining the FE models, but also provide interesting glimpses of the biomechanical differences between the three skulls, which will be fully explored in future publications.

  7. An approach to measure parameter sensitivity in watershed hydrologic modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — Abstract Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...

  8. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    Science.gov (United States)

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  9. NEW ANTIMICROBIAL SENSITIVITY TESTS OF BIOFILM OF STREPTOCOCCUS MUTANS IN ARTIFICIAL MOUTH MODEL

    Institute of Scientific and Technical Information of China (English)

    李鸣宇; 汪俊; 刘正; 朱彩莲

    2004-01-01

    Objective To develop a new antimicrobial sensitivity test model for oral products in vitro. Methods A biofilm artificial mouth model for antimicrobial sensitivity tests was established by modifying the LKI chromatography chamber. Using sodium fluoride and tea polyphenol as antimicrobial agents and Streptococcus mutans as the target, sensitivity tests were studied. Results The biofilm model assay resulted in an MIC of 1.28 mg/ml for fluoride against S. mutans, which was 32 times the MIC obtained with the broth macro-dilution method. The differential resistance of the bacterial biofilm to antimicrobial agents relative to planktonic cells was also demonstrated. Conclusion The biofilm artificial mouth model may be useful in testing oral products.

  10. Modelling survival: exposure pattern, species sensitivity and uncertainty

    NARCIS (Netherlands)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; Brink, Van Den Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.

    2016-01-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability

  11. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    The primary task of any modern control design is to construct and identify a model ... In this case the problem can be solved if the influence of the parameters ... the main concept of the enzyme reactions in the UCT model is Sads ...

  12. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
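    A minimal version of the sampling-plus-correlation workflow described here (hand-rolled Latin hypercube and a toy flux surrogate; the parameter ranges and the flux formula are hypothetical, not the study's Rn-222 models):

```python
import numpy as np
from scipy.stats import spearmanr

def latin_hypercube(n, d, rng):
    """One stratified sample per row: permuted strata jittered within each cell."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.uniform(size=(n, d))) / n

rng = np.random.default_rng(42)
u = latin_hypercube(1000, 3, rng)

# Map uniforms to hypothetical input ranges (illustration only).
D   = 10 ** (-6 + 2 * u[:, 0])     # effective diffusion coefficient (m^2/s)
E   = 0.1 + 0.3 * u[:, 1]          # emanation coefficient (-)
inv = 1e3 * (0.5 + u[:, 2])        # Rn-222 source inventory (arbitrary units)

flux = inv * E * np.sqrt(D)        # toy surrogate for a flux-density model

# Sample-based (rank correlation) sensitivity measure, one of the methods named above.
for name, x in [("diffusion", D), ("emanation", E), ("inventory", inv)]:
    rho, _ = spearmanr(x, flux)
    print(f"{name:9s} Spearman rho = {rho:+.2f}")
```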

  13. Controls on inorganic nitrogen leaching from Finnish catchments assessed using a sensitivity and uncertainty analysis of the INCA-N model

    Energy Technology Data Exchange (ETDEWEB)

    Rankinen, K.; Granlund, K. [Finnish Environmental Inst., Helsinki (Finland); Futter, M. N. [Swedish Univ. of Agricultural Sciences, Uppsala (Sweden)

    2013-11-01

    The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low flow periods; so effects on total loads were small. (orig.)
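    Generalized sensitivity analysis of the Hornberger-Spear type can be sketched as a behavioural/non-behavioural split followed by a per-parameter distribution comparison (toy parameter names, ranges, and error function; not the INCA-N setup):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
n = 5000

# Monte Carlo parameter sets (hypothetical ranges standing in for INCA-N parameters).
params = {
    "nitrification_rate":   rng.uniform(0.0, 1.0, n),
    "denitrification_rate": rng.uniform(0.0, 1.0, n),
    "growing_season_start": rng.uniform(60, 150, n),   # day of year
}

# Toy response standing in for the simulated stream DIN error; smaller is better.
error = (np.abs(params["nitrification_rate"] - 0.4)
         + 0.1 * np.abs(params["growing_season_start"] - 100) / 90
         + 0.02 * rng.standard_normal(n))

behavioural = error < np.quantile(error, 0.1)   # keep the best 10% of runs

# A parameter is influential if its distribution differs between behavioural
# and non-behavioural runs (Kolmogorov-Smirnov statistic as the distance).
for name, values in params.items():
    d, p = ks_2samp(values[behavioural], values[~behavioural])
    print(f"{name:22s} KS D = {d:.2f}")
```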

  14. Integrated proteomic and N-glycoproteomic analyses of doxorubicin sensitive and resistant ovarian cancer cells reveal glycoprotein alteration in protein abundance and glycosylation.

    Science.gov (United States)

    Ji, Yanlong; Wei, Shasha; Hou, Junjie; Zhang, Chengqian; Xue, Peng; Wang, Jifeng; Chen, Xiulan; Guo, Xiaojing; Yang, Fuquan

    2017-01-06

    Ovarian cancer is one of the most common cancers among women in the world, and chemotherapy remains the principal treatment for patients. However, drug resistance is a major obstacle to the effective treatment of ovarian cancers, and the underlying mechanism is not clear. An increased understanding of the mechanisms that underlie the pathogenesis of drug resistance is therefore needed to develop novel therapeutics and diagnostics. Herein, we report the comparative analysis of the doxorubicin-sensitive OVCAR8 cells and their doxorubicin-resistant variant NCI/ADR-RES cells using integrated global proteomics and N-glycoproteomics. A total of 1525 unique N-glycosite-containing peptides from 740 N-glycoproteins were identified and quantified, of which 253 N-glycosite-containing peptides showed significant changes in the NCI/ADR-RES cells. Meanwhile, stable isotope labeling by amino acids in cell culture (SILAC) based comparative proteomic analysis of the two ovarian cancer cell lines led to the quantification of 5509 proteins. As about 50% of the identified N-glycoproteins are low-abundance membrane proteins, only 44% of the quantified unique N-glycosite-containing peptides had corresponding protein expression ratios. The comparison and calibration of the N-glycoproteome versus the proteome classified 14 change patterns of N-glycosite-containing peptides, including 8 up-regulated N-glycosite-containing peptides with increased glycosylation-site occupancy, 35 up-regulated N-glycosite-containing peptides with unchanged glycosylation-site occupancy, 2 down-regulated N-glycosite-containing peptides with decreased glycosylation-site occupancy, and 46 down-regulated N-glycosite-containing peptides with unchanged glycosylation-site occupancy. Integrated proteomic and N-glycoproteomic analyses provide new insights, which can help to unravel the relationship of N-glycosylation and multidrug resistance (MDR), understand the mechanism of MDR, and discover the new diagnostic and

  15. Ocean acidification over the next three centuries using a simple global climate carbon-cycle model: projections and sensitivities

    Science.gov (United States)

    Hartin, Corinne A.; Bond-Lamberty, Benjamin; Patel, Pralit; Mundra, Anupriya

    2016-08-01

    Continued oceanic uptake of anthropogenic CO2 is projected to significantly alter the chemistry of the upper oceans over the next three centuries, with potentially serious consequences for marine ecosystems. Relatively few models have the capability to make projections of ocean acidification, limiting our ability to assess the impacts and probabilities of ocean changes. In this study we examine the ability of Hector v1.1, a reduced-form global model, to project changes in the upper ocean carbonate system over the next three centuries, and quantify the model's sensitivity to parametric inputs. Hector is run under prescribed emission pathways from the Representative Concentration Pathways (RCPs) and compared to both observations and a suite of Coupled Model Intercomparison Project Phase 5 (CMIP5) model outputs. Current observations confirm that ocean acidification is already taking place, and CMIP5 models project significant changes occurring to 2300. Hector is consistent with the observational record within both the high- (>55°) and low-latitude oceans under RCP 8.5. The magnitudes and trends of ocean acidification within Hector are largely consistent with the CMIP5 model outputs, although we identify some small biases within Hector's carbonate system. Of the parameters tested, changes in [H+] are most sensitive to parameters that directly affect atmospheric CO2 concentrations - Q10 (the terrestrial respiration temperature response) - as well as changes in ocean circulation, while changes in ΩAr saturation levels are sensitive to changes in ocean salinity and Q10. We conclude that Hector is a robust tool well suited for rapid ocean acidification projections and sensitivity analyses, and that it is capable of emulating both current observations and large-scale climate models under multiple emission pathways.

  16. Hindcasting to measure ice sheet model sensitivity to initial states

    Directory of Open Access Journals (Sweden)

    A. Aschwanden

    2013-07-01

    Validation is a critical component of model development, yet notoriously challenging in ice sheet modeling. Here we evaluate how an ice sheet system model responds to a given forcing. We show that hindcasting, i.e. forcing a model with known or closely estimated inputs for past events to see how well the output matches observations, is a viable method of assessing model performance. By simulating the recent past of Greenland, and comparing to observations of ice thickness, ice discharge, surface speeds, mass loss and surface elevation changes for validation, we find that the short-term model response is strongly influenced by the initial state. We show that the thermal and dynamical states (i.e. the distribution of internal energy and momentum) can be misrepresented despite a good agreement with some observations, stressing the importance of using multiple observations. In particular we identify rates of change of spatially dense observations as preferred validation metrics. Hindcasting enables a qualitative assessment of model performance relative to observed rates of change. It thereby reduces the number of admissible initial states more rigorously than validation efforts that do not take advantage of observed rates of change.

  17. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Science.gov (United States)

    Bastidas, Luis A.; Knighton, James; Kline, Shaun W.

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  18. Physiologically based pharmacokinetic modeling of a homologous series of barbiturates in the rat: a sensitivity analysis.

    Science.gov (United States)

    Nestorov, I A; Aarons, L J; Rowland, M

    1997-08-01

    Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs, and to study the changes in sensitivity induced by the increase in lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as the sensitivity of a state to any of its own parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters has been analyzed: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearances, and the permeability-surface area product of the brain. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall
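    The "traditional definition" of the normalized sensitivity function used in the second approach is easy to reproduce on a one-compartment surrogate (illustration only; the whole-body PBPK model has many more states and parameters):

```python
import numpy as np

def conc(t, dose, V, CL):
    """One-compartment i.v. bolus: C(t) = (dose/V) * exp(-(CL/V) * t)."""
    return dose / V * np.exp(-CL / V * t)

def normalized_sensitivity(t, dose, V, CL, param, h=1e-6):
    """S = (p / y) * dy/dp via central differences, i.e. the relative change in a
    state per relative change in a parameter, as defined in the record above."""
    base = dict(dose=dose, V=V, CL=CL)
    up, dn = dict(base), dict(base)
    up[param] *= 1 + h
    dn[param] *= 1 - h
    dy = conc(t, **up) - conc(t, **dn)
    dp = 2 * h * base[param]
    return base[param] / conc(t, **base) * dy / dp

t = 2.0
for p in ("V", "CL"):
    print(p, round(normalized_sensitivity(t, dose=100, V=10.0, CL=5.0, param=p), 3))
# Analytically: S_CL = -CL*t/V = -1.0, and S_V = -1 + CL*t/V = 0.0 at this t.
```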

  19. Construct Validity and Reliability of the Adult Rejection Sensitivity Questionnaire: A Comparison of Three Factor Models

    Directory of Open Access Journals (Sweden)

    Marco Innamorati

    2014-01-01

    Objectives. The aim of the study was to investigate the construct validity of the ARSQ. Methods. The ARSQ and self-report measures of depression, anxiety, and hopelessness were administered to 774 Italian adults, aged 18 to 64 years. Results. Structural equation modeling indicated that the factor structure of the ARSQ can be represented by a bifactor model: a general rejection sensitivity factor and two group factors, expectancy of rejection and rejection anxiety. Reliability of observed scores was not satisfactory: only 44% of the variance in observed total scores was due to the common factors. The analyses also indicated different correlates for the general factor and the group factors. Limitations. We administered an Italian version of the ARSQ to a nonclinical sample of adults, so studies using clinical populations or the original version of the ARSQ could obtain different results from those presented here. Conclusion. Our results suggest that the construct validity of the ARSQ is disputable and that rejection anxiety and expectancy could bias individuals to readily perceive and strongly react to cues of rejection in different ways.
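    The reported 44% figure is a statement about how much total-score variance the common factors explain; given a standardized bifactor loading matrix, that proportion can be computed directly (the loadings below are invented for illustration):

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 6 items: one general
# rejection-sensitivity factor and two group factors (expectancy, anxiety).
general = np.array([0.5, 0.5, 0.4, 0.4, 0.5, 0.4])
grp_exp = np.array([0.4, 0.4, 0.4, 0.0, 0.0, 0.0])   # expectancy items
grp_anx = np.array([0.0, 0.0, 0.0, 0.4, 0.4, 0.4])   # anxiety items

uniq = 1 - (general**2 + grp_exp**2 + grp_anx**2)    # item uniquenesses
common = general.sum()**2 + grp_exp.sum()**2 + grp_anx.sum()**2
var_total = common + uniq.sum()

# Proportion of total-score variance due to all common factors (omega total);
# the ARSQ analysis reported ~0.44 for its own loading matrix.
print(round(common / var_total, 2))
```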

  20. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell-averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  1. Combined calibration and sensitivity analysis for a water quality model of the Biebrza River, Poland

    NARCIS (Netherlands)

    Perk, van der M.; Bierkens, M.F.P.

    1995-01-01

    A study was performed to quantify the error in results of a water quality model of the Biebrza River, Poland, due to uncertainties in calibrated model parameters. The procedure used in this study combines calibration and sensitivity analysis. Finally,the model was validated to test the model capabil

  2. Stochastic uncertainties and sensitivities of a regional-scale transport model of nitrate in groundwater

    NARCIS (Netherlands)

    Brink, C.v.d.; Zaadnoordijk, W.J.; Burgers, S.; Griffioen, J.

    2008-01-01

    Groundwater quality management relies more and more on models in recent years. These models are used to predict the risk of groundwater contamination for various land uses. This paper presents an assessment of uncertainties and sensitivities to input parameters for a regional model. The model had

  3. Maternal sensitivity and language in early childhood: a test of the transactional model.

    Science.gov (United States)

    Leigh, Patricia; Nievar, M Angela; Nathans, Laura

    2011-08-01

    This study examined the relation between mothers' sensitive responsiveness to their children and the children's expressive language skills during early childhood. Reciprocal effects were tested with dyads of mothers and their children participating in the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development. Sensitive maternal interactions positively affected children's later expressive language in the second and third years of life. Although maternal sensitivity predicted later language skills in children, children's language did not affect later maternal sensitivity, as indicated in a structural equation model. These results do not support Sameroff and Chandler's 1975 transactional model of child development. A consistent pattern of sensitivity throughout infancy and early childhood indicates the importance of fostering maternal sensitivity in infancy for prevention or remediation of expressive language problems in young children.

  4. Virtual patients and sensitivity analysis of the Guyton model of blood pressure regulation: towards individualized models of whole-body physiology.

    Directory of Open Access Journals (Sweden)

    Robert Moss

    Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a "virtual population" from which "virtual individuals" can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the "virtual individuals" that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models.
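    The virtual-population step can be sketched independently of the underlying physiology: sample parameter perturbations, run the model, and keep the parameter sets whose outputs match a target condition (the pressure function and parameter names below are placeholders, not Guyton-model quantities):

```python
import numpy as np

rng = np.random.default_rng(3)

def toy_pressure(params):
    """Placeholder mapping from parameters to mean arterial pressure (mmHg);
    the actual Guyton model integrates hundreds of coupled equations."""
    return 90 + 40 * (params["renal_function"] - 1) + 20 * (params["vascular_tone"] - 1)

# "Virtual population": multiplicative perturbations of nominal parameter values.
population = [{"renal_function": rng.lognormal(0, 0.15),
               "vascular_tone":  rng.lognormal(0, 0.15)} for _ in range(10_000)]

# Select "virtual individuals" whose outputs resemble a hypertensive patient.
individuals = [p for p in population if 110 <= toy_pressure(p) <= 130]
print(len(individuals), "virtual hypertensive individuals")
```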

  6. GCR Environmental Models I: Sensitivity Analysis for GCR Environments

    Science.gov (United States)

    Slaba, Tony C.; Blattnig, Steve R.

    2014-01-01

    Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate model-free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most of the GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to very accurately reproduce the ACE/CRIS data while inducing very different effective dose values behind shielding.

  7. Improving model fidelity and sensitivity for complex systems through empirical information theory

    Science.gov (United States)

    Majda, Andrew J.; Gershgorin, Boris

    2011-01-01

    In many situations in contemporary science and engineering, the analysis and prediction of crucial phenomena occur often through complex dynamical equations that have significant model errors compared with the true signal in nature. Here, a systematic information theoretic framework is developed to improve model fidelity and sensitivity for complex systems including perturbation formulas and multimodel ensembles that can be utilized to improve both aspects of model error simultaneously. A suite of unambiguous test models is utilized to demonstrate facets of the proposed framework. These results include simple examples of imperfect models with perfect equilibrium statistical fidelity where there are intrinsic natural barriers to improving imperfect model sensitivity. Linear stochastic models with multiple spatiotemporal scales are utilized to demonstrate this information theoretic approach to equilibrium sensitivity, the role of increasing spatial resolution in the information metric for model error, and the ability of imperfect models to capture the true sensitivity. Finally, an instructive statistically nonlinear model with many degrees of freedom, mimicking the observed non-Gaussian statistical behavior of tracers in the atmosphere, with corresponding imperfect eddy-diffusivity parameterization models are utilized here. They demonstrate the important role of additional stochastic forcing of imperfect models in order to systematically improve the information theoretic measures of fidelity and sensitivity developed here. PMID:21646534
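    The information-theoretic yardstick in such frameworks is typically a relative entropy between the true and model statistics; for Gaussian climatologies it has a closed form (a one-dimensional sketch with made-up numbers):

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy P(truth) || Q(model) for 1-D Gaussians -- the kind of
    information-theoretic model-error metric such frameworks are built on."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# A model with a perfect equilibrium mean but too little variance still carries
# an information deficit (hypothetical numbers for illustration).
print(kl_gaussian(mu_p=0.0, var_p=2.0, mu_q=0.0, var_q=1.0))   # ~0.15 nats
```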

  8. Global Sensitivity Analysis for Multiple Scenarios and Models of Nitrogen Processes

    Science.gov (United States)

    Chen, Z.; Shi, L.; Ye, M.

    2015-12-01

    Modeling nitrogen process in soil is a long-lasting challenge partly because of the uncertainties from parameters, models and scenarios. It may be difficult to identify a suitable model and its corresponding parameters.This study assesses the global sensitivity indices for parameters of multiple models and scenarios on nitrogen processes. The majority of existing nitrogen dynamics models consider nitrification and denitrification as a first-order decay process or a Michaelis-Menten model, while various reduction functions are used to reflect the impact of environmental soil conditions. To determine the model uncertainty, 9 alternative models were designed based on NP2D model in this study. These