Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
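The residual-sampling scheme described above can be sketched in a few lines. The model chain and residual distributions below are toy stand-ins (not the Sandia models or the FS-387 system): each stage of a simplified PV chain adds a draw from that stage's empirical residual distribution, and the resulting sample approximates the distribution of predicted AC power.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical empirical residuals (prediction minus measurement) for three
# stages of a toy PV model chain; in practice these come from validation data.
residuals = {
    "poa": rng.normal(0.0, 8.0, size=500),     # plane-of-array irradiance stage
    "t_cell": rng.normal(0.0, 1.5, size=500),  # cell-temperature stage
    "ac": rng.normal(0.0, 0.8, size=500),      # DC-to-AC conversion stage
}

def sample_ac_power(ghi, t_amb, n=2000):
    """Propagate model uncertainty by resampling each stage's residuals."""
    poa = 1.1 * ghi + rng.choice(residuals["poa"], n)               # toy transposition
    t_cell = t_amb + 0.03 * poa + rng.choice(residuals["t_cell"], n)
    dc = 0.18 * poa * (1 - 0.004 * (t_cell - 25.0))                 # toy DC model
    return 0.96 * dc + rng.choice(residuals["ac"], n)               # AC power (arbitrary units)

# Empirical output distribution for one irradiance/weather condition.
dist = sample_ac_power(ghi=800.0, t_amb=20.0)
print(np.median(dist), np.std(dist))
```

Summing or integrating such samples over a day yields the empirical distribution of daily energy from which uncertainty statements like "on the order of 1%" can be read off.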
Wolf, J.
2002-01-01
To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model
Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.
Energy Technology Data Exchange (ETDEWEB)
Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul
2014-09-01
directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).
Directory of Open Access Journals (Sweden)
Zachary Subin
2012-02-01
Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly averaged surface fluxes (up to 30 W m⁻²) were found when excluding snow insulation or phase change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical
Sensitivity analyses of a global flood model in different geoclimatic regions
Moylan, C.; Neal, J. C.; Freer, J. E.; Pianosi, F.; Wagener, T.; Sampson, C. C.; Smith, A.
2017-12-01
Flood models producing global hazard maps now exist, although with significant variation in the modelled hazard extent. Beyond explicit structural differences, the reasons for this variation are unknown. Understanding the behaviour of these global flood models is necessary to determine how they can be further developed. A preliminary sensitivity analysis was performed using the Morris method on the Bristol global flood model, which has 37 parameters required to translate remotely sensed data into input for the underlying hydrodynamic model. This number of parameters implies excess complexity for flood modelling and should ideally be reduced. The analysis showed an order-of-magnitude difference in parameter sensitivities when comparing total flooded extent. It also showed that the influence of the most important parameters is largely interactive rather than direct, and the ranking contradicted prior expectations of which parameters would matter most. Despite these findings, conclusions about the model are limited because the geoclimatic features of the analysed location were fixed. Hence more locations with varied geoclimatic characteristics must be chosen, so that the consistencies and deviations of parameter sensitivities across these features become quantifiable. Locations are selected using a novel sampling technique, which aggregates the input data of a domain into representative metrics of the geoclimatic features hypothesised to correlate with one or more parameters. Combinations of these metrics are sampled across a range of geoclimatic areas, and the sensitivities found are correlated with the sampled metrics. From this work, we identify the main influences on flood risk prediction at the global scale for the model structure used, via a methodology transferable to other global flood models.
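The Morris screening referred to above ranks parameters by one-at-a-time "elementary effects" computed along random trajectories through parameter space. A minimal sketch on a hypothetical three-parameter toy function (not the Bristol flood model, whose 37 parameters and hydrodynamics are far richer):

```python
import numpy as np

def morris_screening(model, n_params, n_trajectories=50, delta=0.1, seed=0):
    """Minimal Morris one-at-a-time screening on the unit hypercube.

    Returns mu_star (mean absolute elementary effect) per parameter;
    larger values indicate more influential parameters.
    """
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        # Start each trajectory at a random point, leaving room for +delta steps.
        x = rng.uniform(0, 1 - delta, size=n_params)
        y = model(x)
        # Perturb one parameter at a time in a random order, chaining the steps.
        for i in rng.permutation(n_params):
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    return np.array([np.mean(np.abs(e)) for e in effects])

# Toy model: x0 dominates, x1 is moderate and nonlinear, x2 is inert.
def toy(x):
    return 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

mu_star = morris_screening(toy, n_params=3)
print(np.argsort(mu_star)[::-1])  # parameters ordered by influence
```

With many more parameters than trajectories can cheaply cover, exactly this kind of screening is what motivates reducing the 37-parameter set before a full variance-based analysis.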
Uncertainty and Sensitivity Analyses Plan
International Nuclear Information System (INIS)
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean
2013-04-01
Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing, partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
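For independent, normally distributed management outcomes, the probability of clearing a threshold of acceptability has a closed form, and the qualitative result above (diversify for low aspirations, concentrate for high ones) can be reproduced with a toy two-action example. The means, standard deviations, and thresholds below are illustrative assumptions, not values from the study:

```python
import math

def prob_above_threshold(a, mu, sigma, threshold):
    """P(a*X1 + (1-a)*X2 >= threshold) for independent normal outcomes X1, X2."""
    mean = a * mu[0] + (1 - a) * mu[1]
    sd = math.sqrt((a * sigma[0]) ** 2 + ((1 - a) * sigma[1]) ** 2)
    z = (mean - threshold) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def best_allocation(mu, sigma, threshold, steps=101):
    """Grid search for the budget fraction a that maximizes the probability."""
    grid = [i / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda a: prob_above_threshold(a, mu, sigma, threshold))

# Two actions with equal expected outcome and equal uncertainty.
mu, sigma = (1.0, 1.0), (0.3, 0.3)
low = best_allocation(mu, sigma, threshold=0.8)   # modest aspiration
high = best_allocation(mu, sigma, threshold=1.2)  # ambitious aspiration
print(low, high)
```

With a threshold below the expected outcome, the optimum minimizes variance (an even split); with a threshold above it, the optimum maximizes variance (all funds on one action), mirroring the diversify-versus-concentrate finding.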
Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models
International Nuclear Information System (INIS)
Bandrowski, M.S.
1988-01-01
The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects.
Hall, Carlton Raden
A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) (m⁻¹), diffuse backscatter b(λ) (m⁻¹), beam attenuation α(λ) (m⁻¹), and beam-to-diffuse conversion c(λ) (m⁻¹) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf
Hermann, Albert J.; Stabeno, Phyllis J.; Haidvogel, Dale B.; Musgrave, David L.
2002-12-01
A regional eddy-resolving primitive equation circulation model was used to simulate circulation on the southeastern Bering Sea (SEBS) shelf and basin. This model resolves the dominant observed mean currents, eddies and meanders in the region, and simultaneously includes both tidal and subtidal dynamics. Circulation, temperature, and salinity fields for years 1995 and 1997 were hindcast, using daily wind and buoyancy flux estimates, and tidal forcing derived from a global model. This paper describes the development of the regional model, a comparison of model results with available Eulerian and Lagrangian data, a comparison of results between the two hindcast years, and a sensitivity analysis. Based on these hindcasts and sensitivity analyses, we suggest the following: (1) The Bering Slope Current is a primary source of large (~100 km diameter) eddies in the SEBS basin. Smaller meanders are also formed along the 100-m isobath on the southeastern shelf, and along the 200-m isobath near the shelf break. (2) There is substantial interannual variability in the statistics of eddies within the basin, driven by variability in the strength of the ANSC. (3) The mean flow on the shelf is not strongly sensitive to changes in the imposed strength of the ANSC; rather, it is strongly sensitive to the local wind forcing. (4) Vertical mixing in the SEBS is strongly affected by both tidal and subtidal dynamics. Strongest mixing in the SEBS may in fact occur between the 100- and 400-m isobaths, near the Pribilof Islands, and in Unimak Pass.
Energy Technology Data Exchange (ETDEWEB)
Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis
2016-09-01
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5 and 11.1.6, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version, 11.1. All TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 based on GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
Tests of methods and software for set-valued model calibration and sensitivity analyses
Janssen PHM; Sanders R; CWM
1995-01-01
Tests are discussed that were performed on methods and software for calibration by means of 'rotated-random-scanning', and for sensitivity analysis based on 'dominant direction analysis' and 'generalized sensitivity analysis'. These techniques were
Response surfaces and sensitivity analyses for an environmental model of dose calculations
International Nuclear Information System (INIS)
Iooss, Bertrand; Van Dorpe, Francois; Devictor, Nicolas
2006-01-01
A parametric sensitivity analysis is carried out on GASCON, a radiological impact software package describing radionuclide transfer to man following a chronic gas release from a nuclear facility. An effective dose received by age group can thus be calculated according to a specific radionuclide and to the duration of the release. In this study, we are concerned with 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computational cost; these response surfaces are very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate Sobol total sensitivity indices by the Monte Carlo method. We demonstrate the application of this method to one study site and to one reference group near the nuclear research center of Cadarache (France), for two radionuclides: iodine-129 and uranium-238. It is thus shown that the most influential parameters are all related to the goat's-milk food chain; in decreasing order of importance: the 'effective ingestion' dose coefficient, the goat's-milk ration of the individuals of the reference group, the grass ration of the goat, the dry deposition velocity, and the transfer factor to goat's milk.
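The two-step workflow described (fit a cheap polynomial response surface to a limited number of expensive runs, then estimate Sobol total indices on the surrogate by Monte Carlo) can be sketched as follows. The three-parameter model is a toy stand-in for GASCON, and the Jansen estimator used here is one common choice for the total-effect index:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(X):
    """Toy stand-in for the costly simulator: x0 dominates, x2 is nearly inert."""
    return 4.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * X[:, 2]

# --- Step 1: fit a quadratic response surface on a small training sample ---
X_train = rng.uniform(0, 1, size=(200, 3))
y_train = expensive_model(X_train)

def features(X):
    # Quadratic polynomial basis without cross terms: [1, x_i, x_i^2].
    return np.column_stack([np.ones(len(X)), X, X ** 2])

beta, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)

def surrogate(X):
    return features(X) @ beta

# --- Step 2: Sobol total indices on the cheap surrogate (Jansen estimator) ---
N = 20000
A = rng.uniform(0, 1, size=(N, 3))
B = rng.uniform(0, 1, size=(N, 3))
fA = surrogate(A)
var = fA.var()
S_T = []
for i in range(3):
    AB = A.copy()
    AB[:, i] = B[:, i]          # resample only parameter i
    S_T.append(np.mean((fA - surrogate(AB)) ** 2) / (2 * var))
print([round(s, 2) for s in S_T])
```

Thousands of surrogate evaluations cost almost nothing, which is exactly why the response surface is built before the global analysis.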
Sensitivity in risk analyses with uncertain numbers.
Energy Technology Data Exchange (ETDEWEB)
Tucker, W. Troy; Ferson, Scott
2006-06-01
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
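A pinching study of the kind described can be illustrated with plain Monte Carlo rather than probability bounds analysis: fix one uncertain input at a nominal value and measure how much of the output uncertainty disappears. The dike-margin model and distributions below are toy assumptions, not the report's case study:

```python
import numpy as np

rng = np.random.default_rng(7)

def dike_margin(strength, load):
    """Toy reliability margin: positive means the dike holds."""
    return strength - load

N = 50000
strength = rng.normal(10.0, 2.0, N)  # uncertain input to be "pinched"
load = rng.normal(6.0, 1.0, N)       # remaining uncertain input

base_var = np.var(dike_margin(strength, load))

# Pinch the strength input to its central value and see what fraction of the
# output variance disappears; a large reduction marks an input worth studying.
pinched_var = np.var(dike_margin(np.full(N, 10.0), load))
reduction = 1 - pinched_var / base_var
print(round(float(reduction), 2))
```

With uncertain numbers the same idea applies, except the "uncertainty" being pinched and measured is the width of a probability box rather than a variance.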
Directory of Open Access Journals (Sweden)
Ilona Naujokaitis-Lewis
2016-07-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat
Snyder, Ian Michael; The ATLAS collaboration
2018-01-01
The sensitivity of searches for the direct pair production of stops has often been evaluated in simple SUSY scenarios, where only a limited set of supersymmetric particles takes part in the stop decay. In this talk, interpretations of the analyses requiring zero, one or two leptons in the final state in simple but well-motivated MSSM scenarios will be discussed.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
DEFF Research Database (Denmark)
Boiocchi, Riccardo; Gernaey, Krist; Sin, Gürkan
2017-01-01
In the present work, sensitivity analyses are performed on a plant-wide model incorporating the typical treatment units of a full-scale wastewater treatment plant and N2O production and emission dynamics. The influence of operating temperature is investigated. The results are exploited to identify...
Directory of Open Access Journals (Sweden)
Weadick Cameron J
2012-10-01
Full Text Available Abstract Background Gene duplications play an important role in the evolution of functional protein diversity. Some models of duplicate gene evolution predict complex forms of paralog divergence; orthologous proteins may diverge as well, further complicating patterns of divergence among and within gene families. Consequently, studying the link between protein sequence evolution and duplication requires the use of flexible substitution models that can accommodate multiple shifts in selection across a phylogeny. Here, we employed a variety of codon substitution models, primarily Clade models, to explore how selective constraint evolved following the duplication of a green-sensitive (RH2a visual pigment protein (opsin in African cichlids. Past studies have linked opsin divergence to ecological and sexual divergence within the African cichlid adaptive radiation. Furthermore, biochemical and regulatory differences between the RH2aα and RH2aβ paralogs have been documented. It thus seems likely that selection varies in complex ways throughout this gene family. Results Clade model analysis of African cichlid RH2a opsins revealed a large increase in the nonsynonymous-to-synonymous substitution rate ratio (ω following the duplication, as well as an even larger increase, one consistent with positive selection, for Lake Tanganyikan cichlid RH2aβ opsins. Analysis using the popular Branch-site models, by contrast, revealed no such alteration of constraint. Several amino acid sites known to influence spectral and non-spectral aspects of opsin biochemistry were found to be evolving divergently, suggesting that orthologous RH2a opsins may vary in terms of spectral sensitivity and response kinetics. Divergence appears to be occurring despite intronic gene conversion among the tandemly-arranged duplicates. Conclusions Our findings indicate that variation in selective constraint is associated with both gene duplication and divergence among orthologs in African
Directory of Open Access Journals (Sweden)
Benjamin L. Turner
2016-10-01
Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, the cumulative income effect on time in agriculture, land-use preference due to time allocation, the community demographic effect, the effect of employment on participation, and the farm-size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
Energy Technology Data Exchange (ETDEWEB)
Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)
2013-02-15
The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated in recent years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in the case of a release of radioactivity, the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of ¹³⁵Cs, ⁵⁹Ni, ²³⁰Th and ²²⁶Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
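The probabilistic (Monte Carlo) sensitivity calculations described above can be sketched in miniature: sample uncertain till parameters from assumed ranges and propagate them through a one-dimensional advective travel-time model with linear sorption. All parameter names, values and ranges below are illustrative stand-ins, not data from the report.

```python
import random

def travel_time(kd, k_hyd, gradient=0.01, porosity=0.3, bulk_density=1.6, length=10.0):
    """Advective travel time (yr) through a 1-D till column with linear sorption.
    R = 1 + rho_b * Kd / theta;  v = K * i / theta;  t = L * R / v."""
    retardation = 1.0 + bulk_density * kd / porosity
    velocity = k_hyd * gradient / porosity          # pore velocity, m/yr
    return length * retardation / velocity

random.seed(1)
# Monte Carlo sensitivity run: vary Kd and hydraulic conductivity jointly
# within assumed uniform ranges and look at the spread of the output.
samples = [travel_time(kd=random.uniform(0.01, 1.0),       # m3/kg, assumed range
                       k_hyd=random.uniform(10.0, 100.0))  # m/yr, assumed range
           for _ in range(10_000)]
samples.sort()
print(f"median travel time: {samples[len(samples) // 2]:.1f} yr")
print(f"5th-95th percentile: {samples[500]:.1f} - {samples[9500]:.1f} yr")
```

A wide percentile band relative to the median flags the parameters as influential; a narrow one supports the robustness conclusion drawn in the report.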
Energy Technology Data Exchange (ETDEWEB)
Petelet, M
2008-07-01
The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
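The variance-based global sensitivity analysis applied in the thesis can be illustrated with a brute-force estimator of first-order Sobol indices on a toy response. The three-input model and its coefficients below are invented stand-ins, not welding data.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for a welding response (e.g. residual stress) as a
    function of three scaled material properties in [0, 1]."""
    return 2.0 * x[..., 0] + 0.5 * x[..., 1] + 0.1 * x[..., 2]

def first_order_sobol(model, dim, n_outer=2000, n_inner=200):
    """Brute-force double-loop estimate of first-order Sobol indices
    S_i = Var(E[Y | X_i]) / Var(Y) for independent U(0,1) inputs.
    (Inner-loop noise biases very small indices slightly upward.)"""
    var_y = model(rng.random((n_outer * n_inner, dim))).var()
    indices = []
    for i in range(dim):
        outer = rng.random(n_outer)
        inner = rng.random((n_outer, n_inner, dim))
        inner[..., i] = outer[:, None]          # freeze X_i along the inner loop
        cond_mean = model(inner).mean(axis=1)   # E[Y | X_i] per outer sample
        indices.append(cond_mean.var() / var_y)
    return indices

s = first_order_sobol(model, dim=3)
print([round(v, 2) for v in s])   # the first input dominates, as the coefficients suggest
```

Inputs whose index is near zero are exactly those the thesis concludes can be "simply extrapolated or taken from a similar material".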
SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES
Energy Technology Data Exchange (ETDEWEB)
Flach, G.
2014-10-28
PORFLOW-related analyses supporting a Sensitivity Analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.
CSIR Research Space (South Africa)
Nickless, A
2014-05-01
Full Text Available This is the second part of a two-part paper considering network design based on a Lagrangian stochastic particle dispersion model (LPDM), aimed at reducing the uncertainty of the flux estimates achievable for the region of interest by the continuous...
Directory of Open Access Journals (Sweden)
R. C. Basso
Full Text Available Abstract The goals of this work were to present original liquid-liquid equilibrium data of the system containing glycerol + ethanol + ethyl biodiesel from fodder radish oil, including the individual distribution of each ethyl ester; to adjust binary parameters of the NRTL; to compare NRTL and UNIFAC-Dortmund in the LLE representation of the system containing glycerol; to simulate different mixer/settler flowsheets for biodiesel purification, evaluating the ratio water/biodiesel used. In thermodynamic modeling, the deviations between experimental data and calculated values were 0.97% and 3.6%, respectively, using NRTL and UNIFAC-Dortmund. After transesterification, with 3 moles of excess ethanol, removal of this component until a content equal to 0.08 before an ideal settling step allows a glycerol content lower than 0.02% in the ester-rich phase. Removal of ethanol, glycerol and water from biodiesel can be performed with countercurrent mixer/settler, using 0.27% of water in relation to the ester amount in the feed stream.
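The binary NRTL activity-coefficient equations underlying such LLE modeling can be sketched in a few lines. The parameter values below are illustrative, not the fitted values from this work.

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) from the binary NRTL model."""
    x2 = 1.0 - x1
    g12 = math.exp(-alpha * tau12)
    g21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Illustrative (not fitted) interaction parameters; note the limiting law
# ln gamma1_inf = tau21 + tau12 * exp(-alpha * tau12) at infinite dilution.
g1, g2 = nrtl_binary(0.3, tau12=1.2, tau21=0.8)
print(round(g1, 3), round(g2, 3))
```

Fitting τ12 and τ21 to experimental tie-line data is what "adjusting binary interaction parameters" amounts to in practice.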
Balancing data sharing requirements for analyses with data sensitivity
Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.
2007-01-01
Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species current distributions from data sharing is critical for the creation of watch lists and an early warning/rapid response system and for model generation for the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' in mapping applications and downloaded files to quarter-quadrangle grid cells, but the actual locations are available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
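The 'fuzzing' step can be sketched as snapping sensitive coordinates to the centroid of a grid cell. The 3.75-arc-minute quarter-quadrangle cell size and the sample record below are assumptions for illustration, not details from the paper.

```python
import math

CELL = 3.75 / 60.0   # assumed quarter-quad cell size: 3.75 arc-minutes, in degrees

def fuzz(lat, lon, sensitive):
    """Return display coordinates: sensitive records are snapped to the
    centroid of their grid cell; non-sensitive records pass through."""
    if not sensitive:
        return lat, lon
    snap = lambda v: math.floor(v / CELL) * CELL + CELL / 2
    return snap(lat), snap(lon)

# The precise location stays in the database for analyses; only the
# fuzzed location is exposed in maps and downloaded files.
print(fuzz(40.5872, -105.0844, sensitive=True))   # (40.59375, -105.09375)
```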
Energy Technology Data Exchange (ETDEWEB)
Riley, W.J.; Still, C.J.; Torn, M.S.; Berry, J.A.
2002-01-01
The concentration of ¹⁸O in atmospheric CO₂ and H₂O is a potentially powerful tracer of ecosystem carbon and water fluxes. In this paper we describe the development of an isotope model (ISOLSM) that simulates the ¹⁸O content of canopy water vapor, leaf water, and vertically resolved soil water; leaf photosynthetic ¹⁸OC¹⁶O (hereafter C¹⁸OO) fluxes; CO₂ oxygen isotope exchanges with soil and leaf water; soil CO₂ and C¹⁸OO diffusive fluxes (including abiotic soil exchange); and ecosystem exchange of H₂¹⁸O and C¹⁸OO with the atmosphere. The isotope model is integrated into the land surface model LSM, but coupling with other models should be straightforward. We describe ISOLSM and apply it to evaluate (a) simplified methods of predicting the C¹⁸OO soil-surface flux; (b) the impacts on the C¹⁸OO soil-surface flux of the soil-gas diffusion coefficient formulation, soil CO₂ source distribution, and rooting distribution; (c) the impacts on the C¹⁸OO fluxes of carbonic anhydrase (CA) activity in soil and leaves; and (d) the sensitivity of model predictions to the δ¹⁸O value of atmospheric water vapor and CO₂. Previously published simplified models are unable to capture the seasonal and diurnal variations in the C¹⁸OO soil-surface fluxes simulated by ISOLSM. Differences in the assumed soil CO₂ production and rooting depth profiles, carbonic anhydrase activity in soil and leaves, and the δ¹⁸O value of atmospheric water vapor have substantial impacts on the ecosystem CO₂ flux isotopic composition. We conclude that accurate prediction of C¹⁸OO ecosystem fluxes requires careful representation of H₂¹⁸O and C¹⁸OO exchanges and transport in soils and plants.
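The isotopic delta notation that such models report is a one-line conversion from an isotope ratio to per-mil units. The VSMOW reference ratio below is the accepted literature value; the sample ratio is invented for illustration.

```python
R_VSMOW = 2005.2e-6   # accepted 18O/16O ratio of the VSMOW standard

def delta18O(r_sample):
    """delta-18O in per mil relative to VSMOW: (R_sample / R_std - 1) * 1000."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

# A sample slightly enriched in 18O relative to the standard:
print(round(delta18O(2010.0e-6), 2))
```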
Sensitivity analyses on in-vessel hydrogen generation for KNGR
International Nuclear Information System (INIS)
Kim, See Darl; Park, S.Y.; Park, S.H.; Park, J.H.
2001-03-01
Sensitivity analyses for in-vessel hydrogen generation, using the MELCOR program, are described in this report for the Korean Next Generation Reactor. The typical accident sequences of a station blackout and a large LOCA scenario are selected. A lower head failure model, a Zircaloy oxidation reaction model and a B₄C reaction model are considered as the sensitivity parameters. For the base case, 1273.15 K is used as the failure temperature of the penetrations or the lower head, together with the Urbanic-Heidrich correlation for the Zircaloy oxidation reaction model and the B₄C reaction model. Case 1 used 1650 K as the failure temperature for the penetrations and Case 2 considered creep rupture instead of penetration failure. Case 3 used the MATPRO-EG&G correlation for the Zircaloy oxidation reaction model and Case 4 turned off the B₄C reaction model. The results of the studies are summarized below: (1) When the penetration failure temperature is higher, or the creep rupture failure model is considered, the amount of hydrogen increases for both sequences. (2) When the MATPRO-EG&G correlation for the Zircaloy oxidation reaction is considered, the amount of hydrogen is less than with the Urbanic-Heidrich correlation (base case) for both scenarios. (3) When the B₄C reaction model is turned off, the amount of hydrogen decreases for both sequences.
Sensitivity and uncertainty analyses in aging risk-based prioritizations
International Nuclear Information System (INIS)
Hassan, M.; Uryas'ev, S.; Vesely, W.E.
1993-01-01
Aging risk evaluations of nuclear power plants using Probabilistic Risk Analyses (PRAs) involve assessments of the impact of aging structures, systems, and components (SSCs) on plant core damage frequency (CDF). These assessments can be used to prioritize the contributors to aging risk, reflecting the relative risk potential of the SSCs. Aging prioritizations are important for identifying the SSCs contributing most to plant risk and can provide a systematic basis on which aging risk control and management strategies for a plant can be developed. However, these prioritizations are subject to variabilities arising from uncertainties in data and/or from various modeling assumptions. The objective of this paper is to present an evaluation of the sensitivity of aging prioritizations of active components to uncertainties in aging risk quantifications. Approaches for robust prioritization of SSCs are also presented that are less susceptible to these uncertainties.
An extensible analysable system model
DEFF Research Database (Denmark)
Probst, Christian W.; Hansen, Rene Rydhof
2008-01-01
Analysing real-world systems for vulnerabilities with respect to security and safety threats is a difficult undertaking, not least due to a lack of availability of formalisations for those systems. While both formalisations and analyses can be found for artificial systems such as software…, this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security… are based on (quite successful) ad-hoc techniques. We believe they can be significantly improved beyond the state-of-the-art by pairing them with static analyses techniques. In this paper we present an approach to both formalising those real-world systems, as well as providing an underlying semantics, which…
Sensitivity analyses of the peach bottom turbine trip 2 experiment
International Nuclear Information System (INIS)
Bousbia Salah, A.; D'Auria, F.
2003-01-01
In the light of sustained development in computer technology, the possibilities for code calculations in predicting more realistic transient scenarios in nuclear power plants have been enlarged substantially. It has therefore become feasible to perform 'best-estimate' simulations through the incorporation of three-dimensional modeling of the reactor core into system codes. This method is particularly suited for complex transients that involve strong feedback effects between thermal-hydraulics and kinetics, as well as for transients involving local asymmetric effects. The Peach Bottom turbine trip test is characterized by a prompt core power excursion followed by a self-limiting power behavior. To emphasize and understand the feedback mechanisms involved during this transient, a series of sensitivity analyses were carried out. This should allow the characterization of discrepancies between measured and calculated trends and assess the impact of the thermal-hydraulic and kinetic response of the models used. On the whole, the data comparison revealed a close dependency of the power excursion on the core feedback mechanisms. Thus, for a better best-estimate simulation of the transient, both the thermal-hydraulic and kinetic models should be made more accurate. (author)
Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience
Directory of Open Access Journals (Sweden)
Monica Carvalho
2013-01-01
Full Text Available This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.
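At toy scale, the integer linear program can be illustrated by exhaustively enumerating equipment counts and keeping the cheapest feasible configuration. All technology capacities, costs and demands below are invented, and fuel costs of the cogeneration module and electricity export revenue are deliberately ignored for brevity.

```python
from itertools import product

# Hypothetical technology data (all numbers illustrative):
ENGINE = {"power": 500, "heat": 600, "cost": 40_000}   # cogeneration module, kW / EUR/yr
BOILER = {"heat": 800, "cost": 12_000}                 # gas boiler, kW / EUR/yr
GRID_PRICE = 0.12   # EUR/kWh purchased electricity
GAS_HEAT = 0.04     # EUR/kWh of boiler heat (fuel cost)
HOURS = 8000        # equivalent full-load hours per year
DEMAND = {"power": 900, "heat": 1500}                  # kW, flat profile

best = None
for n_eng, n_boil in product(range(4), range(4)):      # exhaustive "ILP" search
    heat_cap = n_eng * ENGINE["heat"] + n_boil * BOILER["heat"]
    if heat_cap < DEMAND["heat"]:
        continue                                       # infeasible configuration
    bought = max(0, DEMAND["power"] - n_eng * ENGINE["power"])
    boiler_heat = max(0, DEMAND["heat"] - n_eng * ENGINE["heat"])
    cost = (n_eng * ENGINE["cost"] + n_boil * BOILER["cost"]
            + bought * GRID_PRICE * HOURS
            + boiler_heat * GAS_HEAT * HOURS)
    if best is None or cost < best[0]:
        best = (cost, n_eng, n_boil)

print(best)   # (annual cost, engines, boilers) of the optimal configuration
```

A real solver (e.g. a MILP package) replaces the enumeration once the number of technologies and time periods grows; the sensitivity analysis then reruns this optimization over perturbed prices and demands.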
Context Sensitive Modeling of Cancer Drug Sensitivity.
Directory of Open Access Journals (Sweden)
Bo-Juen Chen
Full Text Available Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource towards developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer-type is hampered by small sample size. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should, and should not, be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features.
Sensitivity of surface meteorological analyses to observation networks
Tyndall, Daniel Paul
A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
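The closed-form variational analysis and its adjoint-based observation sensitivity can be sketched on a one-dimensional grid. The covariance shapes, observation placement and values below are assumptions chosen to exhibit the density dependence discussed above: clustered observations individually carry less weight than isolated ones.

```python
import numpy as np

n_grid, L = 50, 8.0
x_grid = np.arange(n_grid, dtype=float)
# Gaussian background-error covariance with decorrelation length L (assumed)
B = np.exp(-0.5 * ((x_grid[:, None] - x_grid[None, :]) / L) ** 2)

obs_loc = np.array([5, 20, 22, 40])          # a dense pair at 20/22, isolated obs elsewhere
H = np.zeros((len(obs_loc), n_grid))
H[np.arange(len(obs_loc)), obs_loc] = 1.0    # observation operator (point sampling)
R = 0.25 * np.eye(len(obs_loc))              # observation-error covariance (assumed)

xb = np.zeros(n_grid)                        # background, in anomaly space
y = np.array([1.0, 0.5, 0.6, -0.4])          # observation innovations (made up)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix of the 2D-Var problem
xa = xb + K @ (y - H @ xb)                    # closed-form variational analysis

# Adjoint sensitivity of the domain-mean analysis to each observation:
# d(mean xa)/dy = (1/n) * 1^T K; isolated obs get more weight than the pair.
sens = K.sum(axis=0) / n_grid
print(np.round(sens, 4))
```

Ranking `sens` (or its percentiles, as in the dissertation's methodology) identifies which observations most influence the analysis, independent of the observed values themselves.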
Sensitivity analyses for simulating pesticide impacts on honey bee colonies
We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop+Pesticide model. Simulations are performed of hive population trajectories with and without pesticide exposure to determine the eff...
Burgess, Stephen; Bowden, Jack; Fall, Tove; Ingelsson, Erik; Thompson, Simon G
2017-01-01
Mendelian randomization investigations are becoming more powerful and simpler to perform, due to the increasing size and coverage of genome-wide association studies and the increasing availability of summarized data on genetic associations with risk factors and disease outcomes. However, when using multiple genetic variants from different gene regions in a Mendelian randomization analysis, it is highly implausible that all the genetic variants satisfy the instrumental variable assumptions. This means that a simple instrumental variable analysis alone should not be relied on to give a causal conclusion. In this article, we discuss a range of sensitivity analyses that will either support or question the validity of causal inference from a Mendelian randomization analysis with multiple genetic variants. We focus on sensitivity analyses of greatest practical relevance for ensuring robust causal inferences, and those that can be undertaken using summarized data. Aside from cases in which the justification of the instrumental variable assumptions is supported by strong biological understanding, a Mendelian randomization analysis in which no assessment of the robustness of the findings to violations of the instrumental variable assumptions has been made should be viewed as speculative and incomplete. In particular, Mendelian randomization investigations with large numbers of genetic variants without such sensitivity analyses should be treated with skepticism.
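A minimal sketch of the inverse-variance weighted (IVW) causal estimate from summarized data, paired with one of the sensitivity analyses discussed above (leave-one-out). All association values are invented, with the last variant made deliberately pleiotropic so that excluding it visibly shifts the estimate.

```python
import numpy as np

# Hypothetical summarized data for five variants:
bx = np.array([0.12, 0.09, 0.15, 0.11, 0.30])       # variant-risk factor associations
by = np.array([0.024, 0.017, 0.031, 0.021, 0.150])  # variant-outcome associations
se = np.array([0.01, 0.012, 0.011, 0.009, 0.013])   # standard errors of by

def ivw(bx, by, se):
    """Inverse-variance weighted estimate: sum(bx*by/se^2) / sum(bx^2/se^2)."""
    w = bx**2 / se**2
    return np.sum(w * (by / bx)) / np.sum(w)

estimate = ivw(bx, by, se)
# Leave-one-out sensitivity analysis: if removing a single variant moves the
# estimate substantially, that variant may violate the instrument assumptions.
loo = np.array([ivw(np.delete(bx, i), np.delete(by, i), np.delete(se, i))
                for i in range(len(bx))])
print(round(estimate, 3), np.round(loo, 3))
```

Here the leave-one-out estimate excluding the last variant differs sharply from the others, flagging it for scrutiny before drawing a causal conclusion.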
Adjoint sensitivity and uncertainty analyses in Monte Carlo forward calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2011-01-01
The adjoint-weighted perturbation (AWP) method, in which the required adjoint flux is estimated in the course of Monte Carlo (MC) forward calculations, has recently been proposed as an alternative to the conventional MC perturbation techniques, such as the correlated sampling and differential operator sampling (DOS) methods. The equivalence of the first-order AWP method and first-order DOS method with the fission source perturbation taken into account is proven. An algorithm for the AWP calculations is implemented in the Seoul National University MC code McCARD and applied to the sensitivity and uncertainty analyses of the Godiva and Bigten criticalities. (author)
Directory of Open Access Journals (Sweden)
F. Joos
2011-01-01
Full Text Available A Dynamic Global Vegetation model coupled to a simplified Earth system model is used to simulate the impact of anthropogenic land cover changes (ALCC) on Holocene atmospheric CO2 and the contemporary carbon cycle. The model results suggest that early agricultural activities cannot explain the mid to late Holocene CO2 rise of 20 ppm measured on ice cores and that proposed upward revisions of Holocene ALCC imply a smaller contemporary terrestrial carbon sink. A set of illustrative scenarios is applied to test the robustness of these conclusions and to address the large discrepancies between published ALCC reconstructions. Simulated changes in atmospheric CO2 due to ALCC are less than 1 ppm before 1000 AD and 30 ppm at 2004 AD when the HYDE 3.1 ALCC reconstruction is prescribed for the past 12 000 years. Cumulative emissions of 69 GtC at 1850 and 233 GtC at 2004 AD are comparable to earlier estimates. CO2 changes due to ALCC exceed the simulated natural interannual variability only after 1000 AD. To consider evidence that land area used per person was higher before than during early industrialisation, agricultural areas from HYDE 3.1 were increased by a factor of two prior to 1700 AD (scenario H2). For the H2 scenario, the contemporary terrestrial carbon sink required to close the atmospheric CO2 budget is reduced by 0.5 GtC yr⁻¹. The simulated CO2 change remains small even in scenarios where average land use per person is increased beyond the range of published estimates. Even extreme assumptions for preindustrial land conversion and high per-capita land use do not result in simulated CO2 emissions that are sufficient to explain the magnitude and the timing of the late Holocene CO2 increase.
Energy Technology Data Exchange (ETDEWEB)
Prindle, R.W.; Hopkins, P.L.
1990-10-01
The Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive-waste repositories. This report describes the results of a study for HYDROCOIN of model sensitivity for isothermal, unsaturated flow through layered, fractured tuffs. We investigated both the types of flow behavior that dominate the performance measures and the conditions and model parameters that control flow behavior. We also examined the effect of different conceptual models and modeling approaches on our understanding of system behavior. The analyses included single- and multiple-parameter variations about base cases in one-dimensional steady and transient flow and in two-dimensional steady flow. The flow behavior is complex even for the highly simplified and constrained system modeled here. The response of the performance measures is both nonlinear and nonmonotonic. System behavior is dominated by abrupt transitions from matrix to fracture flow and by lateral diversion of flow. The observed behaviors are strongly influenced by the imposed boundary conditions and model constraints. Applied flux plays a critical role in determining the flow type but interacts strongly with the composite-conductivity curves of individual hydrologic units and with the stratigraphy. One-dimensional modeling yields conservative estimates of distributions of groundwater travel time only under very limited conditions. This study demonstrates that it is wrong to equate the shortest possible water-travel path with the fastest path from the repository to the water table. 20 refs., 234 figs., 10 tabs.
Graphical models for genetic analyses
DEFF Research Database (Denmark)
Lauritzen, Steffen Lilholt; Sheehan, Nuala A.
2003-01-01
This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas… of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating…
Peer review of HEDR uncertainty and sensitivity analyses plan
Energy Technology Data Exchange (ETDEWEB)
Hoffman, F.O.
1993-06-01
This report consists of a detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington, to evaluate your draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters through the use of alternative time histories and spatial patterns deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
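The univariate sensitivity idea can be sketched on a deliberately tiny MDP solved by value iteration: sweep one uncertain parameter and watch whether the recommended policy changes. All states, rewards and transition probabilities below are invented, not clinical inputs, and the brute-force sweep is feasible only because the example is so small.

```python
import itertools

def value_iteration(p_progress, gamma=0.97, tol=1e-9):
    """Solve a toy 2-state, 2-action medical MDP by value iteration.
    States: 0 = mild, 1 = severe.  Actions: 0 = wait, 1 = treat.
    p_progress is P(mild -> severe | wait); treating halves it at a utility cost.
    All rewards and transitions are illustrative, not clinical data."""
    reward = {(0, 0): 1.0, (0, 1): 0.9, (1, 0): 0.2, (1, 1): 0.4}
    p_sev = {(0, 0): p_progress, (0, 1): p_progress / 2,
             (1, 0): 0.95, (1, 1): 0.6}          # P(next state is severe)
    v = [0.0, 0.0]
    while True:
        q = {(s, a): reward[s, a] + gamma * (p_sev[s, a] * v[1]
                                             + (1 - p_sev[s, a]) * v[0])
             for s, a in itertools.product((0, 1), repeat=2)}
        v_new = [max(q[s, 0], q[s, 1]) for s in (0, 1)]
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            v = v_new
            break
        v = v_new
    policy = [int(q[s, 1] > q[s, 0]) for s in (0, 1)]
    return v, policy

# Univariate sensitivity analysis: sweep the uncertain progression probability
# and record where the optimal policy flips (treat becomes best in the mild state).
for p in (0.05, 0.15, 0.4):
    print(p, value_iteration(p)[1])
```

The probabilistic univariate method described in the abstract replaces this deterministic sweep with sampling from the parameter's distribution and reporting how often each policy is optimal.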
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
International Nuclear Information System (INIS)
Becker, D.L.
1994-11-01
Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and results obtained is provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.
VIPRE modeling of VVER-1000 reactor core for DNB analyses
Energy Technology Data Exchange (ETDEWEB)
Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)
1995-09-01
Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.
On accuracy problems for semi-analytical sensitivity analyses
DEFF Research Database (Denmark)
Pedersen, P.; Cheng, G.; Rasmussen, John
1989-01-01
The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried out...
Uncertainty and sensitivity analyses of the complete program system UFOMOD and of selected submodels
International Nuclear Information System (INIS)
Fischer, F.; Ehrhardt, J.; Hasemann, I.
1990-09-01
Uncertainty and sensitivity studies with the program system UFOMOD have been performed for several years on a submodel basis to get a deeper insight into the propagation of parameter uncertainties through the different modules and to quantify their contribution to the confidence bands of the intermediate and final results of an accident consequence assessment. In a series of investigations with the atmospheric dispersion module, the models describing early protective actions, the models calculating short-term organ doses and the health effects model of the near range subsystem NE of UFOMOD, a great deal of experience has been gained with methods and evaluation techniques for uncertainty and sensitivity analyses. Especially the influence on results of different sampling techniques and sample sizes, parameter distributions and correlations could be quantified, and the usefulness of sensitivity measures for the interpretation of results could be demonstrated. In each submodel investigation, the (5%, 95%) confidence bounds of the complementary cumulative frequency distributions (CCFDs) of various consequence types (activity concentrations of I-131 and Cs-137, individual acute organ doses, individual risks of nonstochastic health effects, and the number of early deaths) were calculated. The corresponding sensitivity analyses for each of these endpoints led to a list of parameters contributing significantly to the variation of mean values and 99%-fractiles. The most important parameters were extracted and combined for the final overall analysis. (orig.)
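The (5%, 95%) confidence bounds on a CCFD can be illustrated with a generic Monte Carlo sketch: compute one CCFD per sampled parameter vector, then take pointwise percentiles across the ensemble. The lognormal consequence model and all sample sizes below are hypothetical stand-ins, not UFOMOD quantities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each of n_param parameter vectors yields n_cons consequence
# samples (e.g. activity concentrations), hence one CCFD per parameter vector.
n_param, n_cons = 200, 1000
consequences = rng.lognormal(mean=rng.normal(0, 0.3, n_param)[:, None],
                             sigma=1.0, size=(n_param, n_cons))

grid = np.logspace(-2, 2, 50)  # consequence magnitudes C
# CCFD(C) = P(consequence > C), estimated separately for each parameter vector
ccfd = (consequences[:, :, None] > grid).mean(axis=1)  # shape (n_param, len(grid))

# Pointwise (5%, 50%, 95%) bands across the parameter-uncertainty ensemble
lower, median, upper = np.percentile(ccfd, [5, 50, 95], axis=0)
```

Plotting `lower` and `upper` against `grid` on log-log axes gives the familiar confidence band around the median CCFD.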
Sensitivity Analyses for Cross-Coupled Parameters in Automotive Powertrain Optimization
Directory of Open Access Journals (Sweden)
Pongpun Othaganont
2014-06-01
When vehicle manufacturers are developing new hybrid and electric vehicles, modeling and simulation are frequently used to predict the performance of the new vehicles from an early stage in the product lifecycle. Typically, models are used to predict the range, performance and energy consumption of their future planned production vehicle; they also allow the designer to optimize a vehicle’s configuration. Another use for the models is in performing sensitivity analysis, which helps us understand which parameters have the most influence on model predictions and real-world behaviors. There are various techniques for sensitivity analysis, some are numerical, but the greatest insights are obtained analytically with sensitivity defined in terms of partial derivatives. Existing methods in the literature give us a useful, quantified measure of parameter sensitivity, a first-order effect, but they do not consider second-order effects. Second-order effects could give us additional insights: for example, a first order analysis might tell us that a limiting factor is the efficiency of the vehicle’s prime-mover; our new second order analysis will tell us how quickly the efficiency of the powertrain will become of greater significance. In this paper, we develop a method based on formal optimization mathematics for rapid second-order sensitivity analyses and illustrate these through a case study on a C-segment electric vehicle.
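A minimal numerical sketch of the first- and second-order idea: central finite differences on a toy range model. The model form and its parameter values are invented for illustration; the paper's own analysis is analytical and vehicle-specific.

```python
import numpy as np

def range_model(params):
    """Toy EV range model: battery energy E [kWh], powertrain efficiency eta,
    mass m [kg]; consumption grows linearly with mass (illustrative only)."""
    E, eta, m = params
    return 1000.0 * E * eta / (0.05 * m)  # km

def first_order(f, x, h=1e-5):
    """Gradient via central differences (first-order sensitivities)."""
    x = np.asarray(x, float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h * max(1.0, abs(x[i]))
        g[i] = (f(x + e) - f(x - e)) / (2 * e[i])
    return g

def second_order(f, x, i, j, h=1e-4):
    """Central-difference estimate of the cross derivative d2f/(dxi dxj)."""
    x = np.asarray(x, float)
    hi, hj = h * max(1.0, abs(x[i])), h * max(1.0, abs(x[j]))
    ei = np.zeros_like(x); ei[i] = hi
    ej = np.zeros_like(x); ej[j] = hj
    return (f(x + ei + ej) - f(x + ei - ej)
            - f(x - ei + ej) + f(x - ei - ej)) / (4 * hi * hj)

x0 = np.array([60.0, 0.9, 1500.0])           # assumed base-case parameters
grad = first_order(range_model, x0)          # first-order sensitivities
cross = second_order(range_model, x0, 1, 2)  # how mass shifts the efficiency sensitivity
```

The negative sign of `cross` says that as mass grows, the payoff from improving efficiency shrinks: exactly the kind of "how quickly does this become significant" question a second-order analysis answers.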
Sensitivity Assessment of Ozone Models
Energy Technology Data Exchange (ETDEWEB)
Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.
2000-01-24
The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
Modelling and analysing oriented fibrous structures
International Nuclear Information System (INIS)
Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J
2014-01-01
A mathematical model for fibrous structures using a direction dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre-fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.
Satoshi Hirabayashi; Chuck Kroll; David Nowak
2011-01-01
The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo
2017-12-01
Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data representatives of the input/output relationship of the underlying model of interest. However, different-performing ANNs might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the best-performing ANN based on the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identical trained ANNs, coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy.
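The combination step can be sketched as likelihood-weighted averaging over an ensemble of identically configured predictors. Everything below is an assumed toy setup (synthetic validation data, per-member noise levels standing in for differently initialised networks, a Gaussian profile likelihood); it is not the paper's exact Bayesian formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: K identically configured networks trained from different random
# initialisations, each giving predictions on the same n_val validation points.
K, n_val = 5, 40
y_true = np.sin(np.linspace(0, 3, n_val))
sigmas = [0.05, 0.08, 0.20, 0.06, 0.10]          # stand-in per-member error levels
preds = np.stack([y_true + rng.normal(0, s, n_val) for s in sigmas])  # (K, n_val)

# Bayes-style weights from the validation log-likelihood; all members share the
# same architecture, so only the goodness-of-fit term differs between them.
sse = ((preds - y_true) ** 2).sum(axis=1)
loglik = -0.5 * n_val * np.log(sse / n_val)       # Gaussian, sigma^2 profiled out
w = np.exp(loglik - loglik.max())
w /= w.sum()

y_avg = w @ preds                                  # robust averaged prediction
# Weighted ensemble spread gives an approximate confidence band on the average
band = 1.96 * np.sqrt(w @ (preds - y_avg) ** 2)
```

The weighting automatically down-ranks poorly fitting members, which is what makes the averaged prediction more robust than picking the single highest-R² network.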
Modelling and Analysing Socio-Technical Systems
DEFF Research Database (Denmark)
Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming
2015-01-01
Modern organisations are complex, socio-technical systems consisting of a mixture of physical infrastructure, human actors, policies and processes. An increasing number of attacks on these organisations exploits vulnerabilities on all different levels, for example combining a malware attack...... with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling...... and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...
Sensitivity Study of Stochastic Walking Load Models
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2010-01-01
is to employ a stochastic load model accounting for mean values and standard deviations for the walking load parameters, and to use this as a basis for estimation of structural response. This, however, requires decisions to be made in terms of statistical distributions and their parameters, and the paper...... investigates whether statistical distributions of bridge response are sensitive to some of the decisions made by the engineer doing the analyses. For the paper a selected part of potential influences is examined and footbridge responses are extracted using Monte-Carlo simulations and focus is on estimating...
Jawitz, James W.; Munoz-Carpena, Rafael; Muller, Stuart; Grace, Kevin A.; James, Andrew I.
2008-01-01
in the phosphorus cycling mechanisms were simulated in these case studies using different combinations of phosphorus reaction equations. Changes in water column phosphorus concentrations observed under the controlled conditions of laboratory incubations, and mesocosm studies were reproduced with model simulations. Short-term phosphorus flux rates and changes in phosphorus storages were within the range of values reported in the literature, whereas unknown rate constants were used to calibrate the model output. In STA-1W Cell 4, the dominant mechanism for phosphorus flow and transport is overland flow. Over many life cycles of the biological components, however, soils accrue and become enriched in phosphorus. Inflow total phosphorus concentrations and flow rates for the period between 1995 and 2000 were used to simulate Cell 4 phosphorus removal, outflow concentrations, and soil phosphorus enrichment over time. This full-scale application of the model successfully incorporated parameter values derived from the literature and short-term experiments, and reproduced the observed long-term outflow phosphorus concentrations and increased soil phosphorus storage within the system. A global sensitivity and uncertainty analysis of the model was performed using modern techniques such as a qualitative screening tool (Morris method) and the quantitative, variance-based, Fourier Amplitude Sensitivity Test (FAST) method. These techniques allowed an in-depth exploration of the effect of model complexity and flow velocity on model outputs. Three increasingly complex levels of possible application to southern Florida were studied corresponding to a simple soil pore-water and surface-water system (level 1), the addition of plankton (level 2), and of macrophytes (level 3). In the analysis for each complexity level, three surface-water velocities were considered that each correspond to residence times for the selected area (1-kilometer long) of 2, 10, and 20
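The Morris screening step mentioned above can be sketched as follows. The trajectory construction is a simplified variant (random one-at-a-time moves on a uniform grid in the unit hypercube), and the test function is an invented stand-in for the wetland phosphorus model.

```python
import numpy as np

def morris_screening(f, n_params, r=20, levels=4, seed=0):
    """Morris method sketch: r random one-at-a-time trajectories on a p-level
    grid in [0,1]^n; returns mu* (mean |elementary effect|) and sigma per input."""
    rng = np.random.default_rng(seed)
    delta = levels / (2 * (levels - 1))            # standard Morris step size
    ee = np.zeros((r, n_params))
    for t in range(r):
        x = rng.integers(0, levels - 1, n_params) / (levels - 1)  # grid start
        y = f(x)
        for i in rng.permutation(n_params):        # perturb each input once
            step = delta if x[i] + delta <= 1.0 else -delta
            x_new = x.copy()
            x_new[i] += step
            y_new = f(x_new)
            ee[t, i] = (y_new - y) / step          # elementary effect of input i
            x, y = x_new, y_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy response: strong linear effect of x0, mild nonlinearity in x1, weak x2.
g = lambda x: 5.0 * x[0] + 2.0 * x[1] * x[1] + 0.1 * x[2]
mu_star, sigma = morris_screening(g, 3)
```

High `mu_star` flags influential inputs; high `sigma` relative to `mu_star` flags nonlinearity or interactions, which is the usual Morris screening diagnostic before an expensive variance-based analysis such as FAST.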
Variance-based sensitivity indices for models with dependent inputs
International Nuclear Information System (INIS)
Mara, Thierry A.; Tarantola, Stefano
2012-01-01
Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices in model simplification settings. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
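For the independent-input baseline the abstract refers to, a first-order variance-based index can be estimated with a standard pick-freeze (Saltelli-style) scheme. The additive test model below is hypothetical; the paper's actual contribution (handling dependent inputs via orthogonalisation) is not reproduced here.

```python
import numpy as np

def sobol_first_order(f, d, n=50000, seed=0):
    """Pick-freeze estimator of first-order indices S_i = V(E[Y|X_i]) / V(Y),
    assuming independent inputs uniform on [0,1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]            # "freeze" column i from the A sample
        S[i] = np.mean(yA * (f(ABi) - yB)) / var
    return S

# Additive toy model with a known importance ranking (coefficients invented).
def model(X):
    return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]

S = sobol_first_order(model, 3)
```

For this additive model the analytical indices are proportional to the squared coefficients (about 0.79, 0.20, 0.01), and since there are no interactions the three indices sum to one, which makes a convenient sanity check on the estimator.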
Structural sensitivity of biological models revisited.
Cordoleani, Flora; Nerini, David; Gauduchon, Mathias; Morozov, Andrew; Poggiale, Jean-Christophe
2011-08-21
Enhancing the predictive power of models in biology is a challenging issue. Among the major difficulties impeding model development and implementation are the sensitivity of outcomes to variations in model parameters, the problem of choosing particular expressions for the parametrization of functional relations, and difficulties in validating models using laboratory data and/or field observations. In this paper, we revisit the phenomenon which is referred to as structural sensitivity of a model. Structural sensitivity arises as a result of the interplay between sensitivity of model outcomes to variations in parameters and sensitivity to the choice of model functions, and this can be somewhat of a bottleneck in improving the models' predictive power. We provide a rigorous definition of structural sensitivity and we show how we can quantify the degree of sensitivity of a model based on the Hausdorff distance concept. We propose a simple semi-analytical test of structural sensitivity in an ODE modeling framework. Furthermore, we emphasize the importance of directly linking the variability of field/experimental data and model predictions, and we demonstrate a way of assessing the robustness of modeling predictions with respect to data sampling variability. As an insightful illustrative example, we test our sensitivity analysis methods on a chemostat predator-prey model, where we use laboratory data on the feeding of protozoa to parameterize the predator functional response.
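A minimal sketch of the Hausdorff-distance idea: treat two model trajectories as point sets and take the symmetric Hausdorff distance as a scalar measure of structural discrepancy between rival parametrisations. The trajectories below are invented closed curves, not output of the chemostat model.

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two planar point sets of shape
    (n, 2) and (m, 2): the largest nearest-neighbour gap in either direction."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # all pair dists
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two hypothetical limit-cycle trajectories from rival functional-response
# choices (purely illustrative shapes, sampled at the same phase points).
t = np.linspace(0, 10, 200)
traj_a = np.column_stack([np.cos(t), np.sin(t)])
traj_b = np.column_stack([1.1 * np.cos(t), 0.9 * np.sin(t)])

d = hausdorff(traj_a, traj_b)  # scalar measure of structural discrepancy
```

A model would then be called structurally sensitive when small, data-consistent changes in the functional form produce a Hausdorff distance between attractors that is large relative to observational variability.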
Demonstrating the efficiency of the EFPC criterion by means of Sensitivity analyses
Energy Technology Data Exchange (ETDEWEB)
Munier, Raymond
2007-04-15
Within the framework of a project to characterise large fractures, a modelling effort was initiated to evaluate the use of a pair of full perimeter criteria, FPC and EFPC, for detecting fractures that could jeopardize the integrity of the canisters in the case of a large nearby earthquake. Though some sensitivity studies were performed in the method study, these mainly targeted aspects of the Monte-Carlo simulations; the impact of uncertainties in the DFN model upon the efficiency of the FPI criteria was left unattended. The main purpose of this report is, therefore, to explore the impact of DFN variability upon the efficiency of the FPI criteria. The outcome of the present report may thus be regarded as complementary to the analyses presented in SKB-R-06-54. To appreciate the details of the present report, the reader should be acquainted with the simulation procedure described in the earlier report. The most important conclusion of this study is that the efficiency of the EFPC is high for all tested model variants. That is, compared to blind deposition, the EFPC is a very powerful tool to identify unsuitable deposition holes, and it is essentially insensitive to variations in the DFN model. If information from adjacent tunnels is used in addition to EFPC, then the probability of detecting a critical deposition hole is almost 100%.
Sensitivity Study of Poisson's Ratio Used in Soil Structure Interaction (SSI) Analyses
Energy Technology Data Exchange (ETDEWEB)
Han, Seung-ju [KHNP CRI, Daejeon (Korea, Republic of); You, Dong-Hyun [KEPCO Engineering and Construction, Gimcheon (Korea, Republic of); Jang, Jung-bum; Yun, Kwan-hee [KEPCO Research Institute, Daejeon (Korea, Republic of)
2016-10-15
The preliminary review for Design Certification (DC) of APR1400 was accepted by NRC on March 4, 2015. After the acceptance of the application for standard DC of APR1400, KHNP has responded to the Requests for Additional Information (RAIs) raised by NRC to undertake a full design certification review. Design certification is achieved through the NRC's rulemaking process, and is founded on the staff's review of the application, which addresses the various safety issues associated with the proposed nuclear power plant design, independent of a specific site. One USNRC RAI pertaining to Design Control Document (DCD) Ch. 3.7, 'Seismic Design', notes that DCD Tables 3.7A-1 and 3.7A-2 show Poisson's ratios in the S1 and S2 soil profiles used for SSI analysis as great as 0.47 and 0.48, respectively. Based on staff experience, use of Poisson's ratios approaching these values may result in numerical instability of the SSI analysis results. A sensitivity study is performed using the ACS SASSI NI model of APR1400 with the S1 and S2 soil profiles to demonstrate that the Poisson's ratio values used in the SSI analyses of the S1 and S2 soil profile cases do not produce numerical instabilities in the SSI analysis results. No abrupt changes or spurious peaks, which would tend to indicate the existence of numerical sensitivities in the SASSI solutions, appear in the computed transfer functions of the original SSI analyses that have the maximum dynamic Poisson's ratio values of 0.47 and 0.48, nor in the re-computed transfer functions that have the maximum dynamic Poisson's ratio values limited to 0.42 and 0.45.
Sensitivity analyses of seismic behavior of spent fuel dry cask storage systems
International Nuclear Information System (INIS)
Luk, V.K.; Spencer, B.W.; Shaukat, S.K.; Lam, I.P.; Dameron, R.A.
2003-01-01
Sandia National Laboratories is conducting a research project to develop a comprehensive methodology for evaluating the seismic behavior of spent fuel dry cask storage systems (DCSS) for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission (NRC). A typical Independent Spent Fuel Storage Installation (ISFSI) consists of arrays of free-standing storage casks resting on concrete pads. In the safety review process of these cask systems, their seismically induced horizontal displacements and angular rotations must be quantified to determine whether casks will overturn or neighboring casks will collide during a seismic event. The ABAQUS/Explicit code is used to analyze three-dimensional coupled finite element models consisting of three submodels, which are a cylindrical cask or a rectangular module, a flexible concrete pad, and an underlying soil foundation. The coupled model includes two sets of contact surfaces between the submodels with prescribed coefficients of friction. The seismic event is described by one vertical and two horizontal components of statistically independent seismic acceleration time histories. A deconvolution procedure is used to adjust the amplitudes and frequency contents of these three-component reference surface motions before applying them simultaneously at the soil foundation base. The research project focused on examining the dynamic and nonlinear seismic behavior of the coupled model of free-standing DCSS including soil-structure interaction effects. This paper presents a subset of analysis results for a series of parametric analyses. Input variables in the parametric analyses include: designs of the cask/module, time histories of the seismic accelerations, coefficients of friction at the cask/pad interface, and material properties of the soil foundation. In subsequent research, the analysis results will be compiled and presented in nomograms to highlight the sensitivity of seismic response of DCSS to
International Nuclear Information System (INIS)
Kaplan, P.G.
1993-01-01
Yucca Mountain, Nevada is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to estimate critical factors in the performance of the site with respect to a criterion in terms of pre-waste-emplacement ground-water travel time. The degree of failure in the analytical model to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain
Externalizing Behaviour for Analysing System Models
DEFF Research Database (Denmark)
Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof
2013-01-01
System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside......, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...
Battlescale Forecast Model Sensitivity Study
National Research Council Canada - National Science Library
Sauter, Barbara
2003-01-01
.... Changes to the surface observations used in the Battlescale Forecast Model initialization led to no significant changes in the resulting forecast values of temperature, relative humidity, wind speed, or wind direction...
Model Driven Development of Data Sensitive Systems
DEFF Research Database (Denmark)
Olsen, Petur
2014-01-01
to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...... efficient model-checking and model-based testing. In the second we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of data-sensitive systems to allow learning of larger systems. In the third we develop an approach for modeling and model-based...... detection and pushing error detection to earlier stages of development. The complexity of modeling and the size of systems which can be analyzed is severely limited when introducing data variables. The state space grows exponentially in the number of variables and the domain size of the variables
An approach to measure parameter sensitivity in watershed hydrological modelling
Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detail sensitivity analyses. To compare the...
Modelling and Analyses of Embedded Systems Design
DEFF Research Database (Denmark)
Brekling, Aske Wiid
We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems....... semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming - but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes...
Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...
Radiobiological analyses based on cell cluster models
International Nuclear Information System (INIS)
Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng
2010-01-01
The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of tumors lacking activity in the core were evaluated and analyzed by associating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, the EUD will increase by 47%. With activity lacking in the tumor center and the requirement of TCP = 0.90, the α cross-fire of 211At could compensate for at most a (48 μm)³ activity-lack for the Nucleus source, but (72 μm)³ for the Cytoplasm, Cell Surface, Cell and Voxel sources. In the clinic, the physician could prefer the suggested dose of the Cell Surface source to guard against failure of local tumor control from under-dosing. Generally, TCP can well exhibit the effect difference between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for therapy plan choice. EUD can well exhibit the difference between different models and activity distributions, which makes it more suitable for research work. When using EUD to study the influence of an inhomogeneous activity distribution, one should keep the configuration and volume of the former and latter models consistent. (authors)
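The quantities involved can be sketched with textbook formulas: linear-quadratic cell survival, a Poisson tumour control probability, and the generalised EUD. All parameter values below (alpha, beta, the EUD exponent, clonogen numbers, doses) are generic illustrative choices, not the paper's 211At-specific data.

```python
import numpy as np

def surviving_fraction(dose, alpha=0.35, beta=0.035):
    """Linear-quadratic cell survival; alpha [1/Gy] and beta [1/Gy^2] are
    generic textbook values, not fitted to any specific cell line."""
    return np.exp(-alpha * dose - beta * dose ** 2)

def tcp(dose_per_voxel, n_clonogens_per_voxel):
    """Poisson TCP: probability that no clonogen survives the voxel doses."""
    expected_survivors = np.sum(n_clonogens_per_voxel
                                * surviving_fraction(dose_per_voxel))
    return np.exp(-expected_survivors)

def eud(doses, a=-10.0):
    """Generalised equivalent uniform dose (power-law mean); a < 0 is the
    usual tumour choice, so cold spots dominate the result."""
    return np.mean(doses ** a) ** (1.0 / a)

uniform = np.full(100, 60.0)          # 100 voxels, 60 Gy everywhere
cold_core = uniform.copy()
cold_core[:10] = 40.0                 # activity-lack in the core -> under-dose
```

With `a = -10`, `eud(cold_core)` drops well below 60 Gy even though 90% of the volume is fully dosed, which illustrates why EUD discriminates between activity distributions more sharply than TCP does near saturation.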
msgbsR: An R package for analysing methylation-sensitive restriction enzyme sequencing data.
Mayne, Benjamin T; Leemaqz, Shalem Y; Buckberry, Sam; Rodriguez Lopez, Carlos M; Roberts, Claire T; Bianco-Miotto, Tina; Breen, James
2018-02-01
Genotyping-by-sequencing (GBS) or restriction-site associated DNA marker sequencing (RAD-seq) is a practical and cost-effective method for analysing large genomes from high diversity species. This method of sequencing, coupled with methylation-sensitive enzymes (often referred to as methylation-sensitive restriction enzyme sequencing or MRE-seq), is an effective tool to study DNA methylation in parts of the genome that are inaccessible in other sequencing techniques or are not annotated in microarray technologies. Current software tools do not support all methylation-sensitive restriction sequencing assays for determining differences in DNA methylation between samples. To fill this computational need, we present msgbsR, an R package that contains tools for the analysis of methylation-sensitive restriction enzyme sequencing experiments. msgbsR can be used to identify and quantify read counts at methylated sites directly from alignment files (BAM files) and enables verification of restriction enzyme cut sites with the correct recognition sequence of the individual enzyme. In addition, msgbsR assesses DNA methylation based on read coverage, similar to RNA sequencing experiments, rather than methylation proportion and is a useful tool in analysing differential methylation in large populations. The package is fully documented and available freely online as a Bioconductor package ( https://bioconductor.org/packages/release/bioc/html/msgbsR.html ).
Sensitivities and uncertainties of modeled ground temperatures in mountain environments
Directory of Open Access Journals (Sweden)
S. Gubler
2013-08-01
Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently from ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on different input factors such as topography or soil type. The analysis shows that model evaluation performed at single locations may not be representative for the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity to the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time have an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several
Sensitivity-Based Guided Model Calibration
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to the global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve the optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of the optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by increasing the focus of the optimization on sampling from the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with the sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as original DDS, but in significantly fewer solution evaluations.
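The selection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: standard DDS includes each decision variable with probability p = 1 - ln(i)/ln(max_iter), and the sensitivity weighting shown here is one plausible way to bias selection toward sensitive variables.

```python
import math
import random

def dds_select(n_params, iteration, max_iter, sens, rng):
    """Pick which decision variables to perturb in one DDS iteration.

    Standard DDS includes each variable with probability
    p = 1 - ln(iteration)/ln(max_iter); this sketch additionally scales
    the inclusion probability by each variable's normalized sensitivity
    (an assumed weighting, for illustration only)."""
    p = 1.0 - math.log(iteration) / math.log(max_iter)
    total = sum(sens)
    selected = [j for j in range(n_params)
                if rng.random() < p * n_params * sens[j] / total]
    if not selected:  # DDS always perturbs at least one variable
        selected = [rng.randrange(n_params)]
    return selected
```

With a sensitivity vector concentrated on one parameter, that parameter is selected far more often than the others, which is the intended behaviour.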
The sensitivity of evapotranspiration models to errors in model ...
African Journals Online (AJOL)
Dr Obe
Five evapotranspiration (Et) models (the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models) were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was ...
Criticality safety and sensitivity analyses of PWR spent nuclear fuel repository facilities
Maucec, M; Glumac, B
Monte Carlo criticality safety and sensitivity calculations of pressurized water reactor (PWR) spent nuclear fuel repository facilities for the Slovenian nuclear power plant Krsko are presented. The MCNP4C code was deployed to model and assess the neutron multiplication parameters of pool-based
Sensitivity analyses of biodiesel thermo-physical properties under diesel engine conditions
DEFF Research Database (Denmark)
Cheng, Xinwei; Ng, Hoon Kiat; Gan, Suyin
2016-01-01
This reported work investigates the sensitivities of spray and soot developments to the change of thermo-physical properties for coconut and soybean methyl esters, using two-dimensional computational fluid dynamics fuel spray modelling. The choice of test fuels made was due to their contrasting s...
LBLOCA sensitivity analysis using meta models
International Nuclear Information System (INIS)
Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.
2014-01-01
This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU (best estimate plus uncertainty) approach. The sensitivity analysis is based on the computation of Sobol' indices using a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
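The meta-model idea can be sketched as follows: fit a cheap surrogate to a handful of runs of an expensive simulator, then estimate first-order Sobol' indices by brute-force Monte Carlo on the surrogate. The "expensive model" below is a toy stand-in, and the polynomial surrogate is an assumption for the sketch (the paper does not specify this form).

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # toy stand-in for a costly thermal-hydraulic code (assumption)
    return x[:, 0] + x[:, 1] ** 2

# 1) fit a cheap polynomial meta-model on a small design
X = rng.uniform(0, 1, size=(50, 2))
y = expensive_model(X)
features = lambda x: np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                      x[:, 0] ** 2, x[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda x: features(x) @ coef

# 2) first-order Sobol' index S_i = Var(E[Y|X_i]) / Var(Y),
#    estimated by brute force on the inexpensive surrogate
def first_order_sobol(i, m=200, n=200):
    xi = rng.uniform(0, 1, m)
    cond_means = np.empty(m)
    for k, v in enumerate(xi):
        x = rng.uniform(0, 1, size=(n, 2))
        x[:, i] = v  # freeze input i, average over the others
        cond_means[k] = surrogate(x).mean()
    x_all = rng.uniform(0, 1, size=(m * n, 2))
    return cond_means.var() / surrogate(x_all).var()
```

For Y = X1 + X2² with independent uniform inputs, the analytic indices are roughly 0.48 and 0.52, and the estimates land near those values.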
Sensitivity of weather based irrigation scheduling model
International Nuclear Information System (INIS)
Laghari, K.Q.; Lashari, B.K.; Laghari, N.U.Z.
2009-01-01
This study describes a sensitivity analysis of the irrigation scheduling model Mehran, carried out by changing input weather parameters (temperature, wind velocity, rainfall, and sunshine hours) to see how the model's outputs for transpiration (T), evaporation (E), and allocation of irrigation (I) water respond. Sensitivity analysis depends on the site and environmental conditions and is therefore an essential step in model validation and application. The Mehran model is a weather-based crop growth simulation model, which uses daily input data of maximum and minimum temperatures, dew point temperature (humidity), wind speed, and daily sunshine hours (radiation), computes crop transpiration (Tc) and soil evaporation (Es), and allocates irrigation accordingly. The input and output base values are taken as an average of three years of actual field data used during the Mehran model's testing and calibration on wheat and cotton crops. The model sensitivity to a specific input parameter was obtained by varying its value while keeping the other input parameters at their base values. The input base values were varied by ±10 and ±25%. The model was run for each modified input parameter, and the output was compared statistically with the base outputs. The mean percent error (ME%) was used to quantify variations in output values. The results reveal that the model is most sensitive to variations in temperature: 10 and 25% increases in temperature increased the cotton crop's Tc by 12.18 and 28.54%, Es by 22.32 and 37.88%, and irrigation water allocation by 18.41 and 47.83%, respectively, from the average base values. (author)
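The one-at-a-time procedure used above (perturb one input by ±10% and ±25% while holding the others at base values, then compare outputs) can be sketched generically. The toy model here is a hypothetical stand-in, not the Mehran model.

```python
def oat_sensitivity(model, base, deltas=(-0.25, -0.10, 0.10, 0.25)):
    """One-at-a-time sensitivity: vary each input by ±10% and ±25%
    while holding the others at base values; report the percent change
    of the output relative to the base output."""
    y0 = model(base)
    results = {}
    for name in base:
        for d in deltas:
            x = dict(base)
            x[name] = base[name] * (1 + d)
            results[(name, d)] = 100.0 * (model(x) - y0) / y0
    return results

# hypothetical stand-in for a weather-driven transpiration output
toy_model = lambda x: x["temp"] ** 1.5 * x["wind"] ** 0.3
res = oat_sensitivity(toy_model, {"temp": 25.0, "wind": 2.0})
```

In this toy case the output responds more strongly to temperature than to wind, mirroring the kind of ranking the paper reports.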
SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review
International Nuclear Information System (INIS)
Achilli, A.; Congiu, C.; Ferri, R.; Bianchi, F.; Meloni, P.; Grgic, D.; Dzodzo, M.
2012-01-01
An Italian MSE R and D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET, for IRIS reactor simulation. IRIS is a modular, medium size, advanced, integral PWR, developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS, with 1:100 volume scale, full elevation and prototypical thermal-hydraulic conditions. The RELAP5 code was extensively used in support of the design of the facility to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the RELAP5 and GOTHIC coupled codes. The comparison between IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trend of the main parameters of the plant, such as the containment pressure and the power removed by the EHRS, to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.
SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review
Directory of Open Access Journals (Sweden)
Andrea Achilli
2012-01-01
Full Text Available An Italian MSE R&D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET, for IRIS reactor simulation. IRIS is a modular, medium size, advanced, integral PWR, developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS, with 1:100 volume scale, full elevation and prototypical thermal-hydraulic conditions. The RELAP5 code was extensively used in support of the design of the facility to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the RELAP5 and GOTHIC coupled codes. The comparison between IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trend of the main parameters of the plant, such as the containment pressure and the power removed by the EHRS, to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a single first order ordinary differential equation. In this analysis, we study the sensitivity due to a variation of the initial condition ...
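For a single first-order ODE of the kind described, sensitivity to the initial condition can be probed numerically by a central finite difference on the solver. The linear decay model dy/dt = -k·y below is an assumed example (the abstract does not specify the equation); for it, the sensitivity ∂y(t)/∂y0 equals e^(-kt) regardless of y0.

```python
import math

def solve(y0, k, t, n=10000):
    """Explicit Euler for the single first-order ODE dy/dt = -k*y
    (an assumed example of the class of models described)."""
    y, dt = y0, t / n
    for _ in range(n):
        y += -k * y * dt
    return y

def ic_sensitivity(y0, k, t, eps=1e-6):
    """Central finite-difference sensitivity of y(t) to the
    initial condition y0."""
    return (solve(y0 + eps, k, t) - solve(y0 - eps, k, t)) / (2 * eps)
```

For a linear ODE the computed sensitivity matches e^(-kt) and is independent of y0; a nonlinear right-hand side would make it state-dependent, which is exactly what such an analysis is designed to reveal.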
Lugo-Palacios, DG; Cairns, J
2015-01-01
Ambulatory care sensitive hospitalisations (ACSH) have been widely used to study the quality and effectiveness of primary care. Using data from 248 general hospitals in Mexico during 2001-2011 we identify 926,769 ACSHs in 188 health jurisdictions before and during the health insurance expansion that took place in this period, and estimate a fixed effects model to explain the association of the jurisdiction ACSH rate with patient and community factors. The national ACSH rate increased by 50%, but ...
Numerical modeling of shock-sensitivity experiments
Energy Technology Data Exchange (ETDEWEB)
Bowman, A.L.; Forest, C.A.; Kershner, J.D.; Mader, C.L.; Pimbley, G.H.
1981-01-01
The Forest Fire rate model of shock initiation of heterogeneous explosives has been used to study several experiments commonly performed to measure the sensitivity of explosives to shock and to study initiation by explosive-formed jets. The minimum priming charge test, the gap test, the shotgun test, sympathetic detonation, and jet initiation have been modeled numerically using the Forest Fire rate in the reactive hydrodynamic codes SIN and 2DE.
An approach to implementing sensitivity and uncertainty analysis methods in a safety calculation
International Nuclear Information System (INIS)
Pepin, G.; Sallaberry, C.
2003-01-01
Simulation of migration in deep geological formations leads to solving convection-diffusion equations in porous media, associated with the computation of hydrogeologic flow. Different time scales (simulation over 1 million years), spatial scales, and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through the use of uncertainty and sensitivity analysis. ANDRA (the French national agency for the management of radioactive wastes) carries out studies on the treatment of input data uncertainties and their propagation through the safety models, in order to quantify the influence of input data uncertainties on the various safety indicators selected. ANDRA's approach initially consists of two studies undertaken in parallel: the first is an international review of the choices made by ANDRA's foreign counterparts in carrying out their uncertainty and sensitivity analyses; the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case which gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA.
Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models
Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano
2017-07-01
The paper analyses the seismic vulnerability of two different masonry buildings. It provides both advanced 3D modelling with solid elements and equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is performed to evaluate the effect of the choice of the structural model.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Screening design for model sensitivity studies
Welsh, James P.; Koenig, George G.; Bruce, Dorothy
1997-07-01
This paper describes a different approach to sensitivity studies for environmental, including atmospheric, physics models. The sensitivity studies were performed on a multispectral environmental data and scene generation capability. The capability includes environmental physics models that are used to generate data and scenes for simulation of environmental materials, features, and conditions, such as trees, clouds, soils, and snow. These studies were performed because it is difficult to obtain input data for many of the environmental variables. The problem to solve is to determine which of the 100 or so input variables, used by the generation capability, are the most important. These sensitivity studies focused on the generation capabilities needed to predict and evaluate the performance of sensor systems operating in the infrared portions of the electromagnetic spectrum. The sensitivity study approach described uses a screening design. Screening designs are analytical techniques that use a fraction of all of the combinations of the potential input variables and conditions to determine which are the most important. Specifically a 20-run Plackett-Burman screening design was used to study the sensitivity of eight data and scene generation capability computed response variables to 11 selected input variables. This is a two-level design, meaning that the range of conditions is represented by two different values for each of the 11 selected variables. The eight response variables used were maximum, minimum, range, and mode of the model-generated temperature and radiance values. The result is that six of the 11 input variables (soil moisture, solar loading, roughness length, relative humidity, surface albedo, and surface emissivity) had a statistically significant effect on the response variables.
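The core of a two-level screening analysis like the Plackett-Burman study above is the main-effect estimate: for each factor, the difference between the mean response at the high (+1) level and at the low (-1) level. The sketch below demonstrates that step on a tiny 2² full factorial stand-in, not the actual 20-run design or the environmental models of the paper.

```python
def main_effects(design, response):
    """Estimate main effects from a two-level (coded ±1) design:
    effect_j = mean(y where x_j = +1) - mean(y where x_j = -1)."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [y for row, y in zip(design, response) if row[j] == +1]
        lo = [y for row, y in zip(design, response) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# tiny 2^2 full factorial as a stand-in for the 20-run Plackett-Burman
design = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]
# responses from a hypothetical y = 10 + 3*x1 + 0.5*x2
response = [6.5, 7.5, 12.5, 13.5]
effects = main_effects(design, response)  # factor 1 dominates
```

Ranking the absolute effects (against a noise threshold) is what identifies the statistically significant factors, six of eleven in the study above.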
International Nuclear Information System (INIS)
Cloquell-Ballester, Vicente-Agustin; Monterde-Diaz, Rafael; Cloquell-Ballester, Victor-Andres; Santamarina-Siurana, Maria-Cristina
2007-01-01
Assessing the significance of environmental impacts is one of the most important, and at the same time most difficult, processes of Environmental Impact Assessment. This is largely due to the multicriteria nature of the problem. To date, decision techniques used in the process suffer from two drawbacks, namely the problem of compensation and the problem of identification of the 'exact boundary' between sub-ranges. This article discusses these issues and proposes a methodology for determining the significance of environmental impacts based on comparative and sensitivity analyses using the Electre TRI technique. An application of the methodology for the environmental assessment of a Power Plant project within the Valencian Region (Spain) is presented, and its performance evaluated. It is concluded that, contrary to other techniques, Electre TRI automatically identifies those cases where allocation of significance categories is most difficult and, when combined with sensitivity analysis, offers the greatest robustness in the face of variation in weights of the significance attributes. Likewise, this research demonstrates the efficacy of systematic comparison between Electre TRI and sum-based techniques in the solution of assignment problems. The proposed methodology can therefore be regarded as a successful aid to the decision-maker, who will ultimately take the final decision.
Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses
Energy Technology Data Exchange (ETDEWEB)
Nie J.; Braverman J.; Costantino, M.
2013-10-31
This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.
Sensitivities in global scale modeling of isoprene
Directory of Open Access Journals (Sweden)
R. von Kuhlmann
2004-01-01
Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first order estimate at present, and points towards specific processes in need of focused future work.
Applying incentive sensitization models to behavioral addiction
DEFF Research Database (Denmark)
Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne
2014-01-01
The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.
Precipitates/Salts Model Sensitivity Calculation
International Nuclear Information System (INIS)
Mariner, P.
2001-01-01
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.
Sensitivity analyses of acoustic impedance inversion with full-waveform inversion
Yao, Gang; da Silva, Nuno V.; Wu, Di
2018-04-01
Acoustic impedance estimation is of significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with the data generated by density contrasts than velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate for achieving a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion as: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate wavelength components of the velocity model with full-waveform inversion constrained by Gardner's relation; (3) inverting the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow with synthetic tests based on the Marmousi model.
Healthy volunteers can be phenotyped using cutaneous sensitization pain models.
Directory of Open Access Journals (Sweden)
Mads U Werner
Full Text Available BACKGROUND: Human experimental pain models leading to the development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine whether the areas of secondary hyperalgesia were sufficiently consistent to be useful for phenotyping subjects based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject variability was low compared to between-subject variability, and there was substantial strength of agreement between repeated induction sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small area' (1st quartile) and 'large area' (4th quartile) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems to be a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.
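The agreement statistic at the heart of the analysis above, the intraclass correlation coefficient, can be computed from repeated sessions per subject. The sketch below implements a one-way ICC(1,1); the paper does not state which ICC form was used, so this is an illustrative choice.

```python
def icc_oneway(sessions):
    """One-way ICC(1,1): sessions is a list of per-subject tuples
    of repeated measurements, e.g. (session1_area, session2_area)."""
    k = len(sessions[0])          # measurements per subject
    n = len(sessions)             # number of subjects
    grand = sum(sum(s) for s in sessions) / (n * k)
    subj_means = [sum(s) / k for s in sessions]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for s, m in zip(sessions, subj_means) for x in s) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 1 means between-subject differences dominate session-to-session noise, which is precisely the condition that makes phenotyping by hyperalgesia area defensible.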
Energy Technology Data Exchange (ETDEWEB)
Jacques, J
2005-12-15
Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations in its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
Process verification of a hydrological model using a temporal parameter sensitivity analysis
M. Pfannerstill; B. Guse; D. Reusser; N. Fohrer
2015-01-01
To ensure reliable results of hydrological models, it is essential that the models reproduce the hydrological process dynamics adequately. Information about simulated process dynamics is provided by looking at the temporal sensitivities of the corresponding model parameters. For this, the temporal dynamics of parameter sensitivity are analysed to identify the simulated hydrological processes. Based on these analyses it can be verified if the simulated hydrological processes ...
International Nuclear Information System (INIS)
Hotchkis, Michael; Fink, David; Tuniz, Claudio; Vogt, Stephan
2000-01-01
Accelerator Mass Spectrometry (AMS) is the analytical technique of choice for the detection of long-lived radionuclides which cannot be practically analysed by decay counting or conventional mass spectrometry. AMS allows an isotopic sensitivity as low as one part in 10^15 for 14C (5.73 ka), 10Be (1.6 Ma), 26Al (720 ka), 36Cl (301 ka), 41Ca (104 ka), 129I (16 Ma) and other long-lived radionuclides occurring in nature at ultra-trace levels. These radionuclides can be used as tracers and chronometers in many disciplines: geology, archaeology, astrophysics, biomedicine and materials science. Low-level decay counting techniques have been developed in the last 40-50 years to detect the concentration of cosmogenic, radiogenic and anthropogenic radionuclides in a variety of specimens. Radioactivity measurements for long-lived radionuclides are made difficult by low counting rates and, in some cases, by the need for complicated radiochemistry procedures and efficient detectors of soft β-particles and low-energy x-rays. The sensitivity of AMS is unaffected by the half-life of the isotope being measured, since the atoms themselves, not the radiations that result from their decay, are counted directly. Hence, the efficiency of AMS in the detection of long-lived radionuclides is 10^6-10^9 times higher than decay counting, and the size of the sample required for analysis is reduced accordingly. For example, 14C is being analysed in samples containing as little as 20 μg of carbon. There is also a world-wide effort to use AMS for the analysis of rare nuclides of heavy mass, such as actinides, with important applications in safeguards and nuclear waste disposal. Finally, AMS microprobes are being developed for the in-situ analysis of stable isotopes in geological samples, semiconductors and other materials. Unfortunately, the use of AMS is limited by the expensive accelerator technology required, but there are several attempts to develop compact AMS spectrometers at low (≤0.5 MV) terminal
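The advantage of counting atoms over counting decays is easy to quantify: in a measurement time T, a sample of N atoms yields only about N·(1 - e^(-λT)) decays. The sketch below evaluates this for the 20 μg carbon sample mentioned above, assuming a modern 14C/12C ratio of about 1.2·10^-12 (an assumed literature value, not from the abstract).

```python
import math

def decays_during(n_atoms, half_life_s, t_measure_s):
    """Expected number of decays from n_atoms during a measurement
    window, given the isotope's half-life in seconds."""
    lam = math.log(2) / half_life_s
    return n_atoms * (1 - math.exp(-lam * t_measure_s))

# 20 micrograms of carbon at an assumed modern 14C/12C ratio ~1.2e-12
n_carbon = 20e-6 / 12.0 * 6.022e23          # total carbon atoms
n_c14 = n_carbon * 1.2e-12                  # ~1.2 million 14C atoms
half_life = 5730 * 365.25 * 24 * 3600       # 14C half-life in seconds
decays_per_day = decays_during(n_c14, half_life, 24 * 3600)  # ≈ 0.4
```

Roughly one decay every two to three days from over a million 14C atoms: a decay counter sees almost nothing, while AMS can count a useful fraction of the atoms directly, which is the 10^6-10^9 efficiency gain cited above.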
Sensitivity Analysis of a process based erosion model using FAST
Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin
2015-04-01
Erosion, sediment redistribution and related particulate transport are severe problems in agro-ecosystems with highly erodible loess soils. They are controlled by various factors, for example rainfall intensity, topography, initial wetness conditions, spatial patterns of soil hydraulic parameters, land use and tillage practice. The interplay between those factors is not well understood. A number of models were developed to represent those complex interactions and to estimate the amount of sediment which will be removed, transported and accumulated. In order to use physically based models to provide insight into the physical system under study, it is necessary to understand the interactions of parameters and processes in the model domain. Sensitivity analyses give insight into the relative importance of model parameters, which in addition is useful for judging where the greatest efforts have to be spent in acquiring or calibrating input parameters. The objective of this study was to determine the sensitivity of the erosion-related parameters in the CATFLOW model. We analysed simulations from the Weiherbach catchment, where good matches of observed hydrological response and erosion dynamics had been obtained in earlier studies. The Weiherbach catchment is located in an intensively cultivated loess region in southwest Germany and, due to the hilly landscape and the highly erodible loess soils, erosion is a severe environmental problem. CATFLOW is a process-based hydrology and erosion model that can operate on catchment and hillslope scales. Soil water dynamics are described by the Richards equation including effective approaches for preferential flow. Evapotranspiration is simulated using an approach based on the Penman-Monteith equation. The model simulates overland flow using the diffusion wave equation. Soil detachment is related to the attacking forces of rainfall and overland flow, and the erosion resistance of the soil. Sediment transport capacity and sediment
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Directory of Open Access Journals (Sweden)
Gerhard Strydom
2013-01-01
Full Text Available The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
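Nonparametric tolerance intervals of the kind quoted above (95% coverage with 95% confidence) rest on Wilks' formula: the sample maximum bounds the 95th percentile with 95% confidence once 1 - coverage^n ≥ confidence. The sketch below finds the smallest such n; note the study ran far more (800) calculations than this minimum.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the largest of n random runs is a one-sided
    upper tolerance bound: P(at least one run above the coverage
    quantile) = 1 - coverage**n >= confidence."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

n_9595 = wilks_sample_size()  # the classic 95/95 first-order answer: 59
```

Running many more samples than the Wilks minimum, as done here, tightens the empirical tolerance limits and makes the SRS-vs-LHS comparison meaningful.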
Analysing the temporal dynamics of model performance for hydrological models
Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.
2009-01-01
The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
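The quantity Extended-FAST estimates is the first-order variance-based index S_i = Var(E[Y|X_i]) / Var(Y). A crude sketch of that definition, approximating the conditional expectation by binning on a toy additive model (not the ASM2d model; the analytic values S1 = 0.2, S2 = 0.8 follow from the uniform inputs):

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by averaging Y within
    quantile bins of X_i (a brute-force stand-in for FAST/Sobol)."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=100_000), rng.uniform(size=100_000)
y = x1 + 2.0 * x2      # toy additive model: analytic S1 = 0.2, S2 = 0.8

print(round(first_order_index(x1, y), 2), round(first_order_index(x2, y), 2))
```

For non-additive models the first-order indices no longer sum to one, and the shortfall measures the interaction effects the paper reports.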
Applying incentive sensitization models to behavioral addiction.
Rømer Thomsen, Kristine; Fjorback, Lone O; Møller, Arne; Lou, Hans C
2014-09-01
The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling Citable Textual Analyses for the Homer Multitext
Directory of Open Access Journals (Sweden)
Christopher William Blackwell
2016-12-01
The 'Homer Multitext' project (HMT) is documenting the language and structure of Greek epic poetry, and the ancient tradition of commentary on it. The project's primary data consist of editions of Greek texts; automated and manually created readings analyze the texts across historical and thematic axes. This paper describes an abstract model we follow in documenting an open-ended body of diverse analyses. The analyses apply to passages of texts at different levels of granularity; they may refer to overlapping or mutually exclusive passages of text; and they may apply to non-contiguous passages of text. All are recorded with explicit, concise, machine-actionable canonical citation of both text passage and analysis, in a scheme aligning all analyses to a common notional text. We cite our texts with URNs that capture a passage's position in an 'Ordered Hierarchy of Citation Objects' (OHCO2). Analyses are modeled as data objects with five properties. We create collections of 'analytical objects', each uniquely identified by its own URN and each aligned to a particular edition of a text by a URN citation. We can view these analytical objects as an extension of the edition's citation hierarchy; since they are explicitly ordered by their alignment with the edition they analyze, each collection of analyses satisfies the OHCO2 model of a citable text. We call these texts that are derived from and aligned to an edition 'analytical exemplars'.
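The idea of ordering analytical objects by their alignment with an edition can be sketched in a few lines; the class fields and the `urn:x:` identifiers below are illustrative, not the HMT's actual schema:

```python
from dataclasses import dataclass

# Hypothetical 'analytical object': an analysis aligned to an edition
# passage by a CTS-style URN citation (field names are illustrative).
@dataclass(frozen=True)
class Analysis:
    urn: str       # identifier of the analysis itself
    passage: str   # URN of the edition passage it analyzes
    content: str

# The edition's canonical citation order.
edition = ["urn:cts:greekLit:tlg0012.tlg001:1.1",
           "urn:cts:greekLit:tlg0012.tlg001:1.2",
           "urn:cts:greekLit:tlg0012.tlg001:1.3"]

collection = [
    Analysis("urn:x:a2", edition[2], "personification"),
    Analysis("urn:x:a1", edition[0], "invocation of the Muse"),
]

# Ordering the collection by its alignment with the edition makes it
# behave like a citable text in its own right (an 'analytical exemplar').
exemplar = sorted(collection, key=lambda a: edition.index(a.passage))
print([a.urn for a in exemplar])
```

Because the sort key is the edition's own citation sequence, any collection of such objects inherits the ordered-hierarchy property the paper requires.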
DEFF Research Database (Denmark)
Larsen, Lesli Hingstrup; Ängquist, Lars Henrik; Vimaleswaran, Karani S
2012-01-01
Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes.
Use of flow models to analyse loss of coolant accidents
International Nuclear Information System (INIS)
Pinet, Bernard
1978-01-01
This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativeness of the model then has to be checked in experiments involving several elementary physical phenomena.
Sensitivity and uncertainty analysis of a polyurethane foam decomposition model
Energy Technology Data Exchange (ETDEWEB)
HOBBS,MICHAEL L.; ROBINSON,DAVID G.
2000-03-14
Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time-consuming, CPU-intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to fire-like radiative boundary conditions. The complex finite-element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity, calculated as the derivative of the burn front location with respect to time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is itself a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
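The procedure described above is first-order uncertainty propagation: central-difference sensitivities dv/dp_i combined as sigma_v^2 = sum_i (dv/dp_i)^2 sigma_i^2. A minimal sketch with a toy two-parameter response standing in for the burn-velocity model (the function and numbers are assumptions, not the SNL model):

```python
import numpy as np

def propagate_std(f, p, sigma, h=1e-6):
    """First-order uncertainty propagation: central-difference sensitivities
    dv/dp_i combined as sigma_v^2 = sum_i (dv/dp_i * sigma_i)^2."""
    p = np.asarray(p, dtype=float)
    var = 0.0
    for i, s in enumerate(sigma):
        dp = np.zeros_like(p)
        dp[i] = h * max(1.0, abs(p[i]))      # scaled finite-difference step
        dvdp = (f(p + dp) - f(p - dp)) / (2.0 * dp[i])
        var += (dvdp * s) ** 2
    return np.sqrt(var)

# Toy stand-in for the burn-velocity response (assumption, not the foam model)
burn_velocity = lambda p: p[0] * np.exp(-p[1])
sig = propagate_std(burn_velocity, [2.0, 0.5], [0.1, 0.05])
print(round(float(sig), 4))
```

The step size `h` plays the same role as the element and time-step refinement in the paper: too coarse a difference amplifies model noise, too fine a step amplifies round-off.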
Sensitivity analyses of OH missing sinks over Tokyo metropolitan area in the summer of 2007
Directory of Open Access Journals (Sweden)
K. Ishii
2009-11-01
OH reactivity is one of the key indicators of the impact of photochemical reactions in the atmosphere. An observation campaign was conducted in the summer of 2007 at the heart of the Tokyo metropolitan area to measure OH reactivity. The total OH reactivity measured directly by the laser-induced pump and probe technique was higher than the sum of the OH reactivity calculated from concentrations and reaction rate coefficients of individual species measured in this campaign. A three-dimensional air quality simulation was then conducted to evaluate the simulation performance on the total OH reactivity, including 'missing sinks', which correspond to the difference between the measured and calculated total OH reactivity. The simulated OH reactivity is significantly underestimated because the OH reactivity of volatile organic compounds (VOCs) and missing sinks are underestimated. When scaling factors are applied to input emissions and boundary concentrations, a good agreement is observed between the simulated and measured concentrations of VOCs. However, the simulated OH reactivity of missing sinks is still underestimated. Therefore, impacts of unidentified missing sinks are investigated through sensitivity analyses. In the cases where unknown secondary products are assumed to account for unidentified missing sinks, they tend to suppress formation of secondary aerosol components and enhance formation of ozone. In the cases where unidentified primary emitted species are assumed to account for unidentified missing sinks, a variety of impacts may be observed; such species could serve as precursors of secondary organic aerosols (SOA) and significantly increase SOA formation. Missing sinks are considered to play an important role in the atmosphere over the Tokyo metropolitan area.
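The bookkeeping behind a missing sink is simple: total OH reactivity is the OH loss rate R_OH = sum_i k_i [X_i], and the missing sink is the measured total minus that sum. A minimal sketch with illustrative (not campaign) rate coefficients and concentrations:

```python
# Total OH reactivity is the OH loss rate: R_OH = sum_i k_i * [X_i]  [s^-1].
# Rate coefficients and concentrations below are illustrative values only.
k = {"CO": 2.4e-13, "NO2": 1.1e-11, "isoprene": 1.0e-10}   # cm^3 molec^-1 s^-1
conc = {"CO": 5.0e12, "NO2": 5.0e11, "isoprene": 1.0e10}   # molec cm^-3

calculated = sum(k[s] * conc[s] for s in k)        # 1.2 + 5.5 + 1.0 s^-1
measured_total = 10.0          # s^-1, hypothetical pump-and-probe measurement
missing_sink = measured_total - calculated

print(round(calculated, 2), round(missing_sink, 2))
```

With these numbers the speciated budget closes only 7.7 of the 10 s^-1 measured, leaving a 2.3 s^-1 missing sink of the kind the sensitivity analyses attribute to unidentified species.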
Modelling of intermittent microwave convective drying: parameter sensitivity
Directory of Open Access Journals (Sweden)
Zhang Zhijun
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the given microwave power level shows that ambient temperature, effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
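The ±20% one-at-a-time sweep used in the paper can be sketched on a toy first-order drying model (an exponential stand-in for the COMSOL multiphase model; the parameter names and values are assumptions):

```python
import numpy as np

def moisture_after(params, t=3600.0):
    """Toy first-order drying model M(t) = M0 * exp(-k * t); a stand-in
    for the multiphase COMSOL model, used only to show the OAT sweep."""
    return params["M0"] * np.exp(-params["k"] * t)

base = {"M0": 0.8, "k": 5e-4}          # illustrative baseline parameters
y0 = moisture_after(base)

for name in base:
    for factor in (0.8, 1.2):          # the +/-20% perturbation
        p = dict(base)
        p[name] *= factor
        change = (moisture_after(p) - y0) / y0 * 100.0
        print(f"{name} x{factor}: {change:+.1f}% moisture change")
```

Ranking the absolute output changes across parameters reproduces the paper's style of conclusion (which inputs matter, which are negligible) at negligible cost, at the price of missing interaction effects.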
Modelling of intermittent microwave convective drying: parameter sensitivity
Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the given microwave power level shows that ambient temperature, effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach
Directory of Open Access Journals (Sweden)
Alistair McNair Senior
2016-01-01
Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
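The pipeline the authors describe (agent-based contests, then a directed win network, then a network metric) can be sketched minimally; the 'nutritional state' values and the win-probability rule below are assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 'nutritional state' of five agents; better-fed agents are
# assumed to win contests more often (a stand-in for the agent-based model).
state = np.array([0.9, 0.7, 0.5, 0.35, 0.25])
n = len(state)

wins = np.zeros((n, n))        # wins[i, j]: times agent i beat agent j
for _ in range(2000):
    i, j = rng.choice(n, size=2, replace=False)
    p_i = state[i] / (state[i] + state[j])
    winner, loser = (i, j) if rng.random() < p_i else (j, i)
    wins[winner, loser] += 1

# A simple social-network metric: out-strength of the directed win
# network, used here as a crude dominance score for each agent.
dominance = wins.sum(axis=1)
print(dominance)
```

Richer metrics (David's score, eigenvector centrality) would slot in at the last step; the point is that the same win matrix serves both the simulation output and the network analysis.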
SVM models for analysing the headstreams of mine water inrush
Energy Technology Data Exchange (ETDEWEB)
Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics
2007-08-15
The support vector machine (SVM) model was introduced to analyse the headstreams of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams and the value of the SVM decision function was investigated as a means of denoting the hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and a good overall performance. Moreover, the parameter W in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
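A linear SVM's decision function f(x) = w·x + b is exactly where the weight interpretation above comes from: each component of w weights one discrimination index. A minimal sketch using the Pegasos sub-gradient method on synthetic two-index data (the ion names and concentrations are made up; this is the SVM idea, not the paper's H-SVMs formulation):

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos sub-gradient method."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            in_margin = y[i] * (X[i] @ w + b) < 1
            w *= 1.0 - eta * lam            # regularization shrinkage
            if in_margin:                   # hinge-loss sub-gradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

rng = np.random.default_rng(3)
# Two synthetic 'headstreams' described by two hypothetical ion indices.
A = rng.normal([2.0, 1.0], 0.3, size=(40, 2))
B = rng.normal([1.0, 2.5], 0.3, size=(40, 2))
X = np.vstack([A, B])
y = np.array([1] * 40 + [-1] * 40)

w, b = pegasos_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
print(accuracy, w)
```

The magnitude of each entry of `w` indicates how strongly the corresponding index discriminates the two sources, and |f(x)| near zero flags ambiguous (mixed or anomalous) samples.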
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pohl, Andrew Phillip [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jordan, Dirk [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
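The propagation scheme described above (resample each model's empirical residuals, push them through the model chain, collect the output distribution) can be sketched with a toy three-stage chain; the stage models, gains and residual spreads are assumptions, not the report's fitted models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Empirical residuals of each model stage (synthetic stand-ins; in the
# report these come from comparing each model against measurements).
poa_resid = rng.normal(0.0, 20.0, size=500)   # W/m^2
eff_resid = rng.normal(0.0, 10.0, size=500)   # W/m^2
dc_resid  = rng.normal(0.0, 0.5,  size=500)   # kW

def predict_dc_power(ghi):
    """Toy chain GHI -> POA -> effective irradiance -> DC power, with
    uncertainty injected by resampling each stage's residuals."""
    poa = 1.1 * ghi + rng.choice(poa_resid)
    eff = 0.95 * poa + rng.choice(eff_resid)
    return 0.2 * eff + rng.choice(dc_resid)   # notional kW

samples = np.array([predict_dc_power(800.0) for _ in range(5000)])
mean, spread = samples.mean(), samples.std()
print(round(float(mean), 1), round(float(spread / mean * 100), 1), "% spread")
```

Because each stage's residual passes through the downstream gains, the upstream (POA) residuals dominate the output spread here, mirroring the report's sensitivity finding.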
Energy Technology Data Exchange (ETDEWEB)
Helton, J.C. [Arizona State Univ., Tempe, AZ (United States); Bean, J.E. [New Mexico Engineering Research Inst., Albuquerque, NM (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States); Garner, J.W.; Vaughn, P. [Applied Physics, Inc., Albuquerque, NM (United States); Schreiber, J.D. [Science Applications International Corp., Albuquerque, NM (United States); Swift, P.N. [Tech Reps, Inc., Albuquerque, NM (United States)
1993-08-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights on factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
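Latin hypercube sampling, the first technique named above, stratifies each input dimension into n equal-probability bins and places exactly one sample in each bin per dimension. A minimal sketch with a built-in property check (for real analyses the unit cube would be mapped through each parameter's distribution):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Latin hypercube sample of n points in [0,1]^d: each of the n equal
    strata of every dimension contains exactly one point."""
    u = rng.uniform(size=(n, d))                       # jitter within strata
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + u) / n

rng = np.random.default_rng(5)
X = latin_hypercube(100, 3, rng)

# Property check: flooring n*x recovers a permutation of 0..n-1 per column.
ok = all(sorted(np.floor(100 * X[:, j]).astype(int).tolist()) == list(range(100))
         for j in range(3))
print(ok)
```

The stratification is what lets comparatively few BRAGFLO runs cover a 25+ dimensional input space well enough for the stepwise regression and partial correlation steps to rank variables.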
Performance of neutron kinetics models for ADS transient analyses
International Nuclear Information System (INIS)
Rineiski, A.; Maschek, W.; Rimpault, G.
2002-01-01
Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One
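The point-kinetics baseline discussed above has a clean analytic check in the subcritical, source-driven case: at steady state n = -S*Lambda/rho, so the flux level scales as the source multiplication 1/|rho|. A minimal one-delayed-group sketch (parameter values are illustrative, not from SIMMER or KIN3D):

```python
import numpy as np

# One-delayed-group point kinetics with an external neutron source S:
#   dn/dt = (rho - beta)/Lambda * n + lambda * c + S
#   dc/dt = beta/Lambda * n - lambda * c
beta, lam, Lam, S = 0.0065, 0.08, 1e-5, 1.0e7   # illustrative values

def steady_state(rho):
    """Solve the linear system for the subcritical steady state (rho < 0)."""
    A = np.array([[(rho - beta) / Lam, lam],
                  [beta / Lam,        -lam]])
    n, c = np.linalg.solve(A, [-S, 0.0])
    return n

for rho in (-0.01, -0.03):
    # Analytic result: n = -S * Lambda / rho (source multiplication ~ 1/|rho|)
    print(rho, round(steady_state(rho), 1), round(-S * Lam / rho, 1))
```

This zero-dimensional balance is exactly what the point-kinetics option reproduces; the paper's argument is that it cannot capture the accompanying flux *shape* changes during deep-subcritical transients or beam trips, which is what the correction factor technique addresses.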
Lugo-Palacios, David G; Cairns, John
2015-11-01
Ambulatory care sensitive hospitalisations (ACSH) have been widely used to study the quality and effectiveness of primary care. Using data from 248 general hospitals in Mexico during 2001-2011, we identify 926,769 ACSHs in 188 health jurisdictions before and during the health insurance expansion that took place in this period, and estimate a fixed effects model to explain the association of the jurisdiction ACSH rate with patient and community factors. The national ACSH rate increased by 50%, but trends and magnitude varied at the jurisdiction and state level. We find strong associations of the ACSH rate with socioeconomic conditions, health care supply and health insurance coverage, even after controlling for potential endogeneity in the rolling out of the insurance programme. We argue that the traditional focus on the increase/decrease of the ACSH rate might not be a valid indicator of the effectiveness of primary care in a health insurance expansion setting, but that the ACSH rate is useful when compared between and within states once the variation in insurance coverage is taken into account, as it allows the identification of differences in the provision of primary care. The high heterogeneity found in the ACSH rates suggests important state and jurisdiction differences in the quality and effectiveness of primary care in Mexico. Copyright © 2015 Elsevier Ltd. All rights reserved.
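The fixed effects model used above absorbs time-invariant jurisdiction characteristics via the within transformation: demean each jurisdiction's series, then run OLS on the demeaned data. A minimal sketch on a synthetic panel (the variable names and the true coefficient are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
J, T = 50, 10                       # jurisdictions x years (synthetic panel)
alpha = rng.normal(0.0, 2.0, J)     # unobserved jurisdiction effects
x = rng.uniform(0.0, 1.0, (J, T))   # e.g. insurance coverage share
beta_true = 3.0
y = alpha[:, None] + beta_true * x + rng.normal(0.0, 0.1, (J, T))

# Within (fixed effects) estimator: demean by jurisdiction, then OLS.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = float((xd * yd).sum() / (xd ** 2).sum())
print(round(beta_hat, 2))
```

The demeaning removes each jurisdiction's `alpha` entirely, so `beta_hat` recovers the coverage association even though the fixed effects are never estimated explicitly.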
Tackifier Mobility in Model Pressure Sensitive Adhesives
Paiva, Adriana; Li, Xiaoqing
1997-03-01
A systematic study of the molecular mobility of tackifier in a pressure sensitive adhesive (PSA) has been done for the first time. The objective is to relate changes in adhesive performance with tackifier loading to tackifier mobility. The study focused first on a model PSA consisting of anionically polymerized polyisoprene (PI) (Mw = 300,000; Mw/Mn = 1.05) and a single simple tackifier, the n-butyl ester of abietic acid. This model system is fully miscible at room temperature, and its tack performance has been studied. Tackifier mobility was measured using pulsed-gradient spin-echo NMR as a function of tackifier concentration and temperature. The concentration dependence observed for this adhesive with modestly enhanced performance was weak, indicating the tackifier neither plasticizes nor antiplasticizes appreciably. Diffusion in a two-phase system of hydrogenated PI with the same tackifier is similar, though the tack of that adhesive varies much more markedly with composition. In contrast, tackifier mobility varies strongly with composition in a PSA composed of PI with a commercial tackifier chemically similar to the model tackifier, but having a higher molecular weight and glass transition temperature. * Supported in part by US DOD: ARO (DAAH04-93-G-0410)
Performance Assessment and Sensitivity Analyses of Disposal of Plutonium as Can-in-Canister Ceramic
International Nuclear Information System (INIS)
Rainer Senger
2001-01-01
The purpose of this analysis is to examine whether there is a justification for using high-level waste (HLW) as a surrogate for plutonium disposal in can-in-canister ceramic in the total-system performance assessment (TSPA) model for the Site Recommendation (SR). In the TSPA-SR model, the immobilized plutonium waste form is not explicitly represented, but is implicitly represented as an equal number of canisters of HLW. There are about 50 metric tons of plutonium in the U.S. Department of Energy inventory of surplus fissile material that could be disposed of. Approximately 17 tons of this material contain significant quantities of impurities and are considered unsuitable for mixed-oxide (MOX) reactor fuel. This material has been designated for direct disposal by immobilization in a ceramic waste form and encapsulation of this waste form in high-level waste (HLW). The remaining plutonium is suitable for incorporation into MOX fuel assemblies for commercial reactors (Shaw 1999, Section 2). In this analysis, two cases of immobilized plutonium disposal are analyzed, the 17-ton case and the 13-ton case (Shaw et al. 2001, Section 2.2). The MOX spent-fuel disposal is not analyzed in this report. In the TSPA-VA (CRWMS M and O 1998a, Appendix B, Section B-4), the calculated dose release from the immobilized plutonium waste form (can-in-canister ceramic) did not exceed that from an equivalent amount of HLW glass. This indicates that the HLW could be used as a surrogate for the plutonium can-in-canister ceramic. Representation of can-in-canister ceramic by a surrogate is necessary to reduce the number of waste forms in the TSPA model. This reduction reduces the complexity and running time of the TSPA model and makes the analyses tractable. This document was developed under a Technical Work Plan (CRWMS M and O 2000a), and is compliant with that plan. The application of the Quality Assurance (QA) program to the development of that plan (CRWMS M and O 2000a) and of this Analysis is
Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model
DEFF Research Database (Denmark)
Klinge Jacobsen, Henrik
2000-01-01
This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emissions. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation. A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution
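The least-cost fuel choice mechanism at the heart of the supply model can be sketched with a toy merit-order comparison; the fuel prices, emission factors and tax levels below are illustrative, not the Danish model's data:

```python
# Least-cost fuel choice under a CO2 tax whose revenue subsidises biomass.
# Numbers are illustrative only (EUR/GJ fuel cost, tCO2/GJ emission factor).
fuels = {
    "coal":    (2.0, 0.095),
    "gas":     (4.5, 0.057),
    "biomass": (5.5, 0.0),    # treated as CO2-neutral by convention
}

def effective_cost(fuel, co2_tax, biomass_subsidy=0.0):
    price, ef = fuels[fuel]
    cost = price + co2_tax * ef
    if fuel == "biomass":
        cost -= biomass_subsidy
    return cost

def cheapest(co2_tax, biomass_subsidy=0.0):
    return min(fuels, key=lambda f: effective_cost(f, co2_tax, biomass_subsidy))

# A moderate tax alone leaves coal cheapest; recycling the revenue as a
# biomass subsidy flips the merit order.
print(cheapest(0.0), cheapest(30.0), cheapest(30.0, biomass_subsidy=2.0))
```

This is the fuel-substitution sensitivity the abstract refers to: near the cost crossover, small changes in tax or subsidy swing the dispatch from one fuel to another.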
Structure and sensitivity analysis of individual-based predator–prey models
International Nuclear Information System (INIS)
Imron, Muhammad Ali; Gergs, Andre; Berger, Uta
2012-01-01
The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A computationally cheap screening technique, the Morris method, was chosen to assess the relative effects of all parameters on the model's outputs and to gain insights into predator–prey systems. The structure and results of the sensitivity analyses of two models – the Sumatran tiger Panthera Population Persistence (PPP) model and the Notonecta foraging model (NFM) – were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model, as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
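The Morris method screens parameters by averaging absolute 'elementary effects' from one-at-a-time perturbations. A crude numpy sketch (random base points rather than Morris's full trajectory design, and a toy response instead of the PPP or NFM models):

```python
import numpy as np

def morris_mu_star(f, bounds, r=50, delta=0.25, seed=7):
    """Crude Morris-style screening: from r random base points, step each
    parameter by delta (in normalised units) and average the absolute
    elementary effects. Returns mu* for each parameter."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, span = bounds[:, 0], bounds[:, 1] - bounds[:, 0]
    k = len(bounds)
    mu_star = np.zeros(k)
    for _ in range(r):
        # base point chosen so the +delta step stays inside the bounds
        x = lo + span * rng.uniform(0.0, 1.0 - delta, size=k)
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta * span[i]
            mu_star[i] += abs((f(xp) - fx) / delta)
    return mu_star / r

# Toy response standing in for 'prey items killed': strongly driven by x0
# (e.g. prey growth rate), weakly by the x1*x2 interaction.
f = lambda x: 5.0 * x[0] + x[1] * x[2]
mu = morris_mu_star(f, [(0, 1), (0, 1), (0, 1)])
print(mu)
```

The r*(k+1)-evaluation cost of the full Morris design (versus thousands for variance-based methods) is what makes it suitable for expensive individual-based models.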
Tsunami propagation modelling – a sensitivity study
Directory of Open Access Journals (Sweden)
P. Tkalich
2007-12-01
The Indian Ocean Tsunami (2004) and its tragic consequences demonstrated a lack of relevant experience and preparedness among the affected coastal nations. After the event, scientific and forecasting circles of affected countries started capacity building to tackle similar problems in the future. Different approaches have been used for tsunami propagation, such as the Boussinesq and Nonlinear Shallow Water Equations (NSWE). These approximations were obtained assuming different relative importance of nonlinear, dispersion and spatial gradient variation phenomena and terms. The paper describes further development of the original TUNAMI-N2 model to take into account additional phenomena: astronomic tide, sea bottom friction, dispersion, Coriolis force, and spherical curvature. The code is modified to be suitable for operational forecasting, and the resulting version (TUNAMI-N2-NUS) is verified using test cases, results of other models, and real case scenarios. Using the 2004 Tsunami event as one of the scenarios, the paper examines the sensitivity of numerical solutions to variation of different phenomena and parameters, and the results are analyzed and ranked accordingly.
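The shallow-water core of such codes can be illustrated with a minimal 1-D linear solver on a staggered grid; this toy omits everything the paper adds (friction, dispersion, tide, Coriolis, sphericity) and uses made-up depth and domain values:

```python
import numpy as np

# Minimal 1-D linear shallow-water solver (staggered forward-backward
# scheme); a toy analogue of the NSWE core, not TUNAMI-N2 itself.
g, h = 9.81, 4000.0                  # gravity, uniform ocean depth [m]
L, nx = 400e3, 400                   # domain length [m], number of cells
dx = L / nx
c = np.sqrt(g * h)                   # analytic long-wave speed (~198 m/s)
dt = 0.4 * dx / c                    # CFL-limited time step

x = (np.arange(nx) + 0.5) * dx
eta = np.exp(-((x - 100e3) / 10e3) ** 2)   # initial surface hump [m]
u = np.zeros(nx + 1)                        # velocities at cell faces

t = 0.0
while t < 300.0:
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx   # momentum update
    eta -= dt * h * (u[1:] - u[:-1]) / dx           # continuity update
    t += dt

# The hump splits into two waves travelling at +/-c; after ~300 s the
# right-going crest should sit near 100 km + c*300 s ~ 159 km.
i0 = 120
crest_km = x[i0 + np.argmax(eta[i0:])] / 1e3
print(round(crest_km, 1))
```

Checking the numerical crest position against the analytic phase speed sqrt(g*h) is the simplest of the verification tests such models undergo before real scenarios are attempted.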
International Nuclear Information System (INIS)
Sasamoto, Hiroshi; Wilson, James; Sato, Tsutomu
2013-01-01
Performance assessment of geological disposal systems for high-level radioactive waste requires a consideration of long-term systems behaviour. It is possible that the alteration of swelling clay present in bentonite buffers might have an impact on buffer functions. In the present study, iron (as a candidate overpack material)-bentonite (I-B) interactions were evaluated as the main buffer alteration scenario. Existing knowledge on alteration of bentonite during I-B interactions was first reviewed, then the evaluation methodology was developed considering modeling techniques previously used overseas. A conceptual model for smectite alteration during I-B interactions was produced. The following reactions and processes were selected: 1) release of Fe²⁺ due to overpack corrosion; 2) diffusion of Fe²⁺ in compacted bentonite; 3) sorption of Fe²⁺ on smectite edges and ion exchange in interlayers; 4) dissolution of primary phases and formation of alteration products. Sensitivity analyses were performed to identify the most important factors for the alteration of bentonite by I-B interactions. (author)
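Process 2) above, diffusion of Fe(II) through compacted bentonite retarded by sorption, is commonly reduced to a single apparent diffusivity De/R. A minimal explicit finite-difference sketch (all parameter values are illustrative, not from the study):

```python
import numpy as np

# 1-D diffusion of Fe(II) into compacted bentonite with linear sorption,
# modelled via a retardation factor R:  dC/dt = (De/R) * d2C/dx2.
De, R = 1e-10, 500.0            # effective diffusivity [m^2/s], retardation
nx, dx = 200, 1e-4              # 2 cm domain, 0.1 mm cells
D_app = De / R
dt = 0.4 * dx**2 / D_app        # within the explicit stability limit (0.5)

C = np.zeros(nx)
C[0] = 1.0                      # corroding overpack keeps boundary saturated

t_end = 3.15e7                  # ~one year of interaction
for _ in range(int(t_end / dt)):
    C[1:-1] += D_app * dt * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    C[0] = 1.0

# Depth where concentration falls below 1% of the boundary value:
depth = np.argmax(C < 0.01) * dx
print(round(depth * 1000, 2), "mm")
```

The penetration depth scales as sqrt(De*t/R) (about 9 mm here), which is why the retardation factor, lumping the sorption and ion-exchange processes, is a natural target for the sensitivity analyses.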
Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling
Energy Technology Data Exchange (ETDEWEB)
Du, Qiang [Pennsylvania State Univ., State College, PA (United States)
2014-11-12
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next
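The key property the abstract relies on, that the nonlocal peridynamic force converges to the local elastic force for smooth deformations, can be illustrated in one dimension. The kernel and micromodulus below follow a standard linearized bond-based form and are not taken from the project itself.

```python
import numpy as np

nx = 400
dx = 1.0 / nx
x = np.arange(nx) * dx                     # periodic domain [0, 1)
u = 0.01 * np.sin(2 * np.pi * x)           # smooth displacement field
E = 1.0                                    # Young's modulus (illustrative units)
delta = 0.05                               # peridynamic horizon
m = int(delta / dx)                        # grid points per side within the horizon
c = 2 * E / delta**2                       # micromodulus matched to elasticity

# Nonlocal internal force: a sum over bonds within the horizon, analogous
# to the pairwise forces of molecular dynamics.
force = np.zeros(nx)
for j in range(1, m + 1):
    xi = j * dx                            # bond length
    force += c * (np.roll(u, -j) + np.roll(u, j) - 2 * u) / xi * dx

# For smooth fields the nonlocal force approaches the local force E * u''(x).
local = -E * (2 * np.pi) ** 2 * 0.01 * np.sin(2 * np.pi * x)
```

With c = 2E/delta^2 the horizon integral reproduces E*u'' up to O(delta^2) terms, which is the sense in which peridynamics limits to classical elasticity at large length scales.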
Analysing earthquake slip models with the spatial prediction comparison test
Zhang, L.
2014-11-10
Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
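The core of the approach, a loss differential field between two candidate models relative to a reference, can be sketched with synthetic data. The fields, the absolute-error loss, and the sign-flip permutation test below are illustrative; a full SPCT additionally corrects for spatial correlation of the loss differential, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(model, ref):
    """Pointwise absolute-error loss between a 2-D slip field and a reference."""
    return np.abs(model - ref)

# Synthetic 2-D "slip" fields standing in for inverted rupture models.
ref = rng.random((20, 30))
model_a = ref + 0.05 * rng.standard_normal(ref.shape)   # close to reference
model_b = ref + 0.30 * rng.standard_normal(ref.shape)   # farther away

# Loss differential field: negative values favour model A.
d = loss(model_a, ref) - loss(model_b, ref)

# Sign-flip permutation test on the mean loss differential.
observed = d.mean()
flips = np.array([(d * rng.choice([-1, 1], size=d.shape)).mean()
                  for _ in range(999)])
p_value = (np.abs(flips) >= np.abs(observed)).mean()
```

A small p-value says the two models are distinguishably far apart under the chosen loss, which is the basis for ranking slip models against the reference.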
Sensitivity analysis practices: Strategies for model-based inference
Energy Technology Data Exchange (ETDEWEB)
Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)
2006-10-15
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of the existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present the best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
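The variance-based measures advocated here can be sketched on a toy model with an interaction term, exactly the situation where OAT screening around a nominal point misleads. The model and the brute-force double-loop estimator below are illustrative; Saltelli's sampling scheme computes the same indices far more cheaply.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    # Non-additive toy model: the x1*x2 interaction defeats OAT screening.
    return x1 + 5.0 * x2 + x1 * x2

# First-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y), by double-loop
# Monte Carlo with X_i ~ U(-1, 1).
n_outer, n_inner = 500, 500

def first_order(which):
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        xi = rng.uniform(-1, 1)                    # fix the studied factor
        other = rng.uniform(-1, 1, n_inner)        # sample the other factor
        ys = model(xi, other) if which == 0 else model(other, xi)
        cond_means[k] = ys.mean()                  # E[Y | X_i = xi]
    x1 = rng.uniform(-1, 1, 20000)
    x2 = rng.uniform(-1, 1, 20000)
    return cond_means.var() / model(x1, x2).var()

s1, s2 = first_order(0), first_order(1)            # analytically 3/79 and 75/79
```

The indices sum to less than one because of the interaction term; the gap is itself diagnostic, which is information an OAT scan cannot provide.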
Sensitivity analysis practices: Strategies for model-based inference
International Nuclear Information System (INIS)
Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca
2006-01-01
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of the existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present the best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
Sensitivity analysis of numerical model of prestressed concrete containment
Energy Technology Data Exchange (ETDEWEB)
Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz
2015-12-15
Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh, is assessed in the main stages of the life cycle of the containment. Although the modeling adjustments did not produce any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.
Sensitivity analysis of Smith's AMRV model
International Nuclear Information System (INIS)
Ho, Chih-Hsiang
1995-01-01
Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (= 2.97 x 10^-2). The minimum hazard is produced by the 'three-expert prior' (i.e., values of p are equally likely at 10^-3, 10^-2, and 10^-1). The estimate of that hazard is 1.39 x, which is only about one order of magnitude smaller than the maximum value. The term 'hazard' is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years
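The averaging step, taking the expectation of a quantity of interest over the elicited prior π(p), can be sketched by Monte Carlo over a few of the listed Beta priors. The mapping from the disruption parameter p to the 10,000-year hazard is specific to the paper, so the identity map is used below as a placeholder quantity of interest.

```python
import numpy as np

rng = np.random.default_rng(2)

# E[f(p)] = ∫ f(p) π(p) dp estimated by sampling the prior; f is a
# placeholder identity here (the paper's hazard mapping is not reproduced).
priors = {"Beta(2,2)": (2, 2), "Beta(8,2)": (8, 2), "Beta(2,8)": (2, 8)}
expected = {name: rng.beta(r, s, 100_000).mean()
            for name, (r, s) in priors.items()}
```

Beta(8, 2), with its mass concentrated at large p, gives the largest expectation, mirroring the abstract's finding that this prior yields the highest hazard.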
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
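The local-versus-global distinction drawn here can be made concrete on a toy response whose form is hypothetical: near the nominal point one factor looks inert, yet over its full range it dominates the output variance.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x1, x2):
    # Toy response with a strong nonlinearity in x2 (hypothetical form).
    return 0.3 * x1 + x2 ** 3

# Local sensitivity: small symmetric perturbations around the nominal (0, 0).
eps = 1e-4
local_x1 = (model(eps, 0) - model(-eps, 0)) / (2 * eps)   # derivative = 0.3
local_x2 = (model(0, eps) - model(0, -eps)) / (2 * eps)   # derivative ~ 0

# Global sensitivity: variance contributions over x_i ~ U(-1, 1).
x1 = rng.uniform(-1, 1, 100_000)
x2 = rng.uniform(-1, 1, 100_000)
y = model(x1, x2)
global_x1 = model(x1, 0).var() / y.var()
global_x2 = model(0, x2).var() / y.var()
```

Local analysis ranks x1 above x2 (the cubic term has zero slope at the origin), while the global measures reverse the ranking, which is the review's central caution about relying on local methods alone.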
Sensitivity study of reduced models of the activated sludge process ...
African Journals Online (AJOL)
The problem of derivation and calculation of sensitivity functions for all parameters of the mass balance reduced model of the COST benchmark activated sludge plant is formulated and solved. The sensitivity functions, equations and augmented sensitivity state space models are derived for the cases of ASM1 and UCT ...
Integration efficiency for model reduction in micro-mechanical analyses
van Tuijl, Rody A.; Remmers, Joris J. C.; Geers, Marc G. D.
2017-11-01
Micro-structural analyses are an important tool to understand material behavior on a macroscopic scale. The analysis of a microstructure is usually computationally very demanding, and several reduced order modeling techniques are available in the literature to limit the computational costs of repetitive analyses of a single representative volume element. These techniques to speed up the integration at the micro-scale can be roughly divided into two classes: methods interpolating the integrand, and cubature methods. The empirical interpolation method (high-performance reduced order modeling) and the empirical cubature method are assessed in terms of their accuracy in approximating the full-order result. A micro-structural volume element is therefore considered, subjected to four load cases, including cyclic and path-dependent loading. The differences in approximating the micro- and macroscopic quantities of interest are highlighted, e.g. micro-fluctuations and stresses. Algorithmic speed-ups for both methods with respect to the full-order micro-structural model are quantified. The pros and cons of both classes are thereby clearly identified.
Uncertainty and sensitivity analysis of environmental transport models
International Nuclear Information System (INIS)
Margulies, T.S.; Lancaster, L.E.
1985-01-01
An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contribution of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site) step-wise regression analyses were performed on transformations of the spatial concentration patterns simulated. 27 references, 2 figures, 3 tables
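The Latin hypercube sampling scheme named in the abstract is easy to sketch: each variable's range is cut into n equal strata and exactly one draw is taken per stratum, with strata paired at random across variables. The two input ranges below are hypothetical stand-ins for CRAC-2 parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def latin_hypercube(n, bounds, rng):
    """n-sample Latin hypercube over the given (lo, hi) bounds per dimension."""
    d = len(bounds)
    # Ranks 0..n-1 in random order per dimension, jittered within each stratum.
    strata = np.argsort(rng.random((n, d)), axis=0)
    u = (strata + rng.random((n, d))) / n
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# Hypothetical ranges for two of the inputs named in the abstract.
bounds = [(0.001, 0.03),    # dry deposition velocity, m/s
          (100.0, 2000.0)]  # mixing height, m
sample = latin_hypercube(50, bounds, rng)
```

Compared with plain Monte Carlo, the stratification guarantees each marginal range is covered evenly, which is why LHS is a common front end for regression-based sensitivity studies like the one described.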
A 1024 channel analyser of model FH 465
International Nuclear Information System (INIS)
Tang Cunxun
1988-01-01
The FH 465 is an updated version of the model FH 451 1024-channel analyser. Besides the simple operation and fine display featured by its predecessor, the core memory is replaced by semiconductor memory and the level of integration has been improved; the use of 74LS low-power devices, widely used throughout the world, has not only greatly decreased the cost but also allows easy interfacing with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described
Global sensitivity analysis of thermomechanical models in modelling of welding
International Nuclear Information System (INIS)
Petelet, M.
2008-01-01
The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology requires some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and leads to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
Energy Technology Data Exchange (ETDEWEB)
Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)
1996-07-01
The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.
International Nuclear Information System (INIS)
Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.
1996-07-01
The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.
Earth system sensitivity inferred from Pliocene modelling and data
Lunt, D.J.; Haywood, A.M.; Schmidt, G.A.; Salzmann, U.; Valdes, P.J.; Dowsett, H.J.
2010-01-01
Quantifying the equilibrium response of global temperatures to an increase in atmospheric carbon dioxide concentrations is one of the cornerstones of climate research. Components of the Earth's climate system that vary over long timescales, such as ice sheets and vegetation, could have an important effect on this temperature sensitivity, but have often been neglected. Here we use a coupled atmosphere-ocean general circulation model to simulate the climate of the mid-Pliocene warm period (about three million years ago), and analyse the forcings and feedbacks that contributed to the relatively warm temperatures. Furthermore, we compare our simulation with proxy records of mid-Pliocene sea surface temperature. Taking these lines of evidence together, we estimate that the response of the Earth system to elevated atmospheric carbon dioxide concentrations is 30-50% greater than the response based on those fast-adjusting components of the climate system that are used traditionally to estimate climate sensitivity. We conclude that targets for the long-term stabilization of atmospheric greenhouse-gas concentrations aimed at preventing a dangerous human interference with the climate system should take into account this higher sensitivity of the Earth system. © 2010 Macmillan Publishers Limited. All rights reserved.
Falkenberg, Katrina J; Gould, Cathryn M; Johnstone, Ricky W; Simpson, Kaylene J
2014-01-01
Identification of mechanisms of resistance to histone deacetylase inhibitors, such as vorinostat, is important in order to utilise these anticancer compounds more efficiently in the clinic. Here, we present a dataset containing multiple tiers of stringent siRNA screening for genes that when knocked down conferred sensitivity to vorinostat-induced cell death. We also present data from a miRNA overexpression screen for miRNAs contributing to vorinostat sensitivity. Furthermore, we provide transcriptomic analysis using massively parallel sequencing upon knockdown of 14 validated vorinostat-resistance genes. These datasets are suitable for analysis of genes and miRNAs involved in cell death in the presence and absence of vorinostat as well as computational biology approaches to identify gene regulatory networks.
Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size
ALWI, IDRUS
2011-01-01
The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch Model approaches for detecting differential item functioning, observed from the sample size. These two differential item functioning (DIF) methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
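The Mantel-Haenszel side of the comparison can be sketched directly: examinees are stratified by their score on the remaining items and a common odds ratio is pooled across strata. The data below are simulated for illustration only, with DIF built in so that the studied item is harder for the focal group.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 2000
group = rng.integers(0, 2, n)                      # 0 = reference, 1 = focal
ability = rng.standard_normal(n)
p_other = 1 / (1 + np.exp(-ability[:, None]))
other_score = (rng.random((n, 10)) < p_other).sum(axis=1)   # matching criterion
p_item = 1 / (1 + np.exp(-(ability - 0.8 * group)))         # 0.8-logit DIF
item = (rng.random(n) < p_item).astype(int)

# Mantel-Haenszel common odds ratio pooled over score strata.
num = den = 0.0
for s in np.unique(other_score):
    in_s = other_score == s
    a = np.sum(in_s & (group == 0) & (item == 1))  # reference, correct
    b = np.sum(in_s & (group == 0) & (item == 0))  # reference, incorrect
    c = np.sum(in_s & (group == 1) & (item == 1))  # focal, correct
    d = np.sum(in_s & (group == 1) & (item == 0))  # focal, incorrect
    t = a + b + c + d
    num += a * d / t
    den += b * c / t
alpha_mh = num / den   # > 1 means the item favours the reference group
```

Stratifying on the matching score is what separates genuine DIF from a plain ability difference between the groups, which is the property the study's sample-size comparison exercises.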
Multi-state models: metapopulation and life history analyses
Directory of Open Access Journals (Sweden)
Arnason, A. N.
2004-06-01
Full Text Available Multi-state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi-state models, by permitting estimation of stage-specific survival and transition rates, can help assess trade-offs between life history mechanisms (e.g., Yoccoz et al., 2000). These trade-offs are also important in meta-population analyses where, for example, the pre- and post-breeding rates of transfer among sub-populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al., 2003; Breton et al., in review). Further examples of the use of multi-state models in analysing dispersal and life-history trade-offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall into two main categories: those that address life history questions using stage categories, and a more technical use of multi-state models to address problems arising from the violation of mark-recapture assumptions, leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi-state Mark-Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such
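The stage-specific survival and transition rates these models estimate can be illustrated with a minimal forward simulation of two breeding states. The rates below are invented, and real MSMR models also estimate detection probability, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)

survival = {"B": 0.85, "N": 0.80}                  # stage-specific survival
transition = {"B": {"B": 0.7, "N": 0.3},           # movement between states,
              "N": {"B": 0.4, "N": 0.6}}           # conditional on survival

n, years = 2000, 6
states = np.full((n, years), "dead", dtype=object)
states[:, 0] = rng.choice(["B", "N"], size=n)
for t in range(1, years):
    for i in range(n):
        s = states[i, t - 1]
        if s == "dead" or rng.random() > survival[s]:
            continue                                # already dead, or dies now
        probs = transition[s]
        states[i, t] = rng.choice(list(probs), p=list(probs.values()))

# Naive recovery of the rates from the simulated histories.
prev, nxt = states[:, :-1].ravel(), states[:, 1:].ravel()
phi_b = np.mean(nxt[prev == "B"] != "dead")         # survival of breeders
alive = (prev == "B") & (nxt != "dead")
psi_bn = np.mean(nxt[alive] == "N")                 # B -> N transition rate
```

With perfect detection the empirical rates recover the inputs; the violations discussed in the session (unobservable states, incomplete coverage) are precisely what breaks this naive tabulation and motivates the full MSMR machinery.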
The Sensitivity of Evapotranspiration Models to Errors in Model ...
African Journals Online (AJOL)
Three levels of sensitivity, herein termed sensitivity ratings, were established, namely: 'Highly Sensitive' (Rating: 1); 'Moderately Sensitive' (Rating: 2); and 'Not Too Sensitive' (Rating: 3). The ratings were based on the amount of error in the measured parameter needed to introduce a +10% relative error in the predicted Et. The level of ...
Comparison between tests and analyses for ground-foundation models
International Nuclear Information System (INIS)
Moriyama, Ken-ichi; Hibino, Hirosi; Izumi, Masanori; Kiya, Yukiharu.
1991-01-01
The laboratory tests were carried out on two ground models made of silicone rubber (hard and soft ground models) and a foundation model made of aluminum in order to confirm experimentally the embedment effects on the soil-structure interaction system. The details of the procedure and the results of the tests are described in the companion paper. Analytical studies on the embedment effect on the seismic response of buildings have been performed in recent years, and the analysis tools have been used in the seismic design procedure of nuclear power plant facilities. In this paper, the embedment effects on the soil-structure interaction system are confirmed by simulation analysis, and the analysis tools are verified through the simulation analysis. The following conclusions can be drawn from the comparison between laboratory test results and analysis results. (1) The effects of embedment, such as the increase in the impedance functions and the rotational component of foundation input motions, were clarified by the simulation analyses and laboratory tests. (2) The analysis results of the axisymmetric FEM showed good agreement with the test results processed by means of the transient response to eliminate the reflected waves, and the analysis tools were thereby confirmed experimentally. (3) The excavated portion of the soil affected the foundation input motion rather than the impedance function, since there was little difference between the impedance functions obtained by wave propagation theory and those obtained by the axisymmetric FEM, while the rotational component of the foundation input motions increased significantly. (J.P.N.)
Alduraywish, S A; Lodge, C J; Campbell, B; Allen, K J; Erbas, B; Lowe, A J; Dharmage, S C
2016-01-01
There is growing evidence for an increase in food allergies. The question of whether early life food sensitization, a primary step in food allergies, leads to other allergic disease is a controversial but important issue. Birth cohorts are an ideal design to answer this question. We aimed to systematically investigate and meta-analyse the evidence for associations between early food sensitization and allergic disease in birth cohorts. MEDLINE and SCOPUS databases were searched for birth cohorts that have investigated the association between food sensitization in the first 2 years and subsequent wheeze/asthma, eczema and/or allergic rhinitis. We performed meta-analyses using random-effects models to obtain pooled estimates, stratified by age group. The search yielded fifteen original articles representing thirteen cohorts. Early life food sensitization was associated with an increased risk of infantile eczema, childhood wheeze/asthma, eczema and allergic rhinitis and young adult asthma. Meta-analyses demonstrated that early life food sensitization is related to an increased risk of wheeze/asthma (pooled OR 2.9; 95% CI 2.0-4.0), eczema (pooled OR 2.7; 95% CI 1.7-4.4) and allergic rhinitis (pooled OR 3.1; 95% CI 1.9-4.9) from 4 to 8 years. Food sensitization in the first 2 years of life can identify children at high risk of subsequent allergic disease who may benefit from early life preventive strategies. However, due to potential residual confounding in the majority of studies combined with lack of follow-up into adolescence and adulthood, further research is needed. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
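The pooled odds ratios quoted here come from random-effects meta-analysis; the standard DerSimonian-Laird pooling can be sketched from study-level ORs and confidence intervals. The four (OR, lower, upper) triples below are illustrative inputs, not the review's actual study data.

```python
import numpy as np

# Illustrative study-level odds ratios with 95% CIs: (OR, lower, upper).
studies = [(2.4, 1.5, 3.8), (3.5, 2.1, 5.8), (2.2, 1.1, 4.4), (4.0, 2.2, 7.3)]
y = np.log([s[0] for s in studies])                        # log odds ratios
se = (np.log([s[2] for s in studies])
      - np.log([s[1] for s in studies])) / (2 * 1.96)      # SE from CI width
w = 1.0 / se**2                                            # fixed-effect weights

# DerSimonian-Laird moment estimate of the between-study variance tau^2.
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1.0 / (se**2 + tau2)                                # random-effects weights
log_pooled = np.sum(w_re * y) / w_re.sum()
pooled_or = np.exp(log_pooled)
ci_95 = np.exp(log_pooled + np.array([-1.96, 1.96]) / np.sqrt(w_re.sum()))
```

When the heterogeneity statistic Q falls below its degrees of freedom, tau^2 is truncated to zero and the random-effects pooled estimate coincides with the fixed-effect one.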
Scoping and sensitivity analyses for the Demonstration Tokamak Hybrid Reactor (DTHR)
International Nuclear Information System (INIS)
Sink, D.A.; Gibson, G.
1979-03-01
The results of an extensive set of parametric studies are presented which provide analytical data on the effects of various tokamak parameters on the performance and cost of the DTHR (Demonstration Tokamak Hybrid Reactor). The studies were centered on a point design which is described in detail. Variations in the device size, neutron wall loading, and plasma aspect ratio are presented, and the effects on direct hardware costs, fissile fuel production (breeding), fusion power production, electrical power consumption, and thermal power production are shown graphically. The studies considered both ignition and beam-driven operations of the DTHR and yielded results based on two empirical scaling laws presently used in reactor studies. Sensitivity studies were also made for variations in the following key parameters: the plasma elongation, the minor radius, the TF coil peak field, the neutral beam injection power, and the Z_eff of the plasma.
Particle transport model sensitivity on wave-induced processes
Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna
2017-04-01
Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model. These particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. However, in the real ocean the waves also play the role of a reservoir for momentum and energy, because different amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost from wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity, and the resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The performed sensitivity analyses demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for drift-model performance, especially during extremes. These effects cannot be neglected in search-and-rescue, oil-spill, biological transport, or larval drift modelling.
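The size of the effect the abstract describes, Stokes drift being comparable to the current during storms, can be sketched with a minimal particle-drift calculation. The velocities and the deep-water decay scale below are illustrative values, not taken from the WAM/NEMO configuration.

```python
import numpy as np

def stokes_drift(u_s0, z, k=0.05):
    """Deep-water Stokes drift profile u_s(z) = u_s0 * exp(2*k*z), z <= 0
    (k is an illustrative peak wavenumber in 1/m)."""
    return u_s0 * np.exp(2 * k * z)

dt, n_steps = 600.0, 144          # 10-minute steps over one day
u_current = 0.20                  # m/s, background Eulerian current
u_s0 = 0.10                       # m/s, surface Stokes drift during a storm
x_no_waves = x_waves = 0.0
for _ in range(n_steps):
    x_no_waves += u_current * dt
    x_waves += (u_current + stokes_drift(u_s0, z=0.0)) * dt
extra_drift_km = (x_waves - x_no_waves) / 1000.0
```

Even this crude budget shifts a surface particle by several kilometres per day, which is why neglecting the wave terms degrades search-and-rescue and oil-spill forecasts.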
A theoretical model for analysing gender bias in medicine
Directory of Open Access Journals (Sweden)
Johansson Eva E
2009-08-01
During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider, in biology and disease as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.
Energy Technology Data Exchange (ETDEWEB)
Behler, Matthias; Bock, Matthias; Stuke, Maik; Wagner, Markus
2014-06-15
This work describes statistical analyses based on Monte Carlo sampling methods for criticality safety analyses. The methods analyse a large number of calculations of a given problem with statistically varied model parameters to determine uncertainties and sensitivities of the computed results. The GRS development SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is a modular, easily extensible abstract interface program designed to perform such Monte Carlo sampling based uncertainty and sensitivity analyses in the field of criticality safety. It couples different criticality and depletion codes commonly used in nuclear criticality safety assessments to the well-established GRS tool SUSA for sensitivity and uncertainty analyses. For uncertainty analyses of criticality calculations, SUnCISTT couples various SCALE sequences developed at Oak Ridge National Laboratory and the general Monte Carlo N-particle transport code MCNP from Los Alamos National Laboratory to SUSA. The impact of manufacturing tolerances of a fuel assembly configuration on the neutron multiplication factor for the various sequences is shown. Uncertainties in nuclear inventories, dose rates, or decay heat can be investigated via the coupling of the GRS depletion system OREST to SUSA. Some results for a simplified irradiated Pressurized Water Reactor (PWR) UO{sub 2} fuel assembly are shown. SUnCISTT also combines the two aforementioned modules for burn-up credit criticality analysis of spent nuclear fuel, to ensure an uncertainty and sensitivity analysis that applies the variations of manufacturing tolerances in the burn-up code and criticality code simultaneously. Calculations and results for a storage cask loaded with typical irradiated PWR UO{sub 2} fuel are shown, including Monte Carlo sampled axial burn-up profiles. The application of SUnCISTT in the field of code validation, specifically, how it is applied to compare a simulation model to available benchmark
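The Monte Carlo sampling scheme that SUnCISTT automates can be sketched as follows. Here `keff_model` is a hypothetical stand-in for a criticality code call (SCALE/MCNP in the real tool chain), and the tolerance ranges are illustrative, not from the report:

```python
import random
import statistics

def keff_model(enrichment, pitch):
    # Hypothetical stand-in for a criticality code run; any smooth
    # response suffices to illustrate the sampling scheme.
    return 0.90 + 0.02 * enrichment + 0.05 * (pitch - 1.26)

random.seed(1)
keff_samples = []
for _ in range(1000):
    # Manufacturing tolerances modeled as uniform distributions
    e = random.uniform(3.9, 4.1)     # enrichment, wt% (illustrative)
    p = random.uniform(1.25, 1.27)   # pin pitch, cm (illustrative)
    keff_samples.append(keff_model(e, p))

k_mean = statistics.fmean(keff_samples)
k_sigma = statistics.stdev(keff_samples)   # sampled k-eff uncertainty
```

The spread of the sampled multiplication factors is the uncertainty estimate; sensitivities follow from correlating each varied input with the sampled outputs.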
Energy Technology Data Exchange (ETDEWEB)
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
Uncertainty propagation on fuel cycle codes: Monte Carlo vs Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
García Martínez, M.; Alvarez-Velarde, F.
2015-07-01
Uncertainty propagation on fuel cycle calculations is usually limited by parametric restrictions that only allow the study of small sets of linearly correlated input and output parameters. A Monte Carlo tool has been developed in order to be able to address the simultaneous impact of several magnitudes' uncertainties on the final results, no matter the relationship between them. The TR_EVOL code has been updated and optimized in order to be able to run a significant number of perturbed samples of the same reference scenario. Both a Sensitivity Analysis and a Monte Carlo technique have been implemented in the code. The first aims to address the contribution of each input parameter to the output magnitudes, while the second is intended to provide a better estimation of the global uncertainty when non-linear relations do not allow such an approach. These two methodologies have been applied to the study of a series of scenarios developed from an OECD/NEA study, which is of particular interest for Europe. The results are presented in terms of materials' mass according to their total accumulated value, final value or maximum reached value, as defined by the user. These results are given as mean values and their uncertainties as the standard deviation of the samples. Non-linear effects can be seen as biases that affect the shape of the results' Gaussian curves. (Author)
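The two methodologies compared above can be contrasted on a toy scenario function. Everything below (the `scenario` response, its inputs, coefficients, and spreads) is hypothetical, chosen only to show one-at-a-time sensitivity analysis next to simultaneous Monte Carlo perturbation:

```python
import random
import statistics

def scenario(u_demand, burnup):
    # Hypothetical fuel-cycle output (e.g. an accumulated mass); the
    # small cross term makes the response mildly non-linear.
    return 100.0 * u_demand + 0.5 * burnup + 0.001 * u_demand * burnup

nominal = {"u_demand": 10.0, "burnup": 45.0}
spread = {"u_demand": 0.5, "burnup": 2.0}

# Sensitivity analysis: perturb one input at a time about the nominal
oat = {k: abs(scenario(**{**nominal, k: nominal[k] + spread[k]})
              - scenario(**nominal)) for k in nominal}

# Monte Carlo: perturb all inputs simultaneously
random.seed(0)
mc = [scenario(random.gauss(nominal["u_demand"], spread["u_demand"]),
               random.gauss(nominal["burnup"], spread["burnup"]))
      for _ in range(2000)]
global_sigma = statistics.stdev(mc)   # global uncertainty estimate
```

The one-at-a-time figures attribute uncertainty per input; the Monte Carlo standard deviation captures their combined, possibly non-linear, effect.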
Oral sensitization to food proteins: A Brown Norway rat model
Knippels, L.M.J.; Penninks, A.H.; Spanhaak, S.; Houben, G.F.
1998-01-01
Background: Although several in vivo antigenicity assays using parenteral immunization are operational, no adequate enteral sensitization models are available to study food allergy and allergenicity of food proteins. Objective: This paper describes the development of an enteral model for food
Sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
Sensitivity analysis is a tool used in the assessment of a model's performance. This study examined the application of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rates. The study area is Effurun, Uvwie Local Government Area of Delta State of Nigeria. In order to ...
Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François
2017-03-01
Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in cases of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a three-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast in the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than the air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast values (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.
Multi-Objective Sensitivity Analyses for Power Generation Mix: Malaysia Case Study
Siti Mariam Mohd Shokri; Nofri Yenita Dahlan; Hasmaini Mohamad
2017-01-01
This paper presents an optimization framework to determine the long-term optimal generation mix for the Malaysia power sector using a Dynamic Programming (DP) technique. Several new candidate units with a pre-defined MW capacity were included in the model for generation expansion planning from coal, natural gas, hydro and renewable energy (RE). Four objective cases were considered: 1) economic cost, 2) environmental, 3) reliability and 4) a multi-objective case combining the three. Results show th...
Sensitivity Analysis of the Bone Fracture Risk Model
Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane
2017-01-01
environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model that cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability and preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information along with fracture outcome data will be incorporated into the FRI to fracture probability.
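The contribution-to-variance recipe described above (square each correlation coefficient, divide by the sum of squares, multiply by 100) is straightforward to sketch. The parameter names and the synthetic linear response below are hypothetical stand-ins for the BFxRM samples:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(2)
n = 5000
# Hypothetical sampled model parameters (standardized) ...
spring_k = [random.gauss(0.0, 1.0) for _ in range(n)]
midpoint = [random.gauss(0.0, 1.0) for _ in range(n)]
bmd = [random.gauss(0.0, 1.0) for _ in range(n)]
# ... and a synthetic fracture-probability response they drive
prob = [3.0 * a + 2.0 * b + 1.0 * c + random.gauss(0.0, 0.1)
        for a, b, c in zip(spring_k, midpoint, bmd)]

params = {"spring_k": spring_k, "midpoint": midpoint, "bmd": bmd}
r2 = {k: pearson(v, prob) ** 2 for k, v in params.items()}
total = sum(r2.values())
# Percent contribution to variance, exactly as described in the abstract
contribution = {k: 100.0 * v / total for k, v in r2.items()}
```

By construction the strongest driver dominates the variance budget, mirroring how the spring constant and midpoint FRI parameter emerged from the real analysis.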
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on CPU-intensive computer codes, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' makes it possible to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' makes it possible to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.
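A variance-based (Sobol) first-order index of the kind discussed above can be estimated with the classic pick-freeze Monte Carlo scheme. The toy additive model below, with analytical S1 = 0.8, is illustrative only and is not the paper's metamodel approach:

```python
import random
import statistics

def model(x1, x2):
    # Toy additive model with U(0,1) inputs; analytically S1 = 0.8
    return 2.0 * x1 + 1.0 * x2

random.seed(3)
n = 20000
A = [(random.random(), random.random()) for _ in range(n)]
B = [(random.random(), random.random()) for _ in range(n)]

yA = [model(x1, x2) for x1, x2 in A]
# "Pick-freeze": keep x1 from sample A, take the other input from B
yC1 = [model(a[0], b[1]) for a, b in zip(A, B)]

mean = statistics.fmean(yA)
var = statistics.pvariance(yA)
# First-order Sobol index of x1 (Sobol/Saltelli estimator)
S1 = (statistics.fmean(ya * yc for ya, yc in zip(yA, yC1)) - mean ** 2) / var
```

The estimator converges to Var(E[Y|X1])/Var(Y); for functional inputs, the paper's dispersion model plays the role that this direct sampling plays for scalar inputs.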
A Sensitivity Study of the Navier-Stokes-α Model
Breckling, Sean; Neda, Monika
2017-11-01
We present a sensitivity study of the Navier-Stokes-α model (NS-α) with respect to perturbations of the differential filter length α. Parameter sensitivity is evaluated using the sensitivity equations method. Once formulated, the sensitivity equations are discretized and computed alongside the NS-α model using the same finite elements in space and Crank-Nicolson in time. We provide a complete stability analysis of the scheme, along with sensitivity results for several benchmark problems in both 2D and 3D. We further demonstrate a practical technique to determine the reliability of the NS-α model in problem-specific settings. Lastly, we investigate the sensitivity and reliability of important functionals of the velocity and pressure solutions.
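The sensitivity equations method paired with Crank-Nicolson time stepping can be illustrated on a scalar model problem: for u' = -αu, the sensitivity s = ∂u/∂α satisfies the companion equation s' = -αs - u, solved alongside u. This sketch uses a scalar ODE only (no finite elements, nothing from the NS-α system):

```python
import math

# Solve u' = -alpha*u together with its sensitivity equation
# s' = d/d(alpha) of (-alpha*u) = -alpha*s - u, both by Crank-Nicolson.
alpha, dt, steps = 2.0, 0.001, 1000
u, s = 1.0, 0.0                      # u(0) = 1, s(0) = 0
for _ in range(steps):
    # (u+ - u)/dt = -alpha*(u+ + u)/2
    u_new = u * (1 - 0.5 * dt * alpha) / (1 + 0.5 * dt * alpha)
    # (s+ - s)/dt = -alpha*(s+ + s)/2 - (u+ + u)/2
    s_new = (s * (1 - 0.5 * dt * alpha)
             - 0.5 * dt * (u_new + u)) / (1 + 0.5 * dt * alpha)
    u, s = u_new, s_new

T = dt * steps                               # final time 1.0
err_u = abs(u - math.exp(-alpha * T))        # exact u(T) = e^(-alpha*T)
err_s = abs(s + T * math.exp(-alpha * T))    # exact s(T) = -T e^(-alpha*T)
```

The computed sensitivity tracks the exact derivative to second order in dt, the same order as the underlying Crank-Nicolson solve.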
Impact of sophisticated fog spray models on accident analyses
International Nuclear Information System (INIS)
Roblyer, S.P.; Owzarski, P.C.
1978-01-01
The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated, removing some of the conservatism in the steam condensing rate, fission product washout and iodine plateout used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I2) and particulates in multiple compartments, for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated.
Sensitivity of SBLOCA analysis to model nodalization
International Nuclear Information System (INIS)
Lee, C.; Ito, T.; Abramson, P.B.
1983-01-01
The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during an SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that the resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery.
Models of Compensation (MODCOMP): Policy Analyses and Unemployment Effects
National Research Council Canada - National Science Library
Golan, Amos; Blackstone, Tanja F; Cashbagh, David M
2008-01-01
The overall objective of this research is to analyze the impact of wage and bonus increases on enlisted personnel as well as personnel behavior over time and sensitivity to the macroeconomic conditions...
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)
2014-10-01
This report summarizes the results of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the responses to the associated modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
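A common simplified form of gap conductance, conduction across the gas gap plus optional contact and radiation terms, shows why the thermal response is sensitive to the gap parameters. This sketch is illustrative only and is not BISON's actual model, which also includes temperature jump distances and surface roughness:

```python
def gap_conductance(k_gas, gap_width, h_contact=0.0, h_rad=0.0):
    """Simplified gap conductance (W/m^2/K): conduction across the gas
    gap plus optional contact and radiation terms. Illustrative only."""
    return k_gas / gap_width + h_contact + h_rad

# Helium-filled gap, illustrative values: k ~ 0.3 W/m/K, 50 micron gap
h_nom = gap_conductance(k_gas=0.3, gap_width=50e-6)
# Finite-difference sensitivity to a +10% gap-width perturbation
h_pert = gap_conductance(k_gas=0.3, gap_width=55e-6)
rel_sens = (h_pert - h_nom) / h_nom / 0.1    # ~ -1, since h ~ 1/gap
```

The near-unity relative sensitivity to gap width is the kind of result a quantitative sensitivity study of this model makes precise.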
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard
We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.
Yao, Lijun; Lyu, Ning; Chen, Jiemei; Pan, Tao; Yu, Jing
2016-04-01
The development of a small, dedicated near-infrared (NIR) spectrometer has promising potential applications, such as for joint analyses of total cholesterol (TC) and triglyceride (TG) in human serum for preventing and treating hyperlipidemia in a large population. Appropriate wavelength selection is a key technology for developing such a spectrometer. For this reason, a novel wavelength selection method, named equidistant combination partial least squares (EC-PLS), was applied to the wavelength selection for the NIR analyses of TC and TG in human serum. A rigorous process based on various divisions of calibration and prediction sets was performed to achieve modeling optimization with stability. By applying EC-PLS, a model set was developed, which consists of various models that were equivalent to the optimal model. The joint analysis model of the two indicators was further selected with only 50 wavelengths. The random validation samples excluded from the modeling process were used to validate the selected model. The root-mean-square errors, correlation coefficients and ratio of performance to deviation for the prediction were 0.197 mmol L-1, 0.985 and 5.6 for TC, and 0.101 mmol L-1, 0.992 and 8.0 for TG, respectively. The sensitivity and specificity for hyperlipidemia were 96.2% and 98.0%. These findings indicate high prediction accuracy and low model complexity. The proposed wavelength selection provided valuable references for the design of a small, dedicated spectrometer for hyperlipidemia. The methodological framework and optimization algorithm are universal, such that they can be applied to other fields.
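The reported figures of merit (RMSE, correlation, and ratio of performance to deviation) can be computed as below; the sample concentration values are hypothetical, not the study's data:

```python
import statistics

def rmse(pred, ref):
    """Root-mean-square error of predictions against reference values."""
    return (sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)) ** 0.5

def rpd(pred, ref):
    """Ratio of performance to deviation: SD of reference / RMSE."""
    return statistics.stdev(ref) / rmse(pred, ref)

# Hypothetical reference vs. predicted concentrations (mmol/L)
ref = [4.1, 5.2, 3.8, 6.0, 4.9, 5.5]
pred = [4.0, 5.3, 3.9, 5.9, 5.0, 5.4]
error = rmse(pred, ref)
performance = rpd(pred, ref)
```

An RPD well above 3 generally indicates a calibration fit for quantitative use, which is why the reported values of 5.6 and 8.0 support the small-spectrometer design goal.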
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, P.M.; Madsen, Kristoffer H; Lund, T.E.
on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher
Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models
Jones, William T.; Lazzara, David; Haimes, Robert
2010-01-01
The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.
International Nuclear Information System (INIS)
Nichols, W.E.; Freshley, M.D.
1991-10-01
This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs
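The van Genuchten moisture-retention relation named above, with the usual Mualem constraint m = 1 - 1/n, can be sketched as follows (parameter values illustrative, not the PACE-90 inputs):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric moisture content at pressure head h (m); h < 0 when
    unsaturated. Uses the Mualem constraint m = 1 - 1/n."""
    if h >= 0.0:
        return theta_s                      # saturated
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

# Illustrative parameters: residual/saturated contents and alpha in 1/m
theta = van_genuchten_theta(h=-100.0, theta_r=0.05, theta_s=0.35,
                            alpha=0.01, n=2.0)
```

The curve-fitting parameters alpha and n are exactly the two van Genuchten inputs whose uncertainty the report propagates into the travel-time distribution.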
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
Modeling retinal high and low contrast sensitivity filters
Lourens, T; Mira, J; Sandoval, F
1995-01-01
In this paper two types of ganglion cells in the visual system of mammals (monkey) are modeled. A high contrast sensitive type, the so called M-cells, which project to the two magno-cellular layers of the lateral geniculate nucleus (LGN) and a low sensitive type, the P-cells, which project to the
Climate stability and sensitivity in some simple conceptual models
Energy Technology Data Exchange (ETDEWEB)
Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)
2012-02-15
A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 x CO{sub 2} sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
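The ZDM logic described above reduces to C dT/dt = F - λT, with equilibrium response ΔT = F/λ and stability requiring λ > 0. A sketch using the canonical 2 x CO2 forcing of about 3.7 W m⁻² (the λ value is illustrative, not from the paper):

```python
def equilibrium_response(forcing, lam):
    """Zero-dimensional model C*dT/dt = F - lam*T: equilibrium T = F/lam.
    Stability requires lam > 0 (net radiative response opposes warming)."""
    if lam <= 0.0:
        raise ValueError("unstable climate state: lam must be positive")
    return forcing / lam

F_2xCO2 = 3.7                                 # W/m^2, canonical forcing
dT = equilibrium_response(F_2xCO2, lam=1.2)   # lam illustrative, W/m^2/K
```

In the two-zone Model B the single λ splits into separate tropical and extratropical coefficients plus a transport term, which is where the paper's asymmetric, sometimes counterintuitive responses arise.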
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1991-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
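The direct (forward) derivative propagation that GRESS instruments into FORTRAN codes can be illustrated with a dual-number sketch: each value carries its first derivative through arithmetic. This mirrors the idea, not the GRESS implementation:

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers: each
    value carries its first derivative through + and *."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))              # seed dx/dx = 1 at x = 2
```

One forward pass yields both the model result (y.val) and its normalized first derivative with respect to the seeded input (y.der), the same quantities GRESS-enhanced codes report.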
Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling
Energy Technology Data Exchange (ETDEWEB)
Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)
2015-02-17
We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.
Energy Technology Data Exchange (ETDEWEB)
Suolanen, V.; Ilvonen, M. [VTT Energy, Espoo (Finland). Nuclear Energy
1998-10-01
Computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and food stuffs, body burden analyses of humans, etc. In the earlier study on this subject, modernization of the user interface of the DETRA code was carried out. This new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed by DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m{sup 2}). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant season modelling part. These two sequences were linked periodically, simulating the realistic practice of care taking of domestic animals in Finland. The most important parameters were varied in this exercise. The performed diversifying of the user interface of DETRA code seems to provide an
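A probabilistic run over a compartment-model caricature, using a triangular distribution of the kind DETRA offers, can be sketched as follows. The transfer model, units, and parameter ranges are hypothetical, not DETRA's:

```python
import math
import random

def milk_activity(deposition, f_transfer, lam_eff, days):
    """One-compartment caricature of a transfer chain: deposited activity
    (Bq/m^2) decays with effective rate lam_eff (1/d); a hypothetical
    transfer factor f_transfer maps it into milk activity."""
    return deposition * f_transfer * math.exp(-lam_eff * days)

random.seed(4)
results = []
for _ in range(500):
    # Triangular distribution (low, high, mode) for the transfer factor
    f = random.triangular(0.002, 0.01, 0.005)
    results.append(milk_activity(10000.0, f, lam_eff=0.05, days=10.0))

results.sort()
median = results[len(results) // 2]   # one summary point of the CCDF
```

Sorting the sampled outputs is all that is needed to read off density and complementary cumulative distribution summaries of the monitored quantity, as in a DETRA sensitivity run.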
Sensitivity study of reduced models of the activated sludge process ...
African Journals Online (AJOL)
2009-08-07
order to fit the reduced model behaviour to the real data for the process behaviour. Keywords: wastewater treatment, activated sludge process, reduced model, model parameters, sensitivity function, Matlab simulation. Introduction. The problem of effective and optimal control of wastewater treatment plants ...
Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)
1993-12-01
This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10{sup {minus}2} to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), and non-dispersive infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well-defined and validated mechanisms for the CO/H{sub 2}/oxidant systems.
Integrated Process Model Development and Systems Analyses for the LIFE Power Plant
Energy Technology Data Exchange (ETDEWEB)
Meier, W R; Anklam, T; Abbott, R; Erlandson, A; Halsey, W; Miles, R; Simon, A J
2009-07-15
We have developed an integrated process model (IPM) for a Laser Inertial Fusion-Fission Energy (LIFE) power plant. The model includes cost and performance algorithms for the major subsystems of the plant, including the laser, fusion target fabrication and injection, fusion-fission chamber (including the tritium and fission fuel blankets), heat transfer and power conversion systems, and other balance of plant systems. The model has been developed in Visual Basic with an Excel spreadsheet user interface in order to allow experts in various aspects of the design to easily integrate their individual modules and provide a convenient, widely accessible platform for conducting the system studies. Subsystem modules vary in level of complexity; some are based on top-down scaling from fission power plant costs (for example, electric plant equipment), while others are bottom-up models based on conceptual designs being developed by LLNL (for example, the fusion-fission chamber and laser systems). The IPM is being used to evaluate design trade-offs, do design optimization, and conduct sensitivity analyses to identify high-leverage areas for R&D. We describe key aspects of the IPM and report on the results of our systems analyses. Designs are compared and evaluated as a function of key design variables such as fusion target yield and pulse repetition rate.
Sex and smoking sensitive model of radon induced lung cancer
International Nuclear Information System (INIS)
Zhukovsky, M.; Yarmoshenko, I.
2006-01-01
Inhalation exposure to radon and radon progeny is recognized to cause lung cancer. The only strong evidence of radon exposure health effects came from epidemiological studies among underground miners; no single epidemiological study among the general population has found a reliable lung cancer risk due to indoor radon exposure. Indoor radon induced lung cancer risk models were developed exclusively by extrapolation of miner data. Meta-analyses of indoor radon and lung cancer case-control studies allowed only little improvement in approaches to radon-induced lung cancer risk projections. Valuable data on the characteristics of indoor radon health effects could be obtained from systematic analysis of pooled data from single residential radon studies; two such analyses were recently published. Available new and previous data from epidemiological studies of workers and of the general population exposed to radon and other sources of ionizing radiation allow filling gaps in knowledge of the association of lung cancer with indoor radon exposure. A model of lung cancer induced by indoor radon exposure is suggested. The key point of this model is the assumption that the excess relative risk depends on both the sex and the smoking habits of the individual. This assumption is based on data on occupational exposure to radon and plutonium, on data on external radiation exposure in Hiroshima and Nagasaki, and on data on external exposure at the Mayak nuclear facility. For non-corrected data of the pooled European and North American studies, an increased sensitivity of females to radon exposure is observed. The mean value of ks for non-corrected data obtained from an independent source is in very good agreement with the LSS study and the Mayak plutonium worker data. Analysis of corrected data of the pooled studies showed little influence of sex on the ERR value. The most probable cause of this effect is the change of the men/women and smokers/nonsmokers ratios in the corrected data sets of the North American study. More correct
Hornberger, G. M.; Rastetter, E. B.
1982-01-01
A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte Carlo methods), sensitivity ranking of parameters, and extension to control system design.
Improved analyses using function datasets and statistical modeling
John S. Hogland; Nathaniel M. Anderson
2014-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...
An electrodynamic model to analyse field emission thrusters
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E.; Del Zoppo, R.; Venturini, G.
1987-12-01
After a short description of the working principle of field emission thrusters, a surface emission electrodynamic model, capable of describing the required propulsive effects, is shown. The model, developed according to cylindrical geometry, provides one-dimensional differential relations and, therefore, easy resolution. The characteristic curves obtained are graphed. Comparison with experimental data confirms the validity of the proposed model.
Directory of Open Access Journals (Sweden)
Cai-Jun Wu
2015-01-01
Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed on gender, preparation time, the amplitude spectrum area (AMSA) at the beginning of CPR, and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on preparation time, AMSA at the beginning of CPR, and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: Preparation time, AMSA and pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
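The ROC evaluation reported above can be reproduced in miniature: the AUC follows from the rank (Mann-Whitney) formula, and the cut-off from maximising Youden's J (sensitivity + specificity - 1). The AMSA values and outcomes below are invented, not the study's data.

```python
# Sketch of an ROC analysis: AUC by the rank formula, cut-off by Youden's J.
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Probability that a random positive outranks a random negative.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_cutoff(scores, labels):
    best, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= c)
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < c)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < c)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0   # Youden's J
        if j > best_j:
            best, best_j = c, j
    return best

amsa = [21.0, 18.5, 16.1, 15.7, 14.9, 12.3, 10.8, 9.4]   # hypothetical values
rosc = [1,    1,    1,    1,    0,    0,    1,    0]      # hypothetical outcomes
print(roc_auc(amsa, rosc), best_cutoff(amsa, rosc))
```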
International Nuclear Information System (INIS)
Amorim, E.S. do; Castro Lobo, P.D. de.
1980-11-01
A reduction of computing effort was achieved by applying space-independent continuous slowing-down theory to the spectrum-averaged cross sections and then expressing them in a quadratic correlation with temperature and composition. The decoupling between variables that express some of the important nuclear characteristics allowed the introduction of a sensitivity-analysis treatment for the full prediction of the behavior, over the fuel cycle, of the LMFBR considered. A potential application of the method developed here is to predict the nuclear characteristics of another reactor relative to a reference reactor of the family considered. Excellent agreement with exact calculations is observed only when perturbations occur in nuclear data and/or fuel isotopic characteristics, but fair results are obtained with variations in system components other than the fuel. (Author) [pt
International Nuclear Information System (INIS)
Spiessl, Sabine; Becker, Dirk-Alexander
2017-06-01
Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Along with the increase in computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
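A minimal sketch of the procedure this abstract describes: Monte Carlo propagation of parameter uncertainty through a toy food-chain model, followed by a partial-correlation ranking of the parameters against the output. The model form, parameter names and distributions are invented; this is not the PATHWAY code.

```python
# Sketch: Monte Carlo uncertainty propagation plus partial-correlation
# sensitivity ranking, on an invented multiplicative food-chain model.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Uncertain transfer parameters (hypothetical lognormal uncertainties).
interception = rng.lognormal(np.log(0.3), 0.3, n)    # pasture interception
feed_to_milk = rng.lognormal(np.log(0.01), 0.5, n)   # feed-to-milk transfer
weathering   = rng.lognormal(np.log(0.05), 0.2, n)   # weathering removal rate

# Toy model: milk concentration per unit deposition.
milk = interception * feed_to_milk / weathering

def partial_corr(X):
    """Partial correlation matrix from the inverse correlation matrix."""
    P = np.linalg.inv(np.corrcoef(X))
    d = np.sqrt(np.diag(P))
    return -P / np.outer(d, d)

X = np.vstack([interception, feed_to_milk, weathering, milk])
pc_with_output = partial_corr(X)[:3, 3]   # each parameter vs. model output
print(np.round(pc_with_output, 2))
```

The same Monte Carlo sample serves both the uncertainty estimate (the spread of `milk`) and the sensitivity ranking, exactly as the abstract notes.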
Model dependence of isospin sensitive observables at high densities
International Nuclear Information System (INIS)
Guo, Wen-Mei; Yong, Gao-Chan; Wang, Yongjia; Li, Qingfeng; Zhang, Hongfei; Zuo, Wei
2013-01-01
Within two different frameworks of isospin-dependent transport model, i.e., the Boltzmann-Uehling-Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron to proton ratio of free nucleons, the π-/π+ ratio, as well as the isospin-sensitive transverse and elliptic flows given by the two transport models with their “best settings” all have obvious differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, and the discrepancies in the charged π-/π+ ratio and the isospin-sensitive flows mainly originate from different isospin-dependent nucleon-nucleon cross sections. These demonstrations call for more detailed studies on the inputs of the isospin-dependent transport models used (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon-nucleon cross section). Studies of the model dependence of isospin-sensitive observables can help nuclear physicists to pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations
A didactic Input-Output model for territorial ecology analyses
Garry Mcdonald
2010-01-01
This report describes a didactic input-output modelling framework created jointly by the team at REEDS, Universite de Versailles, and Dr Garry McDonald, Director, Market Economics Ltd. There are three key outputs associated with this framework: (i) a suite of didactic input-output models developed in Microsoft Excel, (ii) a technical report (this report) which describes the framework and the suite of models, and (iii) a two-week intensive workshop dedicated to the training of REEDS researcher...
Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.
Kooi, B W; Poggiale, J C
2018-04-20
Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM model, the resource for the prey is modelled explicitly in the MB model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation, where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales lead to the well-known degenerate trajectories: concatenations of slow parts and fast parts of the trajectory. In the fast-slow version of the RM model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not previously been observed for the RM model, nor for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude, to a large-amplitude shape similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameters simultaneously where the small
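The slow-fast Rosenzweig-MacArthur system discussed above can be sketched by putting a small factor eps on the predator equation; for eps well below one, the stable limit cycle approaches the relaxation-oscillation shape. All parameter values below are illustrative only, not taken from the paper.

```python
# Sketch of a fast-slow Rosenzweig-MacArthur system integrated with RK4.
# eps << 1 makes the predator slow relative to the prey.
import numpy as np

def rm_rhs(z, eps=0.1, r=1.0, K=1.0, a=5.0, h=1.0, e=0.5, m=0.2):
    x, y = z                                     # prey, predator
    fr = a * x / (1.0 + a * h * x)               # Holling type II response
    return np.array([r * x * (1.0 - x / K) - fr * y,
                     eps * (e * fr - m) * y])    # predator dynamics slowed by eps

def rk4(f, z0, dt, steps):
    z = np.array(z0, dtype=float)
    traj = np.empty((steps + 1, 2))
    traj[0] = z
    for i in range(steps):
        k1 = f(z); k2 = f(z + dt / 2 * k1)
        k3 = f(z + dt / 2 * k2); k4 = f(z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = z
    return traj

traj = rk4(rm_rhs, [0.5, 0.2], 0.02, 50000)
late = traj[25000:, 0]        # discard the transient, keep the prey series
print(round(late.min(), 3), round(late.max(), 3))
```

With these (illustrative) parameters the interior equilibrium sits left of the hump of the prey nullcline, so the equilibrium is unstable and the trajectory settles on a large-amplitude cycle.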
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
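The SRRC computation described here is short to sketch: rank-transform the inputs and the output, standardize the ranks, and fit ordinary least squares; the resulting coefficients are the sensitivities. The toy model below, with one deliberately inert input, stands in for the IMM.

```python
# Sketch of Standardized Rank Regression Coefficients (SRRC) on a toy model.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
x3 = rng.uniform(0, 1, n)
y = np.exp(3 * x1) + 10 * x2 + rng.normal(0, 0.1, n)   # x3 is inert

def ranks(v):
    """0..n-1 rank transform via double argsort."""
    return np.argsort(np.argsort(v)).astype(float)

# Standardize the ranks, then ordinary least squares on them.
R = np.column_stack([ranks(x) for x in (x1, x2, x3)])
R = (R - R.mean(0)) / R.std(0)
ry = ranks(y)
ry = (ry - ry.mean()) / ry.std()
srrc, *_ = np.linalg.lstsq(R, ry, rcond=None)
print(np.round(srrc, 2))   # x1 and x2 matter, x3 does not
```

Because only ranks enter, the strongly nonlinear (but monotone) exp term is handled as naturally as the linear one, which is the point the abstract makes.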
Analysing the Linux kernel feature model changes using FMDiff
Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.
2015-01-01
Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The
Analysing Models as a Knowledge Technology in Transport Planning
DEFF Research Database (Denmark)
Gudmundsson, Henrik
2011-01-01
Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance...
Sensitivity-based research prioritization through stochastic characterization modeling
DEFF Research Database (Denmark)
Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter
2018-01-01
Product developers using life cycle toxicity characterization models to understand the potential impacts of chemical emissions face serious challenges related to large data demands and high input data uncertainty. This motivates greater focus on model sensitivity toward input parameter variability to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according to parameter influence on characterization factors (CFs). Proof of concept is illustrated with the UNEP-SETAC scientific consensus model USEtox.
GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES
International Nuclear Information System (INIS)
Hansen, P.N.
2004-01-01
This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effect of wind, elevation and thermal impacts on pressure conditions. The model includes a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model the heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to determine the time during which the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions, the release to the environment can be credited as a filtered release
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
Modelling flow through unsaturated zones: Sensitivity to unsaturated ...
Indian Academy of Sciences (India)
A numerical model to simulate moisture flow through unsaturated zones is developed using the finite element method, and is validated by comparing the model results with those available in the literature. The sensitivities of different processes such as gravity drainage and infiltration to the variations in the unsaturated soil ...
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
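The tutorial's core idea, running the simulation at the points of a two-level design and reading each factor's effect from the resulting regression metamodel, fits in a short sketch. The "simulation" below is a stand-in function, and the design is a 2^3 full factorial in coded units.

```python
# Sketch: 2-level full factorial design plus main-effect estimation,
# the first-order regression metamodel of a stand-in "simulation".
from itertools import product

def simulation(a, b, c):
    """Stand-in for an expensive simulation model."""
    return 4.0 * a - 2.0 * b + 0.5 * a * b + 0.0 * c

design = list(product([-1, 1], repeat=3))        # 2^3 factorial, coded units
y = [simulation(*point) for point in design]

# Main-effect estimate per factor: mean response at +1 minus at -1, halved.
effects = []
for j in range(3):
    hi = sum(yi for p, yi in zip(design, y) if p[j] == 1) / 4
    lo = sum(yi for p, yi in zip(design, y) if p[j] == -1) / 4
    effects.append((hi - lo) / 2)

print(effects)   # [4.0, -2.0, 0.0]: factor c has no effect
```

Because the design is orthogonal, the a*b interaction averages out of the main-effect estimates, which is why designed experiments beat one-factor-at-a-time runs.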
Modelling flow through unsaturated zones: Sensitivity to unsaturated ...
Indian Academy of Sciences (India)
MS received 13 October 1997; revised 20 November 2001. Abstract. A numerical model to simulate moisture flow through unsaturated zones is developed using the finite element method, and is validated by comparing the model results with those available in the literature. The sensitivities of different processes such as ...
Quantifying uncertainty and sensitivity in sea ice models
Energy Technology Data Exchange (ETDEWEB)
Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-15
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
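The variance-based index behind such an analysis, Si = Var(E[Y|Xi]) / Var(Y), can be estimated by brute-force conditional averaging on a toy two-input function; the sea ice model itself is of course far too expensive for this naive double loop, which is why dedicated estimators exist.

```python
# Sketch of a first-order variance-based sensitivity index on a toy function.
import numpy as np

rng = np.random.default_rng(7)

def model(x1, x2):
    return x1 + 0.2 * np.sin(6 * x2)     # x1 dominates the output variance

def first_order_index(which, n_outer=500, n_inner=500):
    """Si = Var(E[Y|Xi]) / Var(Y), by conditional averaging."""
    fixed = rng.uniform(0, 1, n_outer)
    cond_means = []
    for v in fixed:
        x1 = np.full(n_inner, v) if which == 0 else rng.uniform(0, 1, n_inner)
        x2 = np.full(n_inner, v) if which == 1 else rng.uniform(0, 1, n_inner)
        cond_means.append(model(x1, x2).mean())
    a, b = rng.uniform(0, 1, 20000), rng.uniform(0, 1, 20000)
    return np.var(cond_means) / np.var(model(a, b))

s1, s2 = first_order_index(0), first_order_index(1)
print(round(s1, 2), round(s2, 2))
```

For this function the analytic shares are roughly 0.8 for x1 and 0.2 for x2, so the estimate directly reproduces the "which parameter carries the variance" ranking the abstract describes.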
Sensitive analysis of a finite element model of orthogonal cutting
Brocail, J.; Watremez, M.; Dubar, L.
2011-01-01
This paper presents a two-dimensional finite element model of orthogonal cutting. The proposed model has been developed with the Abaqus/Explicit software. An Arbitrary Lagrangian-Eulerian (ALE) formulation is used to predict chip formation, temperature, chip-tool contact length, chip thickness, and cutting forces. This numerical model of orthogonal cutting is validated by comparing these process variables to the experimental and numerical results obtained by Filice et al. [1]. The model can be considered reliable enough for qualitative analysis of the entry parameters related to the cutting process and the friction model. A sensitivity analysis is conducted on the main entry parameters (coefficients of the Johnson-Cook law, and contact parameters) with the finite element model, using two levels for each factor. This sensitivity analysis of the entry parameters has allowed the identification of the significant parameters and of their margins.
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
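A first-order (delta-method) version of the analytic approach described above shows concretely how input correlations enter: Var(y) is approximately g'Σg, with g the model gradient at the nominal point and Σ the input covariance matrix. The toy model and all numbers below are invented for illustration.

```python
# Sketch: first-order analytic uncertainty propagation with correlated inputs.
import numpy as np

def model(x):
    return x[0] * x[1] + 2.0 * x[2]          # toy model

def gradient(x):
    return np.array([x[1], x[0], 2.0])       # analytic gradient of the model

x0 = np.array([1.0, 2.0, 3.0])               # nominal input values
sd = np.array([0.1, 0.2, 0.1])               # input standard deviations
rho12 = 0.8                                  # correlation between x1 and x2
corr = np.array([[1.0,   rho12, 0.0],
                 [rho12, 1.0,   0.0],
                 [0.0,   0.0,   1.0]])
cov = np.outer(sd, sd) * corr                # covariance from sd and correlation

g = gradient(x0)
var_corr = g @ cov @ g                       # with the correlation
var_indep = g @ np.diag(sd**2) @ g           # ignoring the correlation
print(round(var_corr, 3), round(var_indep, 3))
```

Comparing the two variances quantifies exactly the question the abstract raises: whether neglecting the input correlation materially changes the predicted output uncertainty (here it inflates it by more than 50%).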
Plasma-safety assessment model and safety analyses of ITER
International Nuclear Information System (INIS)
Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.
2001-01-01
A plasma-safety assessment model has been developed on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events involving plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code; plasma burning was found to be either self-bounded by operational limits or passively shut down by impurity ingress from overheated divertor targets. A sudden transition of the divertor plasma might lead to failure of the divertor target because of a sharp increase in the heat flux; however, the effects of such an aggravated failure can be safely handled by the confinement boundaries. (author)
Modeling theoretical uncertainties in phenomenological analyses for particle physics
Energy Technology Data Exchange (ETDEWEB)
Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)
2017-04-15
The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
Sensitivity Analysis of the TRIGA IPR-R1 Reactor Models Using the MCNP Code
Directory of Open Access Journals (Sweden)
C. A. M. Silva
2014-01-01
In the process of verification and validation of code modelling, sensitivity analysis, including systematic variations in code input variables, must be used to help identify the parameters relevant to a given type of analysis. The aim of this work is to determine how strongly the code results are affected by two different approaches to modelling the TRIGA IPR-R1 reactor with the MCNP (Monte Carlo N-Particle Transport) code. The sensitivity analyses covered small differences in the core and rod dimensions and different levels of model detail. Four models were simulated, and neutronic parameters such as the effective multiplication factor (keff), reactivity (ρ), and thermal and total neutron flux in the central thimble under several reactor operating conditions were analysed. The simulated models agreed well with one another, as well as with available experimental data. The sensitivity analyses thus demonstrated that simulations of the TRIGA IPR-R1 reactor can be performed with any of the four investigated MCNP models to obtain the referenced neutronic parameters.
Compound dislocation models (CDMs) for volcano deformation analyses
Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau
2017-02-01
Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.
Energy Technology Data Exchange (ETDEWEB)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
2017-08-01
In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for deriving engineering parameters, such as probable maximum precipitation, that are the cornerstone of large water-management infrastructure design. The framework was built around a heavy storm that occurred in Nashville (USA) in 2010 and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolution, initial/boundary conditions (IC/BC), cloud microphysics, and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the choice of IC/BC. Simulations generally benefit from finer resolution down to 5 km. At 15 km, the NCEP2 IC/BC produce better results, while the NAM IC/BC perform best at 5 km. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics, and the Kain-Fritsch cumulus scheme. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This framework is proposed in response to emerging engineering demands for extreme-storm forecasting and analyses in the design, operation, and risk assessment of large water infrastructure.
Model analyses for sustainable energy supply under CO2 restrictions
International Nuclear Information System (INIS)
Matsuhashi, Ryuji; Ishitani, Hisashi.
1995-01-01
This paper aims at clarifying the key points for realizing a sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand over the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results show that, with appropriate incentives, the present energy system based on non-renewable resources shifts in the ultra-long term to a system based on renewable resources.
Vegetable parenting practices scale: Item response modeling analyses
Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...
A Hamiltonian approach to model and analyse networks of ...
Indian Academy of Sciences (India)
2015-09-24
Over the past twelve years, ideas and methods from nonlinear dynamics system theory, in particular, group theoretical methods in bifurcation theory, have been ... In this manuscript, a review of the most recent work on modelling and analysis of two seemingly different systems, an array of gyroscopes and an ...
Gene Discovery and Functional Analyses in the Model Plant Arabidopsis
DEFF Research Database (Denmark)
Feng, Cai-ping; Mundy, J.
2006-01-01
The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...
Capacity allocation in wireless communication networks - models and analyses
Litjens, Remco
2003-01-01
This monograph has concentrated on capacity allocation in cellular and Wireless Local Area Networks, primarily from a network operator's perspective. In the introductory chapter, a reference model has been proposed for the extensive suite of capacity allocation mechanisms that can be applied at
Theoretical modeling and experimental analyses of laminated wood composite poles
Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse
2005-01-01
Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
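Such analyses are commonly carried out with normalized forward sensitivity indices, Υ_p = (∂R0/∂p)(p/R0). The sketch below applies this to a generic SIR-style R0 expression with made-up parameter values; it is NOT the Lassa model of the paper:

```python
# Normalized forward sensitivity index: Upsilon_p = (dR0/dp) * (p / R0).
# The R0 expression is a generic illustration (recruitment Lambda, transmission
# beta, natural death mu, recovery gamma), not the five-compartment Lassa model.

def R0(params):
    b, Lam, mu, gam = params["beta"], params["Lambda"], params["mu"], params["gamma"]
    return b * Lam / (mu * (mu + gam))

def sensitivity_index(f, params, name, h=1e-6):
    """Finite-difference estimate of the normalized index for one parameter."""
    p = dict(params)
    base = f(p)
    p[name] *= (1 + h)                      # small relative perturbation
    return (f(p) - base) / (params[name] * h) * params[name] / base

params = {"beta": 0.3, "Lambda": 10.0, "mu": 0.02, "gamma": 0.1}
indices = {k: sensitivity_index(R0, params, k) for k in params}

# Ranking by |index| identifies the parameters that most influence R0,
# which is how control strategies are prioritized.
ranked = sorted(indices, key=lambda k: abs(indices[k]), reverse=True)
```

An index of +1 for beta means a 10% rise in transmission raises R0 by 10%; negative indices (here gamma and mu) mark parameters whose increase reduces transmission.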
Complex accident scenarios modelled and analysed by Stochastic Petri Nets
International Nuclear Information System (INIS)
Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš
2015-01-01
This paper is focused on the usage of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. Such cases are hard to manage with traditional methods such as event trees - e.g., one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks representing parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can easily be interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, of which the advanced one exhibits dynamic behavior. - Highlights: • Event and fault trees have problems with scenarios where the order of events can vary. • The paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)
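A minimal token-game executor conveys why Petri nets handle order-independent events naturally. The scenario, place and transition names below are invented for illustration and are far simpler than the Stochastic Petri Nets with Predicates and Assertions used in the paper:

```python
# Minimal Petri-net executor: places hold tokens, a transition is enabled
# when all of its input places are marked. Firing consumes input tokens and
# produces output tokens.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = []                   # (name, inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions.append((name, inputs, outputs))

    def enabled(self):
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= 1 for p in t[1])]

    def fire(self, name):
        t = next(t for t in self.enabled() if t[0] == name)
        for p in t[1]:
            self.marking[p] -= 1
        for p in t[2]:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy accident scenario (hypothetical): a leak must be detected AND isolated
# before the "safe" place is marked. Each event is modelled once, regardless
# of where it occurs in the event chain -- the point made in the abstract.
net = PetriNet({"leak": 1, "detector_ok": 1, "valve_ok": 1})
net.add_transition("detect", ["leak", "detector_ok"], ["detected", "leak"])
net.add_transition("isolate", ["detected", "valve_ok"], ["safe"])

net.fire("detect")
net.fire("isolate")
```

In an event tree, the detect/isolate ordering would force duplicated branches; in the net, the marking alone determines what can fire next.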
A Formal Model to Analyse the Firewall Configuration Errors
Directory of Open Access Journals (Sweden)
T. T. Myo
2015-01-01
The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, resulting in a lower level of network security and in infiltration of the network by undesirable packets. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, following a security policy, controls packets passing through the borders of a secured network. The security policy is represented by a set of rules. Packet filters working in stateless mode investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the incoming traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out; it can be allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of the security policy in an informative graphic form and to reveal the inconsistencies present in the rules. The article presents a security-policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the results of rule actions on packets in a convenient graphic form and to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
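A drastically reduced, single-dimension version of the (condition, action) rule model can illustrate how one class of contradiction, shadowed (unreachable) rules, is detected. The port ranges and rule set below are hypothetical:

```python
# First-match filter over destination-port ranges -- a deliberately reduced,
# one-dimensional stand-in for the multi-field rule model in the article.

rules = [
    (0, 1023, "deny"),      # block privileged ports
    (80, 80, "allow"),      # unreachable: fully shadowed by the rule above
    (1024, 65535, "allow"),
]

def decide(port, rules, default="deny"):
    """First-match semantics: the first rule whose range covers the port wins."""
    for lo, hi, action in rules:
        if lo <= port <= hi:
            return action
    return default

def shadowed(rules):
    """Indices of rules that can never match because every port they cover
    is already claimed by some earlier rule (exact only for this 1-D model)."""
    out = []
    for i, (lo, hi, _) in enumerate(rules):
        covered = all(
            any(plo <= p <= phi for plo, phi, _ in rules[:i])
            for p in range(lo, hi + 1)
        )
        if i > 0 and covered:
            out.append(i)
    return out
```

The intended HTTP exception on port 80 never fires, which is exactly the kind of configuration error the article's equivalence-region visualization is meant to expose; real rules add source/destination addresses and protocol as further dimensions.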
The carbohydrate sensitive rat as a model of obesity.
Directory of Open Access Journals (Sweden)
Nachiket A Nadkarni
BACKGROUND: Sensitivity to obesity is highly variable in humans, and rats fed a high fat diet (HFD) are used as a model of this inhomogeneity. Energy expenditure components (basal metabolism, thermic effect of feeding, activity) and variations in substrate partitioning are possible factors underlying the variability. Unfortunately, in rats as in humans, results have often been inconclusive and measurements usually made after obesity onset, obscuring if metabolism was a cause or consequence. Additionally, the role of high carbohydrate diet (HCD) has seldom been studied. METHODOLOGY/FINDINGS: Rats (n=24) were fed for 3 weeks on HCD and then 3 weeks on HFD. Body composition was tracked by MRI and compared to energy expenditure components measured prior to obesity. RESULTS: (1) under HFD, as expected, by adiposity rats were variable enough to be separable into relatively fat resistant (FR) and sensitive (FS) groups; (2) under HCD, and again by adiposity, rats were also variable enough to be separable into carbohydrate resistant (CR) and sensitive (CS) groups, the normal body weight of CS rats hiding viscerally-biased fat accumulation; (3) HCD adiposity sensitivity was not related to that under HFD, and both HCD and HFD adiposity sensitivities were not related to energy expenditure components (BMR, TEF, activity cost); and (4) only carbohydrate-to-fat partitioning in response to an HCD test meal was related to HCD-induced adiposity. CONCLUSIONS/SIGNIFICANCE: The rat model of human obesity is based on substantial variance in adiposity gains under HFD (FR/FS model). Here, since we also found this phenomenon under HCD, where it was also linked to an identifiable metabolic difference, we should consider the existence of another model: the carbohydrate resistant (CR) or sensitive (CS) rat. This new model is potentially complementary to the FR/FS model due to relatively greater visceral fat accumulation on a low fat, high carbohydrate diet.
Analyses of homologous rotavirus infection in the mouse model.
Burns, J W; Krishnaney, A A; Vo, P T; Rouse, R V; Anderson, L J; Greenberg, H B
1995-02-20
The group A rotaviruses are significant human and veterinary pathogens in terms of morbidity, mortality, and economic loss. Despite its importance, an effective vaccine remains elusive due at least in part to our incomplete understanding of rotavirus immunity and protection. Both large and small animal model systems have been established to address these issues. One significant drawback of these models is the lack of well-characterized wild-type homologous viruses and their cell culture-adapted variants. We have characterized four strains of murine rotaviruses, EC, EHP, EL, and EW, in the infant and adult mouse model using wild-type isolates and cell culture-adapted variants of each strain. Wild-type murine rotaviruses appear to be equally infectious in infant and adult mice in terms of the intensity and duration of virus shedding following primary infection. Spread of infection to naive cagemates is seen in both age groups. Clearance of shedding following primary infection appears to correlate with the development of virus-specific intestinal IgA. Protective immunity is developed in both infant and adult mice following oral infection as demonstrated by a lack of shedding after subsequent wild-type virus challenge. Cell culture-adapted murine rotaviruses appear to be highly attenuated when administered to naive animals and do not spread efficiently to nonimmune cagemates. The availability of these wild-type and cell culture-adapted virus preparations should allow a more systematic evaluation of rotavirus infection and immunity. Furthermore, future vaccine strategies can be evaluated in the mouse model using several fully virulent homologous viruses for challenge.
Analysing the Competency of Mathematical Modelling in Physics
Redish, Edward F.
2016-01-01
A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unso...
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.; Navarro Jimenez, Maria I.; Merks, Roeland M. H.; Blom, Joke G.
2015-11-21
Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
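A global, variance-based sensitivity analysis of a 'black-box' model can be sketched with a pick-freeze estimator of first-order Sobol indices. The sketch below uses the standard Ishigami test function as the black box rather than the cellular Potts model of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def ishigami(x, a=7.0, b=0.1):
    # Standard analytic test function for sensitivity analysis (a stand-in
    # for an expensive black-box model such as a CPM simulation).
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def first_order_indices(f, d, n=100_000):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices,
    with independent uniform inputs on [-pi, pi]^d."""
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    yA = f(A)
    var = yA.var()
    S = []
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]          # freeze factor i, resample the rest
        yC = f(C)
        S.append((np.mean(yA * yC) - yA.mean() * yC.mean()) / var)
    return np.array(S)

S = first_order_indices(ishigami, d=3)
```

The estimates recover the known analytic values (S1 ≈ 0.31, S2 ≈ 0.44, S3 = 0); the gap between the sum of first-order indices and 1 signals the parameter interactions that the abstract highlights, and S3 ≈ 0 flags a candidate for model reduction.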
Sensitivity of a Simulated Derecho Event to Model Initial Conditions
Wang, Wei
2014-05-01
Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for twice-daily forecasts at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we examine the forecast's sensitivity to different model initial conditions and try to understand the important features that may contribute to the success of the forecast.
A workflow model to analyse pediatric emergency overcrowding.
Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie
2014-01-01
The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission. It represents 70% of the time these patients spend in the PED waiting rooms. Our objective in this study is to identify strain indicators and bottlenecks that contribute to overcrowding. Patient flow through the PED was mapped over a continuous two-year period from January 2011 to December 2012. Our method uses real data, collected from actual visits to the PED of the Regional University Hospital Center (CHRU) of Lille (France), to construct an accurate and complete representation of the PED processes. The result is a workflow model of the patient journey that represents as faithfully as possible the reality of the PED of the CHRU of Lille. This model allowed us to identify sources of delay in patient flow and aspects of PED activity that could be improved. It must be detailed enough to support an analysis that identifies the dysfunctions of the PED and to propose and evaluate strain-prevention indicators. Our study is part of the French National Research Agency project titled "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).
Genomic, Biochemical, and Modeling Analyses of Asparagine Synthetases from Wheat
Directory of Open Access Journals (Sweden)
Hongwei Xu
2018-01-01
Asparagine synthetase activity in cereals has become an important issue with the discovery that free asparagine concentration determines the potential for formation of acrylamide, a probably carcinogenic processing contaminant, in baked cereal products. Asparagine synthetase catalyses the ATP-dependent transfer of the amino group of glutamine to a molecule of aspartate to generate glutamate and asparagine. Here, asparagine synthetase-encoding polymerase chain reaction (PCR) products were amplified from wheat (Triticum aestivum cv. Spark) cDNA. The encoded proteins were assigned the names TaASN1, TaASN2, and TaASN3 on the basis of comparisons with other wheat and cereal asparagine synthetases. Although very similar to each other, they differed slightly in size, with molecular masses of 65.49, 65.06, and 66.24 kDa, respectively. Chromosomal positions and scaffold references were established for TaASN1, TaASN2, and TaASN3, and a fourth, more recently identified gene, TaASN4. TaASN1, TaASN2, and TaASN4 were all found to be single-copy genes, located on chromosomes 5, 3, and 4, respectively, of each genome (A, B, and D), although variety Chinese Spring lacked a TaASN2 gene in the B genome. Two copies of TaASN3 were found on chromosome 1 of each genome, and these were given the names TaASN3.1 and TaASN3.2. The TaASN1, TaASN2, and TaASN3 PCR products were heterologously expressed in Escherichia coli (TaASN4 was not investigated in this part of the study). Western blot analysis identified two monoclonal antibodies that recognized the three proteins but did not distinguish between them, despite being raised to epitopes SKKPRMIEVAAP and GGSNKPGVMNTV in the variable C-terminal regions of the proteins. The heterologously expressed TaASN1 and TaASN2 proteins were found to be active asparagine synthetases, producing asparagine and glutamate from glutamine and aspartate. The asparagine synthetase reaction was modeled using SNOOPY® software and information from
Kinoshita, Ikuo; Torige, Toshihide; Yamada, Minoru
2014-06-01
In the case of total failure of the high-pressure injection (HPI) system following a small-break loss-of-coolant accident (SBLOCA) in a pressurized water reactor (PWR), the break size is so small that the primary system does not depressurize to the accumulator (ACC) injection pressure before the core is uncovered extensively. Steam generator (SG) secondary-side depressurization is therefore necessary as an accident management measure to allow accumulator actuation and core reflood. A thermal-hydraulic analysis using RELAP5/MOD3 was performed for SBLOCA with HPI failure for Oi Units 3/4 operated by Kansai Electric Power Co., which are conventional 4-loop PWR plants. The effectiveness of the SG secondary-side depressurization procedure was investigated for the real plant design and operational characteristics. The sensitivity analyses using RELAP5/MOD3.2 showed that the accident management measure was effective for a wide range of break sizes, orientations, and positions. The critical break can be a 3-inch cold-leg bottom break.
Exploring sensitivity of a multistate occupancy model to inform management decisions
Green, A.W.; Bailey, L.L.; Nichols, J.D.
2011-01-01
Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first involves a three-state dynamic model with occupied states with and without successful reproduction (California spotted owl, Strix occidentalis occidentalis); the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog, Lithobates sylvatica, breeding and metamorphosis). Multistate sensitivity metrics behave much like standard occupancy sensitivities: when equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate
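For the standard two-state case, the equilibrium occupancy has the closed form ψ* = γ/(γ + ε), whose derivatives already exhibit the pattern described above (high sensitivity to colonisation when occupancy is low, to persistence when it is high). The parameter values below are illustrative only, and the multistate metrics of the paper generalize this sketch:

```python
# Two-state (occupied / unoccupied) illustration of equilibrium sensitivity.
# gamma = colonisation probability, eps = local-extinction probability,
# so equilibrium occupancy is psi* = gamma / (gamma + eps).

def equilibrium(gamma, eps):
    return gamma / (gamma + eps)

def sensitivities(gamma, eps):
    """Partial derivatives of psi* with respect to each transition rate."""
    d = (gamma + eps)**2
    return {"gamma": eps / d,        # d psi* / d gamma
            "eps": -gamma / d}       # d psi* / d eps

low = sensitivities(gamma=0.05, eps=0.4)    # rare species: psi* ~ 0.11
high = sensitivities(gamma=0.4, eps=0.05)   # common species: psi* ~ 0.89
```

For the rare species, |∂ψ*/∂γ| dominates (management should target colonisation); for the common species, |∂ψ*/∂ε| dominates (persistence matters most), mirroring the abstract's conclusion.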
Sensitivity of wildlife habitat models to uncertainties in GIS data
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed.
Is Convection Sensitive to Model Vertical Resolution and Why?
Xie, S.; Lin, W.; Zhang, G. J.
2017-12-01
Model sensitivity to horizontal resolutions has been studied extensively, whereas model sensitivity to vertical resolution is much less explored. In this study, we use the US Department of Energy (DOE)'s Accelerated Climate Modeling for Energy (ACME) atmosphere model to examine the sensitivity of clouds and precipitation to the increase of vertical resolution of the model. We attempt to understand what results in the behavior change (if any) of convective processes represented by the unified shallow and turbulent scheme named CLUBB (Cloud Layers Unified by Binormals) and the Zhang-McFarlane deep convection scheme in ACME. A short-term hindcast approach is used to isolate parameterization issues from the large-scale circulation. The analysis emphasizes how the change of vertical resolution could affect precipitation partitioning between convective- and grid-scale as well as the vertical profiles of convection-related quantities such as temperature, humidity, clouds, convective heating and drying, and entrainment and detrainment. The goal is to provide physical insight into potential issues with model convective processes associated with the increase of model vertical resolution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Sensitivity Analysis of Launch Vehicle Debris Risk Model
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
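The point estimate of strike probability described above can be sketched as a Monte Carlo sum over a debris catalog: sample each fragment's imparted velocity, propagate a ballistic trajectory, and count impacts near the crew module. All numbers below (catalog ranges, breakup altitude, module geometry) are illustrative placeholders, not values from the study:

```python
import math
import random

def strike_probability(n_debris=10000, module_x=200.0, module_radius=5.0, seed=1):
    """Monte Carlo point estimate of strike probability for a notional
    debris catalog. A planar, drag-free ballistic sketch; the paper's
    model uses a full catalog and trajectory method."""
    rng = random.Random(seed)
    g, h0 = 9.81, 1000.0                      # gravity [m/s^2], breakup altitude [m]
    hits = 0
    for _ in range(n_debris):
        v = rng.uniform(20.0, 120.0)          # imparted speed [m/s] (notional)
        theta = rng.uniform(0.0, math.pi)     # in-plane ejection angle
        vx, vz = v * math.cos(theta), v * math.sin(theta)
        # time to ground from quadratic: h0 + vz*t - g*t^2/2 = 0
        t = (vz + math.sqrt(vz * vz + 2.0 * g * h0)) / g
        x_land = vx * t                       # downrange landing point
        if abs(x_land - module_x) < module_radius:
            hits += 1
    return hits / n_debris

p = strike_probability()
```

A response surface, as in the paper, would then be fit over inputs such as abort time and delay time so the expensive sampling need not be repeated inside the overall risk model.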
Sensitivity analysis techniques for models of human behavior.
Energy Technology Data Exchange (ETDEWEB)
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
A Culture-Sensitive Agent in Kirman's Ant Model
Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu
The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people, and the global economy has become more complex than ever before. Mark Buchanan [12] argued that agent-based computer models can help prevent another financial crisis, and such models have been particularly influential in contributing insights. For these reasons, culture-sensitive agents in financial-market models have become important. The aim of this article is therefore to establish a culture-sensitive agent and forecast the process of change in herding behavior in the financial market. We base our study on Kirman's Ant Model [4,5] and Hofstede's national culture dimensions [11] to establish our culture-sensitive agent-based model. Kirman's Ant Model is well known and describes financial-market herding behavior arising from investors' expectations about the future. Hofstede's cultural study surveyed IBM staff in 72 different countries to understand cultural differences. This paper focuses on one of Hofstede's five dimensions of culture, individualism versus collectivism, to create a culture-sensitive agent and predict the process of change in herding behavior in the financial market. In conclusion, this study is of importance in explaining herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how people's herding beliefs across different cultures relate to their financial-market strategies.
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
Sensitivity analysis of physiochemical interaction model: which pair ...
African Journals Online (AJOL)
The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a deterministic system of first order ordinary differential equations. In this paper, we considered a sensitivity analysis of studying the qualitative ...
A sensitive venous bleeding model in haemophilia A mice
DEFF Research Database (Denmark)
Pastoft, Anne Engedahl; Lykkesfeldt, Jens; Ezban, M.
2012-01-01
for evaluation of pro-coagulant compounds for treatment of haemophilia. Interestingly, the vena saphena model proved to be sensitive towards FVIII in plasma levels that approach the levels preventing bleeding in haemophilia patients, and may, thus, in particular be valuable for testing of new long...
A model for perception-based identification of sensitive skin
Richters, R.J.H.; Uzunbajakava, N.E.; Hendriks, J.C.; Bikker, J.W.; Erp, P.E.J. van; Kerkhof, P.C.M. van de
2017-01-01
BACKGROUND: With high prevalence of sensitive skin (SS), lack of strong evidence on pathomechanisms, consensus on associated symptoms, proof of existence of 'general' SS and tools to recruit subjects, this topic attracts increasing attention of research. OBJECTIVE: To create a model for selecting
Culturally Sensitive Dementia Caregiving Models and Clinical Practice
Daire, Andrew P.; Mitcham-Smith, Michelle
2006-01-01
Family caregiving for individuals with dementia is an increasingly complex issue that affects the caregivers' and care recipients' physical, mental, and emotional health. This article presents 3 key culturally sensitive caregiver models along with clinical interventions relevant for mental health counseling professionals.
INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.
Elkantassi, Soumaya
2017-10-03
Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters, taking into account the time-correlated data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period March 1 to May 31, 2016.
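A common concrete form for such a parametrized SDE is a mean-reverting diffusion around the numerical forecast, simulated with the Euler-Maruyama scheme. The model form and parameter values below are an illustrative assumption, not the authors' exact specification:

```python
import math
import random

def simulate_wind_power(forecast, theta=2.0, sigma=0.1, dt=0.01, seed=0):
    """Euler-Maruyama path of a mean-reverting SDE
        dX_t = -theta * (X_t - p(t)) dt + sigma dW_t
    where p(t) is the numerical forecast (normalised power).
    theta and sigma are the parameters one would infer by (approximate)
    maximum likelihood; values here are illustrative."""
    rng = random.Random(seed)
    x = forecast(0.0)
    path = [x]
    n = int(1.0 / dt)
    for i in range(1, n + 1):
        t = i * dt
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += -theta * (x - forecast(t)) * dt + sigma * dw
        path.append(x)
    return path

# Usage: a toy sinusoidal forecast on [0, 1]
path = simulate_wind_power(lambda t: 0.5 + 0.2 * math.sin(2 * math.pi * t))
```

The fluctuations the SDE adds around the forecast are exactly what the likelihood-based inference step must calibrate against historical production data.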
A non-human primate model for gluten sensitivity.
Directory of Open Access Journals (Sweden)
Michael T Bethune
2008-02-01
Gluten sensitivity is widespread among humans. For example, in celiac disease patients, an inflammatory response to dietary gluten leads to enteropathy, malabsorption, circulating antibodies against gluten and transglutaminase 2, and clinical symptoms such as diarrhea. There is a growing need in fundamental and translational research for animal models that exhibit aspects of human gluten sensitivity. Using ELISA-based antibody assays, we screened a population of captive rhesus macaques with chronic diarrhea of non-infectious origin to estimate the incidence of gluten sensitivity. A selected animal with elevated anti-gliadin antibodies and a matched control were extensively studied through alternating periods of gluten-free diet and gluten challenge. Blinded clinical and histological evaluations were conducted to seek evidence for gluten sensitivity. When fed with a gluten-containing diet, gluten-sensitive macaques showed signs and symptoms of celiac disease including chronic diarrhea, malabsorptive steatorrhea, intestinal lesions and anti-gliadin antibodies. A gluten-free diet reversed these clinical, histological and serological features, while reintroduction of dietary gluten caused rapid relapse. Gluten-sensitive rhesus macaques may be an attractive resource for investigating both the pathogenesis and the treatment of celiac disease.
Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Shek, Daniel T. L.; Ma, Cecilia M. S.
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documen...
Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models
International Nuclear Information System (INIS)
Lamboni, Matieyendou; Monod, Herve; Makowski, David
2011-01-01
Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
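The Sobol'-Saltelli method named in the comparison estimates first-order indices from paired "pick-freeze" samples: two independent input matrices, plus matrices in which one column is swapped between them. A minimal stdlib sketch for independent U(0,1) inputs (the paper's crop-model setting is far richer, and this is not the authors' code):

```python
import random

def first_order_sobol(model, n_inputs, n_samples=20000, seed=0):
    """Pick-freeze (Sobol'-Saltelli) estimator of first-order sensitivity
    indices for a scalar model with independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_inputs):
        # A with column i replaced by the corresponding column of B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # estimator of the partial variance V_i
        vi = sum(yb * (yab - ya)
                 for ya, yab, yb in zip(yA, yABi, yB)) / n_samples
        indices.append(vi / var)
    return indices

# Usage: for y = x1 + 2*x2 the exact first-order indices are 0.2 and 0.8
s = first_order_sobol(lambda x: x[0] + 2.0 * x[1], 2)
```

For time-series output, as in the paper, the same machinery is applied to scores on a functional basis (e.g. principal components) rather than to a single scalar.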
Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan
Li, Y.; Tang, C.; Xu, L.; Ye, S.
2017-07-01
Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step towards solving the problem. Water scarcity in a region is gauged by the WSI (water scarcity index), which incorporates water supply and water demand. To obtain the WSI, a Neural Network Model and an SDM (System Dynamic Model) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of WSI within this region. To predict future WSI, a logistic model, Grey Prediction, and statistics are applied in predicting variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, after the intervention, Sudan's water situation will improve.
About the use of rank transformation in sensitivity analysis of model output
International Nuclear Information System (INIS)
Saltelli, Andrea; Sobol', Ilya M
1995-01-01
Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first order terms (the 'main effects'), at the expense of the 'interactions' and 'higher order interactions'. As a result the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis.
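The core effect of the rank transformation, linearising any monotonic input-output relation, can be seen by comparing the Pearson correlation computed on raw values with the same statistic computed on ranks. A small illustration (the exponential test function is our choice, not the authors'):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(xs):
    """Ranks 1..n (no ties in this example)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

xs = [0.1 * i for i in range(1, 21)]
ys = [math.exp(3.0 * x) for x in xs]        # monotonic but strongly nonlinear

raw = pearson(xs, ys)                       # well below 1: linearity penalised
ranked = pearson(ranks(xs), ranks(ys))      # equals 1: monotonicity suffices
```

This is also why, as the note warns, rank-based results describe a different (monotonised) model: the interaction structure of the original model is not preserved.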
Chowdhary, Jacek; Cairns, Brian; Waquet, Fabien; Knobelspiesse, Kirk; Ottaviani, Matteo; Redemann, Jens; Travis, Larry; Mishchenko, Michael
2012-01-01
For remote sensing of aerosol over the ocean, there is a contribution from light scattered underwater. The brightness and spectrum of this light depends on the biomass content of the ocean, such that variations in the color of the ocean can be observed even from space. Rayleigh scattering by pure sea water, and Rayleigh-Gans type scattering by plankton, causes this light to be polarized with a distinctive angular distribution. To study the contribution of this underwater light polarization to multiangle, multispectral observations of polarized reflectance over ocean, we previously developed a hydrosol model for use in underwater light scattering computations that produces realistic variations of the ocean color and the underwater light polarization signature of pure sea water. In this work we review this hydrosol model, include a correction for the spectrum of the particulate scattering coefficient and backscattering efficiency, and discuss its sensitivity to variations in colored dissolved organic matter (CDOM) and in the scattering function of marine particulates. We then apply this model to measurements of total and polarized reflectance that were acquired over open ocean during the MILAGRO field campaign by the airborne Research Scanning Polarimeter (RSP). Analyses show that our hydrosol model faithfully reproduces the water-leaving contributions to RSP reflectance, and that the sensitivity of these contributions to Chlorophyll a concentration [Chl] in the ocean varies with the azimuth, height, and wavelength of observations. We also show that the impact of variations in CDOM on the polarized reflectance observed by the RSP at low altitude is comparable to or much less than the standard error of this reflectance whereas their effects in total reflectance may be substantial (i.e. up to >30%). Finally, we extend our study of polarized reflectance variations with [Chl] and CDOM to include results for simulated spaceborne observations.
Energy Technology Data Exchange (ETDEWEB)
Bjerke, M.A.
1983-02-01
A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. The package has been used in uncertainty analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can be automated easily to further simplify its use and that the conversion of the entire system to a base code other than RELAP is possible.
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, P.M.; Madsen, Kristoffer H; Lund, T.E.
There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...
Numerical analyses of interaction of steel-fibre reinforced concrete slab model with subsoil
Directory of Open Access Journals (Sweden)
Jana Labudkova
2017-01-01
Numerical analyses of a contact task were made with FEM. The test sample for the task was a steel-fibre reinforced concrete foundation slab model loaded during an experimental loading test. An inhomogeneous half-space was applied in the FEM analyses. Results of the FEM analyses were also confronted with the values measured during the experiment.
Energy Technology Data Exchange (ETDEWEB)
Al-Hamarneh, Ibrahim, E-mail: hamarnehibrahim@yahoo.com [Department of Physics, Faculty of Science, Al-Balqa Applied University, Salt 19117 (Jordan); Pedrow, Patrick [School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA 99164 (United States); Eskhan, Asma; Abu-Lail, Nehal [Gene and Linda Voiland School of Chemical Engineering and Bioengineering, Washington State University, Pullman, WA 99164 (United States)
2012-10-15
Highlights: • The surface hydrophilicity of surgical-grade 316L stainless steel was enhanced by Ar-O2 corona streamer plasma treatment. • Hydrophilicity, surface morphology, roughness, and chemical composition before and after plasma treatment were evaluated. • Contact angle measurements and surface-sensitive analysis techniques, including XPS and AFM, were carried out. • Optimum plasma treatment conditions for the SS 316L surface were determined. - Abstract: Surgical-grade 316L stainless steel (SS 316L) had its surface hydrophilic property enhanced by processing in a corona streamer plasma reactor using O2 gas mixed with Ar at atmospheric pressure. Reactor excitation was 60 Hz ac high voltage (0-10 kV RMS) applied to a multi-needle-to-grounded-screen electrode configuration. The treated surface was characterized with a contact angle tester. Surface free energy (SFE) for the treated stainless steel increased measurably compared to the untreated surface. The Ar-O2 plasma was more effective in enhancing the SFE than Ar-only plasma. Optimum conditions for the plasma treatment system used in this study were obtained. X-ray photoelectron spectroscopy (XPS) characterization of the chemical composition of the treated surfaces confirms the existence of new oxygen-containing functional groups contributing to the change in the hydrophilic nature of the surface. These new functional groups were generated by surface reactions caused by reactive oxidation of substrate species. Atomic force microscopy (AFM) images were generated to investigate morphological and roughness changes on the plasma-treated surfaces. The aging effect in air after treatment was also studied.
Kirrane, Maria J; de Guzman, Lilia I; Holloway, Beth; Frake, Amanda M; Rinderer, Thomas E; Whelan, Pádraig M
2014-01-01
Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an "actual brood removal assay" that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r2 = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock.
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice Model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters, through observational studies or by improving existing parameterizations in the sea ice model.
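The emulator step can be illustrated with the simplest possible surrogate: an ordinary-least-squares fit to a handful of expensive model runs, after which predictions are essentially free. This is only a toy stand-in for the cross-validated emulator and generalized additive models used in the study:

```python
def fit_linear_surrogate(X, y):
    """OLS surrogate y ~ b0 + sum_i b_i * x_i, solved via Gaussian
    elimination on the normal equations (A^T A) b = A^T y."""
    n, p = len(X), len(X[0])
    A = [[1.0] + list(row) for row in X]          # design matrix with intercept
    m = p + 1
    ATA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
           for i in range(m)]
    ATy = [sum(A[k][i] * y[k] for k in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ATA[r][col]))
        ATA[col], ATA[piv] = ATA[piv], ATA[col]
        ATy[col], ATy[piv] = ATy[piv], ATy[col]
        for r in range(col + 1, m):
            f = ATA[r][col] / ATA[col][col]
            for c in range(col, m):
                ATA[r][c] -= f * ATA[col][c]
            ATy[r] -= f * ATy[col]
    b = [0.0] * m
    for r in range(m - 1, -1, -1):
        b[r] = (ATy[r] - sum(ATA[r][c] * b[c] for c in range(r + 1, m))) / ATA[r][r]
    return b

def predict(b, x):
    """Cheap surrogate evaluation in place of a full model run."""
    return b[0] + sum(bi * xi for bi, xi in zip(b[1:], x))
```

Once fitted (and, in practice, cross-validated), the surrogate can stand in for the sea ice model inside a variance-based sensitivity sampler that would otherwise require far more than 400 full model runs.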
Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models
Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko
2015-01-01
Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600
Sensitivity experiments to mountain representations in spectral models
Directory of Open Access Journals (Sweden)
U. Schlese
2000-06-01
This paper describes a set of sensitivity experiments with several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography directly from the US Navy data and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling the model performance even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (asymmetric part of the geopotential at 500 mb where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields where the Gibbs removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between amplitude of the stationary waves and smoothness of the surface fields.
Organic polyaromatic hydrocarbons as sensitizing model dyes for semiconductor nanoparticles.
Zhang, Yongyi; Galoppini, Elena
2010-04-26
The study of interfacial charge-transfer processes (sensitization) of a dye bound to large-bandgap nanostructured metal oxide semiconductors, including TiO(2), ZnO, and SnO(2), is continuing to attract interest in various areas of renewable energy, especially for the development of dye-sensitized solar cells (DSSCs). The scope of this Review is to describe how selected model sensitizers prepared from organic polyaromatic hydrocarbons have been used over the past 15 years to elucidate, through a variety of techniques, fundamental aspects of heterogeneous charge transfer at the surface of a semiconductor. This Review does not focus on the most recent or efficient dyes, but rather on how model dyes prepared from aromatic hydrocarbons have been used, over time, in key fundamental studies of heterogeneous charge transfer. In particular, we describe model chromophores prepared from anthracene, pyrene, perylene, and azulene. As the level of complexity of the model dye-bridge-anchor group compounds has increased, the understanding of some aspects of very complex charge transfer events has improved. The knowledge acquired from the study of the described model dyes is of importance not only for DSSC development but also to other fields of science for which electronic processes at the molecule/semiconductor interface are relevant.
Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.
van Erp, Sara; Mulder, Joris; Oberski, Daniel L
2017-11-27
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors-with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
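The influence of default priors that this abstract warns about can be sketched with a minimal conjugate normal example (an illustrative stand-in, not the BSEM setting; the sample size and prior variances below are assumptions chosen to make the effect visible):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=20)  # small sample, true mean 1.0
sigma2 = 1.0                                  # known sampling variance

def posterior_mean_var(prior_mean, prior_var, y, sigma2):
    """Conjugate normal-normal posterior for the mean."""
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + y.sum() / sigma2)
    return post_mean, post_var

# Three "default" priors, echoing the article's comparison:
vague = posterior_mean_var(0.0, 1e6, y, sigma2)        # vague proper prior
informative = posterior_mean_var(0.0, 0.1, y, sigma2)  # tight prior at 0
eb = posterior_mean_var(y.mean(), y.var(ddof=1), y, sigma2)  # empirical Bayes

# With n = 20 the tight prior shrinks the estimate noticeably toward 0,
# while the vague and empirical-Bayes posteriors track the sample mean.
shrinkage = abs(informative[0]) < abs(vague[0])
```

With larger samples all three posteriors converge, which is why the authors flag small samples as the case that requires a prior sensitivity analysis.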
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more
Stochastic sensitivity of a bistable energy model for visual perception
Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev
2017-01-01
Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensity and system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
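The noise-induced switching between coexisting percepts described above can be reproduced with a toy double-well sketch (the drift, noise levels, and crossing thresholds below are illustrative assumptions, not the authors' model):

```python
import numpy as np

def simulate_transitions(sigma, T=2000.0, dt=0.01, seed=1):
    """Euler-Maruyama simulation of a bistable 'energy' model
    dx = (x - x**3) dt + sigma dW, with stable percepts at x = -1 and x = +1.
    Returns the number of noise-induced switches between the two wells."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    x, well, switches = 1.0, 1, 0
    for i in range(n):
        x += (x - x**3) * dt + noise[i]
        if well == 1 and x < -0.5:       # crossed into the left well
            well, switches = -1, switches + 1
        elif well == -1 and x > 0.5:     # crossed back
            well, switches = 1, switches + 1
    return switches

low, high = simulate_transitions(0.3), simulate_transitions(0.7)
# Stronger noise induces more transitions, squeezing the effective hysteresis.
```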
Recursive Model Identification for the Evaluation of Baroreflex Sensitivity.
Le Rolle, Virginie; Beuchée, Alain; Praud, Jean-Paul; Samson, Nathalie; Pladys, Patrick; Hernández, Alfredo I
2016-12-01
A method for the recursive identification of physiological models of the cardiovascular baroreflex is proposed and applied to the time-varying analysis of vagal and sympathetic activities. The proposed method was evaluated with data from five newborn lambs, which were acquired during injection of vasodilator and vasoconstrictors and the results show a close match between experimental and simulated signals. The model-based estimation of vagal and sympathetic contributions were consistent with physiological knowledge and the obtained estimators of vagal and sympathetic activities were compared to traditional markers associated with baroreflex sensitivity. High correlations were observed between traditional markers and model-based indices.
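A common workhorse for this kind of time-varying identification is recursive least squares with a forgetting factor. The sketch below is a generic RLS, not the authors' baroreflex model; the step change in gain is an invented toy signal standing in for a vasoactive drug injection:

```python
import numpy as np

def rls(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam.
    phi: (T, p) regressor matrix, y: (T,) output. Returns (T, p) parameter path."""
    T, p = phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)        # large initial covariance = uninformative start
    path = np.empty((T, p))
    for t in range(T):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)          # gain vector
        theta = theta + k * (y[t] - x @ theta)  # innovation update
        P = (P - np.outer(k, x @ P)) / lam
        path[t] = theta
    return path

# Toy system: a gain that steps from 2.0 to -1.0 halfway through the record.
rng = np.random.default_rng(2)
u = rng.standard_normal(400)
g = np.where(np.arange(400) < 200, 2.0, -1.0)
y = g * u + 0.05 * rng.standard_normal(400)
path = rls(u[:, None], y)
# The recursive estimate tracks both regimes within a few dozen samples.
```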
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
Directory of Open Access Journals (Sweden)
Christopher W. Walmsley
2013-11-01
Full Text Available Finite element analysis (FEA is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny with regard to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous, scaling (standardising volume, surface area, or length, tooth position (front, mid, or back tooth engagement, and linear load case (type of loading for each feeding type.Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different
Analyses of tumor-suppressor genes in germline mouse models of cancer.
Wang, Jingqiang; Abate-Shen, Cory
2014-08-01
Tumor-suppressor genes are critical regulators of growth and functioning of cells, whose loss of function contributes to tumorigenesis. Accordingly, analyses of the consequences of their loss of function in genetically engineered mouse models have provided important insights into mechanisms of human cancer, as well as resources for preclinical analyses and biomarker discovery. Nowadays, most investigations of genetically engineered mouse models of tumor-suppressor function use conditional or inducible alleles, which enable analyses in specific cancer (tissue) types and overcome the consequences of embryonic lethality of germline loss of function of essential tumor-suppressor genes. However, historically, analyses of genetically engineered mouse models based on germline loss of function of tumor-suppressor genes were very important, as these early studies established the principle that loss of function could be studied in mouse cancer models and also enabled analyses of these essential genes in an organismal context. Although the cancer phenotypes of these early germline models did not always recapitulate the expected phenotypes in human cancer, these models provided the essential foundation for the more sophisticated conditional and inducible models that are currently in use. Here, we describe these "first-generation" germline loss-of-function models, focusing on the important lessons learned from their analyses, which helped in the design and analyses of "next-generation" genetically engineered mouse models. © 2014 Cold Spring Harbor Laboratory Press.
Luo, Chuan; Li, Zhaofu; Wu, Min; Jiang, Kaixia; Chen, Xiaomin; Li, Hengpeng
2017-09-01
sensitivity analyses on other models.
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
Efficient transfer of sensitivity information in multi-component models
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Rabiti, Cristian
2011-01-01
In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, whereas the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component as opposed to the brute force-type methods which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black-box as opposed to amalgamated-type methods which requires explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
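The chain-rule structure behind this adjoint transfer can be illustrated with two linear black-box components (the matrices below are random stand-ins; real components would expose only forward and adjoint evaluations, never their Jacobians):

```python
import numpy as np

# Two black-box components: x -> y = A x, then y -> responses r = B y.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))   # component 1: 4 inputs -> 6 outputs
B = rng.standard_normal((2, 6))   # component 2: 6 inputs -> 2 responses

def component1_adjoint(wy):
    """Adjoint (vector-Jacobian product) of component 1."""
    return A.T @ wy

def component2_adjoint(wr):
    """Adjoint of component 2."""
    return B.T @ wr

# Sensitivities of each final response to the first component's inputs,
# propagated backwards through the chain: one adjoint evaluation per
# response per component, rather than one forward run per input.
sens = np.array([component1_adjoint(component2_adjoint(e))
                 for e in np.eye(2)])

# Brute-force Jacobian of the composed model, for comparison.
J = B @ A
```

With 2 responses and 4 inputs the adjoint route needs 2 sweeps instead of 4 forward perturbations; the gap widens sharply for realistic input dimensions, which is the point the paper makes against brute-force methods.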
Sensitivity analysis of a forest gap model concerning current and future climate variability
Energy Technology Data Exchange (ETDEWEB)
Lasch, P.; Suckow, F.; Buerger, G.; Lindner, M.
1998-07-01
The ability of a forest gap model to simulate the effects of climate variability and extreme events depends on the temporal resolution of the weather data that are used and the internal processing of these data for growth, regeneration and mortality. The climatological driving forces of most current gap models are based on monthly means of weather data and their standard deviations, and long-term monthly means are used for calculating yearly aggregated response functions for ecological processes. In this study, the results of sensitivity analyses using the forest gap model FORSKA-P and involving climate data of different resolutions, from long-term monthly means to daily time series, including extreme events, are presented for the current climate and for a climate change scenario. The model was applied at two sites with differing soil conditions in the federal state of Brandenburg, Germany. The sensitivity of the model concerning climate variations and different climate input resolutions is analysed and evaluated. The climate variability used for the model investigations affected the behaviour of the model substantially. (orig.)
Local sensitivity analysis of a distributed parameters water quality model
International Nuclear Information System (INIS)
Pastres, R.; Franco, D.; Pecenik, G.; Solidoro, C.; Dejak, C.
1997-01-01
A local sensitivity analysis is presented of a 1D water-quality reaction-diffusion model. The model describes the seasonal evolution of one of the deepest channels of the lagoon of Venice, that is affected by nutrient loads from the industrial area and heat emission from a power plant. Its state variables are: water temperature, concentrations of reduced and oxidized nitrogen, Reactive Phosphorous (RP), phytoplankton, and zooplankton densities, Dissolved Oxygen (DO) and Biological Oxygen Demand (BOD). Attention has been focused on the identifiability and the ranking of the parameters related to primary production in different mixing conditions
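Local sensitivity coefficients of the kind used for this parameter ranking are often computed as normalized finite-difference derivatives of a model output with respect to each parameter. A minimal sketch on a toy logistic primary-production model (an illustrative stand-in for the lagoon model, with invented parameter values):

```python
import numpy as np

def phytoplankton(params, t_end=30.0, dt=0.01):
    """Toy primary-production model: logistic phytoplankton growth
    dP/dt = mu*P*(1 - P/K). Returns P at t_end (explicit Euler)."""
    mu, K = params
    P = 0.1
    for _ in range(int(t_end / dt)):
        P += dt * mu * P * (1.0 - P / K)
    return P

def local_sensitivities(f, params, rel_step=1e-4):
    """Normalized local sensitivity S_j = (p_j / f) * df/dp_j,
    estimated with central finite differences."""
    p = np.asarray(params, dtype=float)
    f0 = f(p)
    S = np.empty(p.size)
    for j in range(p.size):
        h = rel_step * p[j]
        up, dn = p.copy(), p.copy()
        up[j] += h
        dn[j] -= h
        S[j] = (f(up) - f(dn)) / (2 * h) * p[j] / f0
    return S

S = local_sensitivities(phytoplankton, [0.5, 2.0])   # [mu, K]
# Near carrying capacity the output is governed by K, not by the growth rate,
# illustrating how parameter ranking depends on the state of the system.
```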
International Nuclear Information System (INIS)
Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe
2013-01-01
Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.
Melis, Alessandro; Clayton, Richard H; Marzo, Alberto
2017-12-01
One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by the 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than O(d×103) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with capacity to estimate the impact of uncertain parameters on model outputs, will enable development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
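For reference, the Monte Carlo baseline that the emulator accelerates can be sketched with a Saltelli-style estimator of first-order Sobol indices. The Ishigami benchmark below stands in for the 1D vascular model (both the test function and the sample size are assumptions for illustration; a trained Gaussian process would replace the expensive model inside the loop):

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Standard sensitivity-analysis benchmark with known Sobol indices."""
    x1, x2, x3 = X.T
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

def sobol_first_order(f, d, n=200_000, seed=4):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    on [-pi, pi]^d, using the radial A/B/AB_i sampling scheme."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only the i-th column
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

S = sobol_first_order(ishigami, 3)
# Analytical values for the Ishigami function: S1 ~ 0.314, S2 ~ 0.442, S3 = 0.
```

The n(d + 2) model runs needed here are exactly the cost that an emulator trained on O(d) mechanistic simulations avoids.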
Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales
International Nuclear Information System (INIS)
Krstic, Predrag S.
2014-01-01
Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, temporal and spatial scales go from fs to years and from nm’s to m’s and more, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an “integrated AMO science” (IAMO). The principal goal of the IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, photons, in a many-body environment, including complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses a need for immediate sensitivity analysis feedback from the plasma modeling and material design communities. Thus, the data provision to the plasma modeling community is a “two-way road” as long as the accuracy of the data is considered, requiring close interactions of the AMO and plasma modeling communities.
Language Sensitivity, the RESPECT Model, and Continuing Education.
Aycock, Dawn M; Sims, Traci T; Florman, Terri; Casseus, Karis T; Gordon, Paula M; Spratling, Regena G
2017-11-01
Some words and phrases used by health care providers may be perceived as insensitive by patients, which could negatively affect patient outcomes and satisfaction. However, a distinct concept that can be used to describe and synthesize these words and phrases does not exist. The purpose of this article is to propose the concept of language sensitivity, defined as the use of respectful, supportive, and caring words with consideration for a patient's situation and diagnosis. Examples of how language sensitivity may be lacking in nurse-patient interactions are described, and solutions are provided using the RESPECT (Rapport, Environment/Equipment, Safety, Privacy, Encouragement, Caring/Compassion, and Tact) model. RESPECT can be used as a framework to inform and remind nurses about the importance of sensitivity when communicating with patients. Various approaches can be used by nurse educators to promote language sensitivity in health care. Case studies and a lesson plan are included. J Contin Educ Nurs. 2017;48(11):517-524. Copyright 2017, SLACK Incorporated.
Pressure Sensitive Paint Applied to Flexible Models Project
Schairer, Edward T.; Kushner, Laura Kathryn
2014-01-01
One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.
Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M
2015-07-01
Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
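The random-effects summaries listed above (random-effects mean versus predictive distribution) differ in exactly the variance a CEA model inherits. A minimal DerSimonian-Laird sketch with invented study data (not the sepsis case study):

```python
import numpy as np

def random_effects_meta(y, v):
    """DerSimonian-Laird random-effects meta-analysis.
    y: study effect estimates, v: their within-study variances.
    Returns (pooled mean, its variance, tau2, predictive variance)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)        # fixed-effect mean
    Q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    var_mu = 1.0 / np.sum(w_re)
    # Predictive variance for the effect in a *new* setting adds the
    # between-study heterogeneity back in -- often the summary a CEA needs.
    return mu, var_mu, tau2, var_mu + tau2

mu, var_mu, tau2, var_pred = random_effects_meta(
    [0.10, -0.20, 0.35, 0.05, 0.25], [0.01, 0.02, 0.01, 0.015, 0.02])
```

Feeding var_mu versus var_pred into the decision model can flip value-of-information conclusions, which is the sensitivity the authors highlight.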
International Nuclear Information System (INIS)
Crecy, Agnes de; Bazin, Pascal
2013-01-01
Uncertainty and sensitivity analyses associated with best-estimate calculations have become paramount for licensing processes and are known as BEPU (Best-Estimate Plus Uncertainties) methods. A recent activity such as the BEMUSE benchmark has shown that the present methods are mature enough for system thermal-hydraulics codes, even if issues such as the quantification of the uncertainties of the input parameters, and especially of the physical models, must still be improved. But CFD codes are more and more used for fine 3-D modelling such as, for example, that necessary in dilution or stratification problems. The application of BEPU methods to CFD codes is an issue that must now be addressed, and that is precisely the goal of this paper. It consists of two main parts. In Chapter 2, the specificities of CFD codes for BEPU methods are listed, with a focus on the possible difficulties. In Chapter 3, the studies performed at CEA are described. It is important to note that CEA research in this field is only beginning and must not be viewed as a reference approach. (authors)
International Nuclear Information System (INIS)
Drouet, J.-L.; Capian, N.; Fiorelli, J.-L.; Blanfort, V.; Capitaine, M.; Duretz, S.; Gabrielle, B.; Martin, R.; Lardy, R.; Cellier, P.; Soussana, J.-F.
2011-01-01
Modelling complex systems such as farms often requires quantification of a large number of input factors. Sensitivity analyses are useful to reduce the number of input factors that are required to be measured or estimated accurately. Three methods of sensitivity analysis (the Morris method, the rank regression and correlation method and the Extended Fourier Amplitude Sensitivity Test method) were compared in the case of the CERES-EGC model applied to crops of a dairy farm. The qualitative Morris method provided a screening of the input factors. The two other quantitative methods were used to investigate more thoroughly the effects of input factors on output variables. Despite differences in terms of concepts and assumptions, the three methods provided similar results. Among the 44 factors under study, N2O emissions were mainly sensitive to the fraction of N2O emitted during denitrification, the maximum rate of nitrification, the soil bulk density and the cropland area. - Highlights: → Three methods of sensitivity analysis were compared in the case of a soil-crop model. → The qualitative Morris method provided a screening of the input factors. → The quantitative EFAST method provided a thorough analysis of the input factors. → The three methods provided similar results regarding sensitivity of N2O emissions. → N2O emissions were mainly sensitive to a few, especially four, input factors. - Three methods of sensitivity analysis were compared to analyse their efficiency in assessing the sensitivity of a complex soil-crop model to its input factors.
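The qualitative Morris screening mentioned above ranks factors by the mean absolute elementary effect. A minimal sketch on an invented four-factor toy model (not CERES-EGC; trajectory count and grid levels are illustrative choices):

```python
import numpy as np

def morris_screening(f, d, r=50, levels=8, seed=5):
    """Morris elementary-effects screening on [0, 1]^d.
    Returns mu_star, the mean |elementary effect| per factor, the usual
    screening measure; costs r trajectories of d+1 model runs each."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((r, d))
    for t in range(r):
        x = rng.integers(0, levels // 2, d) / (levels - 1)  # base point
        fx = f(x)
        for i in rng.permutation(d):          # step each factor once
            x2 = x.copy()
            x2[i] += delta
            fx2 = f(x2)
            ee[t, i] = (fx2 - fx) / delta
            x, fx = x2, fx2
    return np.abs(ee).mean(axis=0)

# Toy model with one dominant, one moderate, and two inert factors.
def model(x):
    return 10.0 * x[0] + 2.0 * x[1] ** 2

mu_star = morris_screening(model, 4)
# Screening ranks factor 1 first and flags factors 3 and 4 as negligible,
# which is how such an analysis trims the list of inputs to measure carefully.
```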
Sensitivity in forward modeled hyperspectral reflectance due to phytoplankton groups
Manzo, Ciro; Bassani, Cristiana; Pinardi, Monica; Giardino, Claudia; Bresciani, Mariano
2016-04-01
Phytoplankton is an integral part of the ecosystem, affecting trophic dynamics, nutrient cycling, habitat condition, and fisheries resources. The types of phytoplankton and their concentrations are used to describe the status of water and the processes within it. This study investigates bio-optical modeling of phytoplankton functional types (PFT) in terms of pigment composition, demonstrating the capability of remote sensing to recognize freshwater phytoplankton. In particular, a sensitivity analysis of simulated hyperspectral water reflectance (with the band settings of HICO, APEX, EnMAP, PRISMA and Sentinel-3) of the productive eutrophic waters of the Mantua lakes (Italy) environment is presented. The bio-optical model adopted for simulating the hyperspectral water reflectance takes into account the reflectance dependency on geometric conditions of the light field, on inherent optical properties (backscattering and absorption coefficients) and on concentrations of water quality parameters (WQPs). The model works in the 400-750 nm wavelength range, while the model parametrization is based on a comprehensive dataset of WQP concentrations and specific inherent optical properties of the study area, collected in field surveys carried out from May to September of 2011 and 2014. The following phytoplankton groups, with their specific absorption coefficients, a*Φi(λ), were used during the simulation: Chlorophyta, Cyanobacteria with phycocyanin, Cyanobacteria and Cryptophytes with phycoerythrin, Diatoms with carotenoids and mixed phytoplankton. The phytoplankton absorption coefficient aΦ(λ) is modelled by multiplying the weighted sum of the PFTs, Σ_i p_i a*Φi(λ), with the chlorophyll-a concentration (Chl-a). To highlight the variability of water reflectance due to variation of phytoplankton pigments, the sensitivity analysis was performed by keeping the WQPs constant (i.e., Chl-a = 80 mg/l, total suspended matter = 12.58 g/l and yellow substances = 0.27 m-1). The sensitivity analysis was
Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.
2011-01-01
Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
International Nuclear Information System (INIS)
Evans, Mary Anne; Scavia, Donald
2011-01-01
Increasing use of ecological models for management and policy requires robust evaluation of model precision, accuracy, and sensitivity to ecosystem change. We conducted such an evaluation of hypoxia models for the northern Gulf of Mexico and Chesapeake Bay using hindcasts of historical data, comparing several approaches to model calibration. For both systems we find that model sensitivity and precision can be optimized and model accuracy maintained within reasonable bounds by calibrating the model to relatively short, recent 3 year datasets. Model accuracy was higher for Chesapeake Bay than for the Gulf of Mexico, potentially indicating the greater importance of unmodeled processes in the latter system. Retrospective analyses demonstrate both directional and variable changes in sensitivity of hypoxia to nutrient loads.
Energy Technology Data Exchange (ETDEWEB)
Evans, Mary Anne; Scavia, Donald, E-mail: mevans@umich.edu, E-mail: scavia@umich.edu [School of Natural Resources and Environment, University of Michigan, Ann Arbor, MI 48109 (United States)
2011-01-15
Increasing use of ecological models for management and policy requires robust evaluation of model precision, accuracy, and sensitivity to ecosystem change. We conducted such an evaluation of hypoxia models for the northern Gulf of Mexico and Chesapeake Bay using hindcasts of historical data, comparing several approaches to model calibration. For both systems we find that model sensitivity and precision can be optimized and model accuracy maintained within reasonable bounds by calibrating the model to relatively short, recent 3 year datasets. Model accuracy was higher for Chesapeake Bay than for the Gulf of Mexico, potentially indicating the greater importance of unmodeled processes in the latter system. Retrospective analyses demonstrate both directional and variable changes in sensitivity of hypoxia to nutrient loads.
Temperature sensitivity of a numerical pollen forecast model
Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone
2016-04-01
Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warning before an increase of the atmospheric pollen concentration means substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means to support the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which coincides with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the onset of flowering and during flowering. Phenological models are sensitive to a bias in the temperature. A mean bias of -1°C in the input temperature can delay the predicted entry date of a phenological phase by about a week. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air / surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
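The bias sensitivity described here can be illustrated with a minimal thermal-time (growing-degree-day) sketch: flowering is assumed to begin once degree-days accumulated above a base temperature reach a threshold, so a cold bias in the input temperature delays the predicted entry date by several days. The base temperature, threshold, and synthetic warming series are invented for illustration, not parameters of any operational pollen model:

```python
# Minimal degree-day phenology sketch: a -1 degC input bias delays the
# modelled flowering entry date. All parameter values are illustrative.

def flowering_entry_day(daily_mean_temps, base_temp=5.0, threshold=150.0):
    """First day index on which accumulated degree-days above base_temp
    reach the threshold, or None if the threshold is never reached."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps):
        accumulated += max(temp - base_temp, 0.0)
        if accumulated >= threshold:
            return day
    return None

# Synthetic spring warming: 10 degC, rising by 0.1 degC per day.
temps = [10.0 + 0.1 * d for d in range(120)]
unbiased = flowering_entry_day(temps)
biased = flowering_entry_day([t - 1.0 for t in temps])  # -1 degC input bias
print(unbiased, biased, biased - unbiased)
```

With these toy numbers the cold-biased series reaches the threshold four days later; the roughly week-long shift quoted in the abstract corresponds to the same mechanism with realistic temperatures and species-specific thresholds.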
A Workflow for Global Sensitivity Analysis of PBPK Models
Directory of Open Access Journals (Sweden)
Kevin eMcNally
2011-06-01
Full Text Available Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. PBPK models are ideal frameworks into which disparate in vitro and in vivo data can be integrated, and can be used to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and of interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters, and any non-linear processes. In this report we define a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
Relative sensitivity analysis of the predictive properties of sloppy models.
Myasnikova, Ekaterina; Spirov, Alexander
2018-01-25
Among the parameters characterizing complex biological systems there are commonly some that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. Sloppiness can be mathematically expressed through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system's behavior under altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior for any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet
Directory of Open Access Journals (Sweden)
Mariona Camps-Bossacoma
2017-01-01
Full Text Available Increasing evidence is emerging suggesting a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or 10% cocoa diet. Faecal microbiota was analysed through metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. Decreased bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.
Gut Microbiota in a Rat Oral Sensitization Model: Effect of a Cocoa-Enriched Diet.
Camps-Bossacoma, Mariona; Pérez-Cano, Francisco J; Franch, Àngels; Castell, Margarida
2017-01-01
Increasing evidence is emerging suggesting a relation between dietary compounds, microbiota, and the susceptibility to allergic diseases, particularly food allergy. Cocoa, a source of antioxidant polyphenols, has shown effects on gut microbiota and the ability to promote tolerance in an oral sensitization model. Taking these facts into consideration, the aim of the present study was to establish the influence of an oral sensitization model, both alone and together with a cocoa-enriched diet, on gut microbiota. Lewis rats were orally sensitized and fed with either a standard or 10% cocoa diet. Faecal microbiota was analysed through metagenomics study. Intestinal IgA concentration was also determined. Oral sensitization produced few changes in intestinal microbiota, but in those rats fed a cocoa diet significant modifications appeared. Decreased bacteria from the Firmicutes and Proteobacteria phyla and a higher percentage of bacteria belonging to the Tenericutes and Cyanobacteria phyla were observed. In conclusion, a cocoa diet is able to modify the microbiota bacterial pattern in orally sensitized animals. As cocoa inhibits the synthesis of specific antibodies and also intestinal IgA, those changes in microbiota pattern, particularly those of the Proteobacteria phylum, might be partially responsible for the tolerogenic effect of cocoa.
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
International Nuclear Information System (INIS)
2014-12-01
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the
Shek, Daniel T L; Ma, Cecilia M S
2011-01-05
Although different methods are available for the analysis of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
Sensitivity and uncertainty analysis of a sediment transport model: a global approach
Chang, C.; Yang, J.; Tung, Y.
1993-12-01
Computerized sediment transport models are frequently employed to quantitatively simulate the movement of sediment materials in rivers. In spite of the deterministic nature of these models, their outputs are subject to uncertainty due to the inherent variability of many input parameters in time and space, along with the lack of complete understanding of the processes involved. The commonly used first-order method for sensitivity and uncertainty analysis approximates a model by linear expansion at a selected point. Conclusions from the first-order method can be of limited use if the model responses vary drastically at different points in parameter space. To obtain the global sensitivity and uncertainty features of a sediment transport model over a larger input parameter space, the Latin hypercube sampling technique along with regression procedures was employed. To illustrate the methodologies, the computer model HEC2-SR was selected for this study. Through an example application, the sensitivity and uncertainty results for water surface elevation, bed elevation and sediment discharge with respect to the parameters are discussed.
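The sampling-plus-regression approach described in this record can be sketched in a few lines: draw a Latin hypercube sample of the inputs, run the model at each point, and use standardized regression coefficients (SRCs) as global sensitivity measures. The two-parameter linear "model" below is a stand-in for a sediment transport code such as HEC2-SR, and the simplification of the SRC to a correlation assumes near-orthogonal sample columns:

```python
# Latin hypercube sampling + regression-based global sensitivity sketch.
# The toy model and parameter count are illustrative only.
import random

def latin_hypercube(n_samples, n_params, rng):
    """One stratified draw per (sample, parameter) stratum, in [0, 1)."""
    strata = [list(range(n_samples)) for _ in range(n_params)]
    for col in strata:
        rng.shuffle(col)
    return [[(strata[j][i] + rng.random()) / n_samples
             for j in range(n_params)] for i in range(n_samples)]

def src_sensitivities(x, y):
    """Standardized regression coefficients for a near-linear model.
    With (near-)orthogonal LHS columns, SRC_j ~ correlation(x_j, y)."""
    n, p = len(x), len(x[0])
    def standardize(col):
        m = sum(col) / n
        s = (sum((v - m) ** 2 for v in col) / n) ** 0.5
        return [(v - m) / s for v in col]
    xs = [standardize([row[j] for row in x]) for j in range(p)]
    ys = standardize(y)
    return [sum(xs[j][i] * ys[i] for i in range(n)) / n for j in range(p)]

rng = random.Random(42)
x = latin_hypercube(500, 2, rng)
y = [3.0 * a + 0.3 * b for a, b in x]   # toy model: parameter 0 dominates
s = src_sensitivities(x, y)
print(s)
```

The SRC for the dominant parameter comes out close to 1 and the other close to 0.1, mirroring how such coefficients rank input influence on water surface elevation or sediment discharge in the full analysis.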
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
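The core DELSA idea above — derivative-based local sensitivity evaluated at many points across the parameter space, then examined as a distribution rather than a single global index — can be sketched with a toy two-parameter model (invented here, not one of the paper's hydrologic models):

```python
# DELSA-style sketch: distribution of a local, derivative-based first-order
# sensitivity measure across the parameter space. Model and sample sizes
# are illustrative only.
import random

def model(p1, p2):
    # Toy nonlinear model: p2 matters only where p1 is large.
    return p1 ** 2 + p1 * p2

def local_first_order(p, h=1e-6):
    """Finite-difference local indices: normalized squared gradient
    components, analogous in spirit to DELSA's first-order measure."""
    grads = []
    for j in range(len(p)):
        q = list(p)
        q[j] += h
        grads.append((model(*q) - model(*p)) / h)
    total = sum(g * g for g in grads) or 1.0
    return [g * g / total for g in grads]

rng = random.Random(1)
points = [[rng.random(), rng.random()] for _ in range(1000)]
s1_for_p2 = sorted(local_first_order(p)[1] for p in points)
median = s1_for_p2[len(s1_for_p2) // 2]
print(s1_for_p2[0], median, s1_for_p2[-1])
```

For this toy model the index for p2 ranges from near zero (where p1 is small) to about 0.2 (where p1 dominates p2), so the distribution — not any single value — tells the story, which is exactly the kind of insight the abstract attributes to DELSA for the runoff time-delay parameter.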
Schlegel, N.; Larour, E. Y.; Box, J. E.
2015-12-01
During July of 2012, the percentage of the Greenland surface exposed to melt was the largest in recorded history. And, even though evidence of increased melt rates had been captured by remote sensing observations throughout the last decade, this particular event took the community by surprise. How Greenland ice flow will respond to such an event or to increased frequencies of extreme melt events in the future is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification (UQ) tools embedded within the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force surface mass balance (SMB) in various regions of the ice sheet and 2) determine the dynamic response of Greenland outlet glaciers to variations in SMB. First, we perform sensitivity analyses to determine how perturbations in SMB affect model output; results allow us to investigate the locations where variations most significantly affect ice flow, and on what spatial scales. Next, we apply Monte-Carlo style sampling analyses to determine how errors in SMB propagate through the model as uncertainties in estimates of Greenland ice discharge and regional mass balance. This work is performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere Program.
Model parameters estimation and sensitivity by genetic algorithms
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca
2003-01-01
In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of those parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive of the best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and only later those which have little influence on the model outputs. In this sense, besides efficiently estimating the parameter values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The
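The stabilization effect described above can be demonstrated with a deliberately simple real-coded GA on a toy objective in which one parameter is far more influential than the other: the influential parameter's values in the archive of best solutions end up tightly clustered, while the sloppy parameter keeps drifting. Operators, rates, and the objective are illustrative, not those of the cited reactor study:

```python
# Toy GA showing that the strongly constrained parameter (a) stabilizes
# to a much smaller spread than the weakly constrained one (b).
import random

def objective(a, b):
    # Parameter a is strongly constrained by the "data", b hardly at all.
    return 100.0 * (a - 0.3) ** 2 + 0.01 * (b - 0.7) ** 2

def spread(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

rng = random.Random(0)
pop = [[rng.random(), rng.random()] for _ in range(60)]
for generation in range(80):
    pop.sort(key=lambda p: objective(*p))
    parents = pop[:30]                                   # truncation selection
    children = []
    while len(children) < 30:
        pa, pb = rng.sample(parents, 2)
        child = [(x + y) / 2.0 for x, y in zip(pa, pb)]  # arithmetic crossover
        j = rng.randrange(2)                             # mutate one coordinate
        child[j] = min(1.0, max(0.0, child[j] + rng.gauss(0.0, 0.05)))
        children.append(child)
    pop = parents + children
pop.sort(key=lambda p: objective(*p))
elite = pop[:30]                          # final archive of best solutions
spread_a = spread([p[0] for p in elite])
spread_b = spread([p[1] for p in elite])
print(spread_a, spread_b)
```

Ranking parameters by how early (and how tightly) their archive spread collapses gives the qualitative importance ordering the abstract describes, as a by-product of the estimation run.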
A computational model that predicts behavioral sensitivity to intracortical microstimulation
Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.
2017-02-01
Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
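A toy version of this modeling approach — simulate stochastic spike counts of a small population driven by a stimulation amplitude, then apply an ideal observer to read out detectability — can be sketched as follows. The rates, gain, and population size are invented for illustration, not the paper's fitted parameters:

```python
# Ideal-observer sketch for ICMS detection: in a two-interval task the
# observer picks the interval with the larger summed spike count.
# All numeric parameters are hypothetical.
import math
import random

def poisson(rng, lam):
    """Poisson draw via Knuth's algorithm (adequate for small rates)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def population_count(rng, amplitude, n_neurons=20, base_rate=2.0, gain=0.05):
    """Summed spike count of a population whose rate grows with amplitude."""
    return sum(poisson(rng, base_rate + gain * amplitude)
               for _ in range(n_neurons))

def percent_correct(rng, amplitude, n_trials=2000):
    correct = 0
    for _ in range(n_trials):
        stim = population_count(rng, amplitude)
        blank = population_count(rng, 0.0)
        if stim > blank or (stim == blank and rng.random() < 0.5):
            correct += 1
    return correct / n_trials

rng = random.Random(7)
pc = {amp: percent_correct(rng, amp) for amp in (0.0, 20.0, 60.0)}
print(pc)
```

Detection performance rises from chance at zero amplitude toward ceiling at high amplitude, and sweeping the amplitude traces out a psychometric curve of the kind the full model summarizes across stimulation conditions.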
Sensitivity Analysis of a Riparian Vegetation Growth Model
Directory of Open Access Journals (Sweden)
Michael Nones
2016-11-01
Full Text Available The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model to represent the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
Understanding earth system models: how Global Sensitivity Analysis can help
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts on uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
A Sensitivity Analysis of fMRI Balloon Model
Zayane, Chadia
2015-04-22
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have in some way added knowledge, either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and complements the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide
Energy Technology Data Exchange (ETDEWEB)
Wainwright, Haruko Murakami [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Univ. of California, Berkeley, CA (United States); Finsterle, Stefan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Univ. of California, Berkeley, CA (United States)
2016-07-15
This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.
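The Morris one-at-a-time (OAT) screening method named in this record can be sketched compactly: random trajectories step through a discretized parameter grid one factor at a time, each step yields an "elementary effect", and the mean absolute effect per parameter (μ*) ranks influence. The three-parameter toy function is illustrative, not an iTOUGH2 model:

```python
# Compact Morris OAT elementary-effects sketch. The toy model and the
# trajectory count are illustrative only.
import random

def morris_mu_star(f, n_params, n_trajectories=50, levels=4, rng=None):
    rng = rng or random.Random()
    delta = levels / (2.0 * (levels - 1))   # standard Morris step
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        # Random grid base point, restricted so x + delta stays in [0, 1].
        x = [rng.randrange(levels // 2) / (levels - 1)
             for _ in range(n_params)]
        order = list(range(n_params))
        rng.shuffle(order)
        y = f(x)
        for j in order:                     # move one factor at a time
            x_new = list(x)
            x_new[j] += delta
            y_new = f(x_new)
            effects[j].append((y_new - y) / delta)
            x, y = x_new, y_new
    return [sum(abs(e) for e in es) / len(es) for es in effects]

def toy_model(x):
    return 5.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

mu_star = morris_mu_star(toy_model, 3, rng=random.Random(3))
print(mu_star)
```

μ* recovers the expected ranking (x[0] ≫ x[1] ≫ x[2]) at a cost of only (n_params + 1) model runs per trajectory, which is why Morris is typically used for screening before a more expensive Sobol' analysis.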
Pey, Alexis; Catanéo, Jérôme; Forcioli, Didier; Merle, Pierre-Laurent; Furla, Paola
2013-07-01
The only symbiotic Mediterranean gorgonian, Eunicella singularis, has faced several mortality events connected to abnormally high temperatures. Since thermotolerance data remain scarce, heat-induced necrosis was monitored in aquaria by morphometric analysis. Gorgonian tips were sampled at two sites, the Medes (Spain) and Riou (France) Islands, and at two depths, -15 m and -35 m. Although the populations came from contrasting thermal regimes, seawater above 28 °C led to rapid and complete tissue necrosis in all four. However, at 27 °C, the time needed to reach 50% tissue necrosis allowed us to classify samples into three classes of thermal sensitivity. Irrespective of depth, Medes specimens were either very sensitive or resistant, while Riou fragments presented a medium sensitivity. Microsatellite analysis revealed that host and symbiont were genetically differentiated between sites, but not between depths. Finally, these genetic differentiations were not directly correlated with a specific thermal sensitivity, whose molecular bases remain to be discovered. Copyright © 2013 Académie des sciences. Published by Elsevier SAS. All rights reserved.
Varroa destructor continues to threaten colonies of European honey bees. General hygiene and the more specific Varroa Sensitive Hygiene (VSH) provide resistance to the Varroa mite in a number of stocks. In this study, Russian (RHB) and Italian honey bees were assessed for the VSH trait. Two...
Isoprene emissions modelling for West Africa: MEGAN model evaluation and sensitivity analysis
Directory of Open Access Journals (Sweden)
J. Ferreira
2010-09-01
Full Text Available Isoprene emissions are the largest source of reactive carbon to the atmosphere, with the tropics being a major source region. These natural emissions are expected to change with changing climate and human impact on land use. As part of the African Monsoon Multidisciplinary Analyses (AMMA) project, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) has been used to estimate the spatial and temporal distribution of isoprene emissions over the West African region. During the AMMA field campaign, carried out in July and August 2006, isoprene mixing ratios were measured on board the FAAM BAe-146 aircraft. These data have been used to make a qualitative evaluation of the model performance.
MEGAN was first applied to a large area covering much of West Africa, from the Gulf of Guinea in the south to the desert in the north, and was able to capture the large-scale spatial distribution of isoprene emissions as inferred from the observed isoprene mixing ratios. In particular, the model captures the transition from the forested area in the south to the bare soils in the north, but some discrepancies have been identified over the bare soil, mainly due to the emission factors used. Sensitivity analyses were performed to assess the model response to changes in driving parameters, namely Leaf Area Index (LAI), Emission Factors (EF), temperature and solar radiation.
A high-resolution simulation was made of a limited area south of Niamey, Niger, where the highest concentrations of isoprene were observed. This is used to evaluate the model's ability to simulate smaller-scale spatial features and to examine the influence of the driving parameters on an hourly basis through a case study of a flight on 17 August 2006.
This study highlights the complex interactions between land surface processes and the meteorological dynamics and chemical composition of the PBL. This has implications for quantifying the impact of biogenic emissions
Sensitivity Study on Aging Elements Using Degradation Model
Energy Technology Data Exchange (ETDEWEB)
Kim, Man-Woong; Lee, Sang-Kyu; Kim, Hyun-Koon; Ryu, Yong-Ho [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Choi, Yong Won; Park, Chang Hwan; Lee, Un Chul [Seoul National Univ., Seoul (Korea, Republic of)
2008-05-15
To evaluate the effects on safety margins of performance degradation of systems and components due to ageing in CANDU reactors, the ageing elements of those systems and components must be identified, and a degradation model must be developed for each element to adequately predict its ageing over the operating years. However, a degradation model is not an independent parameter when assessing changes in safety margin due to ageing. For example, the moderator temperature coefficient (MTC) is an important factor in power distribution and is affected by coolant flow rate; hence, all ageing elements related to the flow rate in different systems or components could influence the MTC. It is therefore necessary to identify the major elements affecting the safety margin. In this regard, this study investigates these coupled effects on the safety margin by conducting a sensitivity analysis.
Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2017-12-01
This study proposes a mathematical model of the anthroponotic visceral leishmaniasis epidemic with a saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. The disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
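Sensitivity analysis of a reproduction number, as described above, is commonly carried out with normalized forward sensitivity indices, S_p = (p / R0) · (∂R0/∂p). A minimal sketch, using an invented illustrative R0 expression (not the paper's leishmaniasis model) and central finite differences:

```python
# Hypothetical sketch: normalized forward sensitivity indices of a
# reproduction number, S_p = (p / R0) * dR0/dp, estimated by central
# finite differences. The R0 expression is an invented example,
# not the model from the paper.
def r0(beta, b, mu, delta):
    # illustrative reproduction number: transmission over removal rates
    return beta * b / (mu * (mu + delta))

def sensitivity_index(f, params, name, h=1e-6):
    p = params[name]
    hi = dict(params, **{name: p * (1 + h)})
    lo = dict(params, **{name: p * (1 - h)})
    dfdp = (f(**hi) - f(**lo)) / (2 * p * h)   # central difference
    return p * dfdp / f(**params)

params = {"beta": 0.3, "b": 2.0, "mu": 0.05, "delta": 0.1}
for name in params:
    print(name, round(sensitivity_index(r0, params, name), 3))
```

Parameters with index magnitude near 1 (here beta and b) are the prime targets for control strategies; a negative index (mu, delta) means increasing that parameter reduces R0.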
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Sensitivity analysis of the terrestrial food chain model FOOD III
International Nuclear Information System (INIS)
Zach, Reto.
1980-10-01
As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)
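The percentage-variation screening used in the analysis above can be sketched generically: perturb each parameter one at a time by a fixed percentage and rank parameters by the relative change in the predicted dose. The toy dose model below is an invented placeholder, not the FOOD III pathway model:

```python
# One-at-a-time (OAT) screening sketch: perturb each parameter by a fixed
# percentage and rank parameters by relative output change. The dose model
# is an invented placeholder, not FOOD III.
def dose(params):
    # nonlinear toy: dose = concentration^1.5 * intake * dose_factor
    return params["conc"] ** 1.5 * params["intake"] * params["dose_factor"]

def oat_ranking(model, base, pct=0.25):
    base_out = model(base)
    effects = {}
    for name, value in base.items():
        perturbed = dict(base, **{name: value * (1 + pct)})
        effects[name] = abs(model(perturbed) - base_out) / abs(base_out)
    return sorted(effects.items(), key=lambda kv: kv[1], reverse=True)

base = {"conc": 1.2, "intake": 0.5, "dose_factor": 3.0e-8}
for name, effect in oat_ranking(dose, base):
    print(name, round(effect, 3))
```

Parameters entering the model nonlinearly (here `conc`) show a larger relative effect than those entering linearly, which is exactly the kind of ranking such a screening produces.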
International Nuclear Information System (INIS)
Reeves, J.H.; Arthur, R.J.; Brodzinski, R.L.
1992-06-01
Cobalt foils and stainless steel samples were analyzed for induced 60Co activity with both an ultra-low background germanium gamma-ray spectrometer and a large NaI(Tl) multidimensional spectrometer, both of which use electronic anticoincidence shielding to reduce background counts resulting from cosmic rays. Aluminum samples were analyzed for 22Na. The results, in addition to the relative sensitivities and precisions afforded by the two methods, are presented.
Sensitivity of precipitation to parameter values in the community atmosphere model version 5
Energy Technology Data Exchange (ETDEWEB)
Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael
2014-03-01
One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 runs at 2° resolution, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.
Directory of Open Access Journals (Sweden)
Benoit ePallas
2013-11-01
Full Text Available The ability to assimilate C and allocate non-structural carbohydrates (NSC) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as well as the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species, and that modeling is a great opportunity to analyze such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.
Impact of sensor and measurement timing errors on model-based insulin sensitivity.
Pretty, Christopher G; Signal, Matthew; Fisk, Liam; Penning, Sophie; Le Compte, Aaron; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2014-05-01
A model-based insulin sensitivity parameter (SI) is often used in glucose-insulin system models to define the glycaemic response to insulin. As a parameter identified from clinical data, insulin sensitivity can be affected by blood glucose (BG) sensor error and measurement timing error, which can subsequently impact analyses or glycaemic variability during control. This study assessed the impact of both measurement timing and BG sensor errors on identified values of SI and its hour-to-hour variability within a common type of glucose-insulin system model. Retrospective clinical data were used from 270 patients admitted to the Christchurch Hospital ICU between 2005 and 2007 to identify insulin sensitivity profiles. We developed error models for the Abbott Optium Xceed glucometer and measurement timing from clinical data. The effect of these errors on the re-identified insulin sensitivity was investigated by Monte-Carlo analysis. The results of the study show that timing errors in isolation have little clinically significant impact on identified SI level or variability. The clinical impact of changes to SI level induced by combined sensor and timing errors is likely to be significant during glycaemic control. Identified values of SI were mostly (90th percentile) within 29% of the true value when influenced by both sources of error. However, these effects may be overshadowed by physiological factors arising from the critical condition of the patients or other under-modelled or un-modelled dynamics. Thus, glycaemic control protocols that are designed to work with data from glucometers need to be robust to these errors and not be too aggressive in dosing insulin. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
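The error-propagation idea in the study above, re-identifying a parameter many times under synthetic sensor noise, can be sketched with a deliberately simplified stand-in identification. The formula and all numbers below are invented for illustration; this is not the paper's glucose-insulin model:

```python
import random
import statistics

# Monte-Carlo sketch: apply multiplicative sensor noise to two "true" blood
# glucose readings, re-identify a toy sensitivity-like parameter each time,
# and report the 90th-percentile relative error. The identification formula
# is an invented stand-in, not the paper's model-based SI.
def identify_si(g0, g1, insulin, dt):
    # toy: fractional glucose decay per unit insulin per unit time
    return (g0 - g1) / (g0 * insulin * dt)

def mc_error_90th(n=5000, cv=0.05, seed=3):
    rng = random.Random(seed)
    g0_t, g1_t, insulin, dt = 8.0, 6.0, 3.0, 1.0
    si_true = identify_si(g0_t, g1_t, insulin, dt)
    errors = []
    for _ in range(n):
        g0 = g0_t * (1 + rng.gauss(0, cv))  # multiplicative sensor error
        g1 = g1_t * (1 + rng.gauss(0, cv))
        errors.append(abs(identify_si(g0, g1, insulin, dt) - si_true) / si_true)
    return statistics.quantiles(errors, n=10)[8]  # 90th percentile

print(round(mc_error_90th(), 3))
```

Even a modest 5% sensor coefficient of variation inflates the identified-parameter error well beyond 5%, because the identification divides a noisy difference of two noisy readings, which mirrors the paper's point that control protocols must be robust to such errors.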
DEFF Research Database (Denmark)
Yuan, Hao; Sin, Gürkan
2011-01-01
Uncertainty and sensitivity analyses are carried out to investigate the predictive accuracy of the filtration models for describing non-Fickian transport and hyperexponential deposition. Five different modeling approaches, involving the elliptic equation with different types of distributed filtration coefficients and the CTRW equation expressed in Laplace space, are selected to simulate eight experiments. These experiments involve both porous media and colloid-medium interactions of different heterogeneity degrees. The uncertainty of elliptic equation predictions with distributed filtration coefficients is larger than that with a single filtration coefficient. The uncertainties of model predictions from the elliptic equation and the CTRW equation in Laplace space are minimal for solute transport. Higher uncertainties of parameter estimation and model outputs are observed in the cases with the porous…
Sensitivity analysis of the noise-induced oscillatory multistability in Higgins model of glycolysis
Ryashko, Lev
2018-03-01
A phenomenon of the noise-induced oscillatory multistability in glycolysis is studied. As a basic deterministic skeleton, we consider the two-dimensional Higgins model. The noise-induced generation of mixed-mode stochastic oscillations is studied in various parametric zones. Probabilistic mechanisms of the stochastic excitability of equilibria and noise-induced splitting of randomly forced cycles are analysed by the stochastic sensitivity function technique. A parametric zone of supersensitive Canard-type cycles is localized and studied in detail. It is shown that the generation of mixed-mode stochastic oscillations is accompanied by the noise-induced transitions from order to chaos.
Sensitivity analysis using the FRAPCON-1/EM: development of a calculation model for licensing
International Nuclear Information System (INIS)
Chapot, J.L.C.
1985-01-01
The FRAPCON-1/EM is a version of the FRAPCON-1 code that analyses fuel rod performance under normal operating conditions. This version yields conservative results and is used by the NRC in its licensing activities. A sensitivity analysis was performed to determine the combination of models in FRAPCON-1/EM that yields the most conservative results for a typical Angra-1 reactor fuel rod. The present analysis showed that this code can be used as a calculation tool for the licensing of the Angra-1 reload. (F.E.) [pt
Random regression analyses using B-splines to model growth of Australian Angus cattle
Directory of Open Access Journals (Sweden)
Meyer Karin
2005-09-01
Full Text Available Regression on basis functions of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error.
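The B-spline basis at the heart of such regressions can be evaluated with the Cox-de Boor recursion. A minimal pure-Python sketch, using quadratic splines with clamped knots at 0, 200, 400, 600 and 821 days to mirror the knot placement in the abstract (the mixed-model fit itself is omitted):

```python
# Cox-de Boor recursion for B-spline basis functions (order k = degree + 1).
# The clamped knot vector echoes the quadratic fit with knots at 0, 200,
# 400, 600 and 821 days described above.
def bspline_basis(i, k, t, knots):
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    if knots[i + k - 1] > knots[i]:           # guard repeated knots
        out += (t - knots[i]) / (knots[i + k - 1] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k] > knots[i + 1]:
        out += (knots[i + k] - t) / (knots[i + k] - knots[i + 1]) \
               * bspline_basis(i + 1, k - 1, t, knots)
    return out

degree = 2
interior = [0.0, 200.0, 400.0, 600.0, 821.0]
knots = [interior[0]] * degree + interior + [interior[-1]] * degree
n_basis = len(knots) - (degree + 1)  # 6 basis functions

# basis values at age 300 days; B-splines form a partition of unity
row = [bspline_basis(i, degree + 1, 300.0, knots) for i in range(n_basis)]
print([round(v, 3) for v in row], "sum =", round(sum(row), 6))
```

Each weight record contributes one such row to the design matrix; the local support of B-splines (at most degree + 1 nonzero entries per row) is what makes them robust to the "end-of-range" problems mentioned above.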
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
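The first (screening) step of the two-step approach can be sketched with a simple Morris-style elementary-effects estimate. The linear toy model below is an assumption for illustration, standing in for an expensive simulator such as the vascular-access model:

```python
import random

# Morris-style screening sketch: accumulate elementary effects over random
# one-at-a-time trajectories in the unit hypercube, then rank parameters by
# the mean absolute effect (mu*). The toy model is an invented stand-in.
def morris_mu_star(model, n_params, n_traj=50, delta=0.2, seed=1):
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        y = model(x)
        for i in rng.sample(range(n_params), n_params):  # random OAT order
            x2 = list(x)
            x2[i] += delta
            y2 = model(x2)
            effects[i].append(abs(y2 - y) / delta)
            x, y = x2, y2
    return [sum(e) / len(e) for e in effects]

toy = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
print([round(m, 2) for m in morris_mu_star(toy, 3)])  # x0 dominates, x2 inert
```

Each trajectory costs only n_params + 1 model runs, which is why Morris screening is a cheap way to shortlist parameters before the far more expensive variance-based (gPCE or Saltelli) step.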
Levy, M. C.
2013-12-01
Evapotranspiration (ET) is a dominant component of the global water balance and in the study of hydroclimatic effects of climate change. However, its computation remains challenging due to the multiple environmental factors that influence the magnitude of ET flux. Therefore, understanding the sensitivity of ET models to changes in climate and vegetation inputs remains a major concern for hydrologists, biometeorologists, and climatologists. To date, sensitivity analyses (SAs) of evapotranspiration (ET) models are incomplete on two counts: 1) contemporary, data-driven SAs do not account for the effects of both climate and vegetation input variables on model output (ET estimates); and 2) SAs do not account for the effects of input variable correlation on model output. This is problematic because of the potentially dominant role of vegetation in controlling ET, and the non-trivial interactions between climate variables, and climate and vegetation variables. Ignoring the role of interactions between variables limits the value of SAs for reducing model dimensionality and guiding model calibration, and may lead to incorrect assessments of environmental system response to climate change, where the synchronies between climate variables change over time and space. The problems addressed by this study are the issues identified above: the lack of accounting for both climate and vegetation inputs, and correlated inputs, on ET model SAs. This study: 1) performs a SA of the standardized American Society of Civil Engineers (ASCE) Penman-Monteith (PM) equation for reference ET to both climate and vegetation variables using a mixed empirical and simulation based global Sobol' SA; and 2) performs a SA of ASCE PM reference ET to both climate and vegetation variables through a simulation-based analysis using a new Sobol' SA analogue developed for models with correlated input variables. At the time of completion, this study constitutes the first use of a Sobol' SA (Sobol', 2001
Aldebert, Clement; Kooi, Bob W; Nerini, David; Poggiale, Jean-Christophe
2018-03-14
Many current issues in ecology require predictions made by mathematical models, which are built on somewhat arbitrary choices. Their consequences are assessed by sensitivity analysis, which quantifies how changes in model parameters propagate into uncertainty in model predictions. An extension called structural sensitivity analysis deals with changes in the mathematical description of complex processes like predation. Such processes are described at the population scale by a specific mathematical function chosen from among similar ones, a choice that can strongly drive model predictions. However, structural sensitivity has only been studied in simple theoretical models. Here, we ask whether it is a problem of oversimplified models. We found in predator-prey models describing chemostat experiments that these models are less structurally sensitive to the choice of a specific functional response if they include mass-balance resource dynamics and individual maintenance. Neglecting these processes in an ecological model (for instance by using the well-known logistic growth equation) is not only an inappropriate description of the ecological system, but also a source of more uncertain predictions. Copyright © 2018. Published by Elsevier Ltd.
USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES
Directory of Open Access Journals (Sweden)
Constantin ANGHELACHE
2011-10-01
Full Text Available The article presents the fundamental aspects of linear regression, as a toolbox which can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.
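The parameter estimation the article describes, fitting y = a + b·x by ordinary least squares, reduces to two closed-form formulas. A minimal sketch on made-up data:

```python
# Ordinary least squares for the simple linear regression y = a + b*x:
# b = Sxy / Sxx, a = mean(y) - b * mean(x). The data below are invented.
def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

x = [1, 2, 3, 4, 5]                 # e.g. years
y = [2.1, 3.9, 6.2, 7.8, 10.1]      # e.g. an indicator growing roughly as 2x
a, b = ols(x, y)
print(f"intercept={a:.2f} slope={b:.2f}")
```

The statistical tests the article mentions (e.g. a t-test on b) are then built from the residuals y - (a + b·x); heteroskedasticity shows up as residual variance that changes systematically with x.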
Global sensitivity analysis of a dynamic model for gene expression in Drosophila embryos
McCarthy, Gregory D.; Drewell, Robert A.
2015-01-01
It is well known that gene regulation is a tightly controlled process in early organismal development. However, the roles of key processes involved in this regulation, such as transcription and translation, are less well understood, and mathematical modeling approaches in this field are still in their infancy. In recent studies, biologists have taken precise measurements of protein and mRNA abundance to determine the relative contributions of key factors involved in regulating protein levels in mammalian cells. We now approach this question from a mathematical modeling perspective. In this study, we use a simple dynamic mathematical model that incorporates terms representing transcription, translation, mRNA and protein decay, and diffusion in an early Drosophila embryo. We perform global sensitivity analyses on this model using various different initial conditions and spatial and temporal outputs. Our results indicate that transcription and translation are often the key parameters to determine protein abundance. This observation is in close agreement with the experimental results from mammalian cells for various initial conditions at particular time points, suggesting that a simple dynamic model can capture the qualitative behavior of a gene. Additionally, we find that parameter sensitivities are temporally dynamic, illustrating the importance of conducting a thorough global sensitivity analysis across multiple time points when analyzing mathematical models of gene regulation. PMID:26157608
Modeling high-efficiency quantum dot sensitized solar cells.
González-Pedro, Victoria; Xu, Xueqing; Mora-Seró, Iván; Bisquert, Juan
2010-10-26
With energy conversion efficiencies in continuous growth, quantum dot sensitized solar cells (QDSCs) are currently under an increasing interest, but there is an absence of a complete model for these devices. Here, we compile the latest developments in this kind of cells in order to attain high efficiency QDSCs, modeling the performance. CdSe QDs have been grown directly on a TiO(2) surface by successive ionic layer adsorption and reaction to ensure high QD loading. ZnS coating and previous growth of CdS were analyzed. Polysulfide electrolyte and Cu(2)S counterelectrodes were used to provide higher photocurrents and fill factors, FF. Incident photon-to-current efficiency peaks as high as 82%, under full 1 sun illumination, were obtained, which practically overcomes the photocurrent limitation commonly observed in QDSCs. High power conversion efficiency of up to 3.84% under full 1 sun illumination (V(oc) = 0.538 V, j(sc) = 13.9 mA/cm(2), FF = 0.51) and the characterization and modeling carried out indicate that recombination has to be overcome for further improvement of QDSC.
Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G
2014-11-01
Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
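The OFAT design itself can be sketched with a deliberately small stand-in: a two-tracer, three-source mixing model solved by grid search, with one input assumption varied at a time. All tracer values are invented, and the paper's Bayesian machinery is not reproduced:

```python
# OFAT sketch on a toy three-source, two-tracer mixing model: apportion the
# mixture by grid search over source proportions (summing to 1), then vary
# one assumption at a time (each source's tracer-1 signature, +10%) and
# watch how the estimated apportionment shifts. All values are invented.
def unmix(sources, mixture, step=0.01):
    names = list(sources)
    best, best_err = None, float("inf")
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            p = (i * step, j * step, 1 - (i + j) * step)
            err = sum((sum(pk * sources[nm][t] for pk, nm in zip(p, names))
                       - mixture[t]) ** 2 for t in range(len(mixture)))
            if err < best_err:
                best, best_err = dict(zip(names, p)), err
    return best

SOURCES = {"topsoil": (10.0, 1.0), "verge": (4.0, 5.0), "subsurface": (1.0, 9.0)}
MIXTURE = (3.4, 6.6)  # consistent with ~20/20/60% source contributions

print("baseline   ", unmix(SOURCES, MIXTURE))
for name in SOURCES:  # one factor at a time: inflate one tracer-1 signature
    t1, t2 = SOURCES[name]
    varied = dict(SOURCES, **{name: (1.1 * t1, t2)})
    print(f"{name} +10%", unmix(varied, MIXTURE))
```

Comparing each varied run against the baseline isolates the influence of that single assumption on the apportionment, which is the essence of the OFAT comparison across the 13 model versions described above.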
Sensitivity of modeled ozone concentrations to uncertainties in biogenic emissions
International Nuclear Information System (INIS)
Roselle, S.J.
1992-06-01
The study examines the sensitivity of regional ozone (O3) modeling to uncertainties in biogenic emissions estimates. The United States Environmental Protection Agency's (EPA) Regional Oxidant Model (ROM) was used to simulate the photochemistry of the northeastern United States for the period July 2-17, 1988. An operational model evaluation showed that ROM had a tendency to underpredict O3 when observed concentrations were above 70-80 ppb and to overpredict O3 when observed values were below this level. On average, the model underpredicted daily maximum O3 by 14 ppb. Spatial patterns of O3, however, were reproduced favorably by the model. Several simulations were performed to analyze the effects of uncertainties in biogenic emissions on predicted O3 and to study the effectiveness of two strategies of controlling anthropogenic emissions for reducing high O3 concentrations. Biogenic hydrocarbon emissions were adjusted by a factor of 3 to account for the existing range of uncertainty in these emissions. The impact of biogenic emission uncertainties on O3 predictions depended upon the availability of NOx. In some extremely NOx-limited areas, increasing the amount of biogenic emissions decreased O3 concentrations. Two control strategies were compared in the simulations: (1) reduced anthropogenic hydrocarbon emissions, and (2) reduced anthropogenic hydrocarbon and NOx emissions. The simulations showed that hydrocarbon emission controls were more beneficial to the New York City area, but that combined NOx and hydrocarbon controls were more beneficial to other areas of the Northeast. Hydrocarbon controls were more effective as biogenic hydrocarbon emissions were reduced, whereas combined NOx and hydrocarbon controls were more effective as biogenic hydrocarbon emissions were increased.
Directory of Open Access Journals (Sweden)
Jennifer M Tsuruda
Full Text Available Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map, and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has been previously shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.
Directory of Open Access Journals (Sweden)
J. D. Herman
2013-07-01
Full Text Available The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
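The Morris screening approach described in this abstract can be illustrated with a minimal sketch. The toy three-parameter model, trajectory count, and step size below are illustrative assumptions, not the HL-RDHM configuration:

```python
import numpy as np

def morris_elementary_effects(model, bounds, r=20, delta=0.5, seed=0):
    """Morris screening: for r random base points, step each factor one at
    a time by `delta` (in unit-hypercube units) and record the elementary
    effect on the model output."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in the unit cube
        y0 = model(lo + x * (hi - lo))
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                           # perturb one factor only
            y1 = model(lo + xp * (hi - lo))
            effects[i].append((y1 - y0) / delta)
    ee = np.array(effects)
    # mu* (mean |EE|) ranks overall influence; sigma flags nonlinearity/interactions
    return np.abs(ee).mean(axis=1), ee.std(axis=1)

# Toy response: strongly driven by p0, weakly and nonlinearly by p1, not by p2
model = lambda p: 5.0 * p[0] + 0.5 * p[1] ** 2 + 0.0 * p[2]
mu_star, sigma = morris_elementary_effects(model, [(0.0, 1.0)] * 3)
```

Factors with a large mean absolute elementary effect (mu*) are screened as sensitive, which is how a handful of trajectories can replace the millions of evaluations a full variance decomposition would need.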
Brouwer, A F; Grimberg, S J; Powers, S E
2012-12-01
The Dynamic Anaerobic Reactor & Integrated Energy System (DARIES) model has been developed as a biogas and electricity production model of a dairy farm anaerobic digester system. DARIES, which incorporates the Anaerobic Digester Model No. 1 (ADM1) and simulations of both combined heat and power (CHP) and digester heating systems, may be run in either completely mixed or plug flow reactor configurations. DARIES biogas predictions were shown to be statistically coincident with measured data from eighteen full-scale dairy operations in the northeastern United States. DARIES biogas predictions were more accurate than predictions made by the U.S. AgSTAR model FarmWare 3.4. DARIES electricity production predictions were verified against data collected by the NYSERDA DG/CHP Integrated Data System. Preliminary sensitivity analysis demonstrated that DARIES output was most sensitive to influent flow rate, chemical oxygen demand (COD), and biodegradability, and somewhat sensitive to hydraulic retention time and digester temperature.
Modelling sensitivity and uncertainty in a LCA model for waste management systems - EASETECH
DEFF Research Database (Denmark)
Damgaard, Anders; Clavreul, Julie; Baumeister, Hubert
2013-01-01
In the new model, EASETECH, developed for LCA modelling of waste management systems, a general approach to sensitivity and uncertainty assessment for waste management studies has been implemented. First, a general contribution analysis is done through a regular interpretation of inventory and impact...
Sensitivity analysis of alkaline plume modelling: influence of mineralogy
International Nuclear Information System (INIS)
Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.
2010-01-01
Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analyses on both the concrete composition and the clay mineralogical assemblies, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of concrete phases, (2) the type of clayey materials and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both
Directory of Open Access Journals (Sweden)
Chang-Hoon Sim
2018-01-01
Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.
International Nuclear Information System (INIS)
Kim, Kap-Sun; Kim, Jong-Soo; Choi, Kyu-Sup; Shin, Tae-Myung; Yun, Hyun-Do
2010-01-01
In Part 1 of this study, an advanced numerical simulation method was proposed to investigate the impact characteristics of the KN-18 spent nuclear fuel (SNF) transport cask recently developed in Korea and was verified against the experimental results. In this study, sensitivity analyses are carried out using the proposed numerical simulation method to investigate the effects of various modeling and design parameters, such as material model assumptions, modeling methodology, analytical assumptions, and design variables, that can affect the impact characteristics of a cask and the accuracy of the numerical results. These parametric analyses were also performed to provide a basis for correlations with test results that are closer to reality than merely conservative, as a means of benchmarking the numerical models. In addition, the parametric analysis results are compared against the experimental results, and the sensitivities of each parameter are summarized to provide references for the future design and analysis of SNF transport casks.
Probabilistic sensitivity analysis for the 'initial defect in the canister' reference model
International Nuclear Information System (INIS)
Cormenzana, J. L.
2013-08-01
In Posiva Oy's Safety Case 'TURVA-2012' the repository system scenarios leading to radionuclide releases have been identified in Formulation of Radionuclide Release Scenarios. Three potential causes of canister failure and radionuclide release are considered: (i) the presence of an initial defect in the copper shell of one canister that penetrates the shell completely, (ii) corrosion of the copper overpack, which occurs more rapidly if buffer density is reduced, e.g. by erosion, and (iii) shear movement on fractures intersecting the deposition hole. All three failure modes are analysed deterministically in Assessment of Radionuclide Release Scenarios, and for the 'initial defect in the canister' reference model a probabilistic sensitivity analysis (PSA) has been carried out. The main steps of the PSA have been: quantification of the uncertainties in the model input parameters through the creation of probability density functions (PDFs); Monte Carlo simulations of the evolution of the system up to 10^6 years using parameter values sampled from these PDFs (Monte Carlo simulations with 10,000 individual calculations (realisations) have been used in the PSA); quantification of the uncertainty in the model outputs due to uncertainty in the input parameters (uncertainty analysis); and identification of the parameters whose uncertainty has the greatest effect on the uncertainty in the model outputs (sensitivity analysis). Since the biosphere is not included in the Monte Carlo simulations of the system, the model outputs studied are not doses, but total and radionuclide-specific normalised release rates from the near-field and to the biosphere. These outputs are calculated by dividing the activity release rates by the constraints on the activity fluxes to the environment set out by the Finnish regulator. Two different cases are analysed in the PSA: (i) the 'hole forever' case, in which the small hole through the copper overpack remains unchanged during the assessment
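The PSA workflow described above (sample input PDFs, run Monte Carlo realisations, then correlate inputs with outputs) can be sketched generically. The parameter names, distributions, and the toy release expression below are hypothetical illustrations, not the TURVA-2012 models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 10_000  # number of realisations, as in the PSA described above

# Hypothetical input PDFs (for illustration only):
hole_radius = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # defect size
solubility = rng.lognormal(mean=-2.0, sigma=1.0, size=n)   # element solubility
travel_time = rng.uniform(1e3, 1e5, size=n)                # geosphere transit

# Toy stand-in for the near-field release model
release = hole_radius ** 2 * solubility / np.sqrt(travel_time)

# Uncertainty analysis: spread of the output distribution
p5, p50, p95 = np.percentile(release, [5, 50, 95])

# Sensitivity analysis: Spearman rank correlation of each input with the output
for name, x in [("hole_radius", hole_radius),
                ("solubility", solubility),
                ("travel_time", travel_time)]:
    rho = stats.spearmanr(x, release)[0]
    print(f"{name:12s} rho = {rho:+.2f}")
```

Inputs with the largest absolute rank correlation are the ones whose uncertainty dominates the output uncertainty, which is the question the sensitivity-analysis step answers.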
Present status of theories and data analyses of mathematical models for carcinogenesis
International Nuclear Information System (INIS)
Kai, Michiaki; Kawaguchi, Isao
2007-01-01
Reviewed are the basic mathematical models (hazard functions), the present trend of model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the 2-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the model of Armitage-Doll and the generalized MVK (Moolgavkar, Venzon, Knudson, 1971-1990) model of 2-stage clonal expansion have been applied to analyses of carcinogenesis in A-bomb survivors, workers in uranium mines (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In the analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to see the effect, if any, of stage, age at exposure, time progression, etc. In miners and smokers, the stages of initiation, promotion and progression in carcinogenesis are discussed on the basis of the analyses. Other topics include the analyses of workers in a Canadian atomic power plant and of patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process in a quantitative aspect rather than merely describe the process. (R.T.)
Kim, Jiae; Jobe, Ousman; Peachman, Kristina K; Michael, Nelson L; Robb, Merlin L; Rao, Mangala; Rao, Venigalla B
2017-08-01
Development of vaccines capable of eliciting broadly neutralizing antibodies (bNAbs) is a key goal to controlling the global AIDS epidemic. To be effective, bNAbs must block the capture of HIV-1 to prevent viral acquisition and establishment of reservoirs. However, the role of bNAbs, particularly during initial exposure of primary viruses to host cells, has not been fully examined. Using a sensitive, quantitative, and high-throughput qRT-PCR assay, we found that primary viruses were captured by host cells and converted into a trypsin-resistant form in less than five minutes. We discovered, unexpectedly, that bNAbs did not block primary virus capture, although they inhibited the capture of pseudoviruses/IMCs and production of progeny viruses at 48h. Further, viruses escaped bNAb inhibition unless the bNAbs were present in the initial minutes of exposure of virus to host cells. These findings will have important implications for HIV-1 vaccine design and determination of vaccine efficacy. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Analysis of Sea Ice Cover Sensitivity in Global Climate Model
Directory of Open Access Journals (Sweden)
V. P. Parhomenko
2014-01-01
Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the model ice cover, to assess its sensitivity to some model parameters, and to examine atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05 m to 0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness when the clear sea ice and snow albedos are decreased by 0.05 relative to the basic variant shows an ice thickness reduction in the range of 0.2 m to 0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there is a reduction of ice thickness of up to 1 m; however, there is also an area of some increase of the ice layer, mostly in the range of up to 0.2 m (Beaufort Sea). A 0.05 decrease of the sea ice snow albedo leads to a reduction of the average ice thickness of approximately 0.2 m, and this value depends only slightly on the season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/m² in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced in the range of 0.2 m to 0.35 m, with small seasonal changes in this value. The numerical experiments have shown that the ice cover and its seasonal evolution depend rather strongly on the varied parameters
Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins
2014-11-01
Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were identical.
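The Dahlberg formula used in this study to quantify error between duplicate measurements is simple to state; a small sketch assuming its standard form, d = sqrt(Σdᵢ² / 2n), with made-up example widths:

```python
import numpy as np

def dahlberg(first, second):
    """Dahlberg's double-determination error between two measurement runs:
    d = sqrt(sum((x1 - x2)^2) / (2 n))."""
    diff = np.asarray(first) - np.asarray(second)
    return np.sqrt((diff ** 2).sum() / (2 * len(diff)))

# e.g. mesiodistal widths (mm) measured twice on the same models (invented data)
run1 = [8.4, 7.1, 6.9, 9.3, 5.5]
run2 = [8.5, 7.0, 7.0, 9.1, 5.6]
err = dahlberg(run1, run2)
```

The resulting error is in the same units as the measurements, so it can be compared directly against clinically acceptable tolerances.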
Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City
Directory of Open Access Journals (Sweden)
M. Zavala
2009-01-01
Full Text Available The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period, whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.
This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with
Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures
Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.
2017-12-01
Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF), to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty coupled with the differences in water level forecasts from varying bias correction methods are important for water management and long term planning in the Great Lakes region.
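A common bias correction procedure of the kind compared in this study is empirical quantile mapping; the sketch below is a generic illustration on synthetic data, not the GFDL-CM3/WRF output:

```python
import numpy as np

def quantile_map(obs_hist, mod_hist, mod_fut):
    """Empirical quantile mapping: send each future model value to the
    observed value at the same quantile it occupies in the historical
    model distribution."""
    q = np.interp(mod_fut, np.sort(mod_hist),
                  np.linspace(0.0, 1.0, len(mod_hist)))
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(1)
obs_hist = rng.gamma(2.0, 2.0, 5000)              # "observed" supply component
mod_hist = obs_hist * 1.3 + 0.5                   # model with a systematic bias
mod_fut = rng.gamma(2.0, 2.2, 5000) * 1.3 + 0.5   # biased projection, wetter climate

corrected = quantile_map(obs_hist, mod_hist, mod_fut)
```

Because the mapping only reshapes the distribution, the climate-change signal between the historical and future periods is preserved while the systematic offset is removed; as the abstract notes, the choice of such a procedure, and whether it is applied per hydrological component, can materially change the resulting water level projections.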
Yondo, Raul; Andrés, Esther; Valero, Eusebio
2018-01-01
Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey on common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraints handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field, as well as discussions on advanced sampling methodologies, are presented to give a glance at the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial
Testing sensitivity of the LISFLOOD subgrid hydraulic model to SAR image derived information
Wood, Melissa; Bates, Paul; Neal, Jeff; Hostache, Renaud; Matgen, Patrick; Chini, Marco; Giustarini, Laura
2013-04-01
analyse the sensitivity of the model. The results will show which parameters the LISFLOOD subgrid model is most sensitive to for the investigated test case.
N. Sczygiol; R. Dyja
2007-01-01
The presented paper contains an evaluation of the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built with the use of an enthalpy formulation for solidification and an intermediate solid fraction growth model. The model sensitivity is studied with the use of the Morris method, which is one of the global sensitivity methods....
Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies
Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.
2008-01-01
A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been
Vredenberg, W.J.
2011-01-01
In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with
Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.
Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter
2017-12-01
The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that - in addition to vapour pressure - the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes, degradation and uptake in the plant, become more important. Copyright © 2017 Elsevier B.V. All rights reserved.
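The one-by-one (one-at-a-time) procedure described above can be sketched generically; the stand-in volatilisation function and its parameter values below are invented for illustration and do not reproduce the modified PEARL model:

```python
import numpy as np

# Hypothetical stand-in model: indoor-air emission potential as a simple
# function of vapour pressure, ventilation rate, boundary-layer thickness
# and plant uptake (all values illustrative)
def volatilisation(vp=1e-3, vent=5.0, boundary=1e-3, uptake=0.1):
    return vp * vent / boundary * np.exp(-uptake)

reference = volatilisation()  # reference scenario output

# One-at-a-time sensitivity: perturb each input by +10% and report the
# ratio of output variation to input variation (1.0 = proportional response)
for name, kwargs in [("vapour pressure", dict(vp=1.1e-3)),
                     ("ventilation rate", dict(vent=5.5)),
                     ("boundary layer", dict(boundary=1.1e-3)),
                     ("plant uptake", dict(uptake=0.11))]:
    ratio = (volatilisation(**kwargs) / reference - 1) / 0.10
    print(f"{name:16s} sensitivity ratio = {ratio:+.2f}")
```

Ratios near ±1 indicate a proportional response (here vapour pressure, ventilation, and boundary-layer thickness), while ratios near zero indicate weak sensitivity, mirroring the ranking reported in the abstract.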
Brown, Russell W; Schlitt, Marjorie A; Owens, Alex S; DePreter, Caitlynn C; Cummins, Elizabeth D; Kirby, Seth L; Gill, W Drew; Burgess, Katherine C
2018-01-01
The current study analyzed the effects of environmental enrichment versus isolation housing on the behavioral sensitization to nicotine in the neonatal quinpirole (NQ; dopamine D2-like agonist) model of dopamine D2 receptor supersensitivity, a rodent model of schizophrenia. NQ treatment in rats increases dopamine D2 receptor sensitivity throughout the animal's lifetime, consistent with schizophrenia. Animals were administered NQ (1 mg/kg) or saline (NS) from postnatal day (P)1 to P21, weaned, and immediately placed into enriched housing or isolated in wire cages throughout the experiment. Rats were behaviorally sensitized to nicotine (0.5 mg/kg base) or saline every consecutive day from P38 to P45, and brain tissue was harvested at P46. Results revealed that neither housing condition reduced nicotine sensitization in NQ rats, whereas enrichment reduced sensitization to nicotine in NS-treated animals. The nucleus accumbens (NAcc) was analyzed for glial cell line-derived neurotrophic factor (GDNF), a neurotrophin important in dopamine plasticity. Results were complex, and revealed that NAcc GDNF was increased in animals given nicotine, regardless of housing condition. Further, enrichment increased GDNF in NQ rats regardless of adolescent drug treatment and in NS-treated rats given nicotine, but did not increase GDNF in NS-treated controls compared to the isolated housing condition. This study demonstrates that environmental experience has a prominent impact on the behavioral and the neural plasticity NAcc response to nicotine in adolescence. © 2018 S. Karger AG, Basel.
Analysing and controlling the tax evasion dynamics via majority-vote model
International Nuclear Information System (INIS)
Lima, F W S
2010-01-01
Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust because it can be studied using the equilibrium dynamics of the Ising model as well as through the nonequilibrium MVM on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.
Analysing and controlling the tax evasion dynamics via majority-vote model
Energy Technology Data Exchange (ETDEWEB)
Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Física, Universidade Federal do Piauí, 64049-550, Teresina - PI (Brazil)
2010-09-01
Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust because it can be studied using the equilibrium dynamics of the Ising model as well as through the nonequilibrium MVM on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.
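A minimal sketch of the majority-vote dynamics with noise, applied to the Zaklan tax evasion setup on a square lattice (the lattice size, sweep count, and noise level are illustrative; q_c ≈ 0.075 is the commonly cited critical noise for the MVM on the square lattice):

```python
import numpy as np

def mvm_sweep(spins, q, rng):
    """One Monte Carlo sweep of the majority-vote model with noise q on a
    periodic square lattice: a randomly chosen site adopts its neighbourhood
    majority with probability 1 - q and the minority with probability q."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        majority = np.sign(spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        if majority != 0:                    # ties leave the site unchanged
            spins[i, j] = -majority if rng.random() < q else majority

L, q = 32, 0.02                        # noise well below q_c (ordered phase)
rng = np.random.default_rng(0)
spins = np.ones((L, L), dtype=int)     # everyone compliant initially
for _ in range(200):
    mvm_sweep(spins, q, rng)
evasion = (1.0 - spins.mean()) / 2.0   # fraction of evading (-1) agents
```

Below q_c the compliant majority is self-stabilising and evasion stays low; tuning q toward or past the critical noise is what lets the Zaklan-style model produce large evasion fluctuations.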
A model for perception-based identification of sensitive skin.
Richters, R J H; Uzunbajakava, N E; Hendriks, J C M; Bikker, J-W; van Erp, P E J; van de Kerkhof, P C M
2017-02-01
With the high prevalence of sensitive skin (SS) and the lack of strong evidence on pathomechanisms, of consensus on associated symptoms, of proof of the existence of 'general' SS, and of tools to recruit subjects, this topic attracts increasing research attention. To create a model for selecting subjects in studies on SS by identifying a complete set of self-reported SS characteristics and factors discriminatively describing it. A survey (n = 3058) was conducted, comprising questions regarding socio-demographics, atopy, skin characteristics, personal care, degree of self-assessed SS, and subjective and objective reactions to endogenous and exogenous factors. Exploratory factor analysis on 481 questionnaires was performed to identify underlying dimensions, and multivariate logistic regression to find variables contributing to the likelihood of reporting SS. The prevalence of SS was found to be 41%, and 56% of SS subjects report a concomitant atopic condition. The most discriminative items were the eliciting factors toiletries and emotions, and not specific skin symptoms in general. Triggers of different origins seem to elicit SS, and it is not defined by concomitant skin diseases only, suggesting the existence of 'general' SS. A multifactorial questionnaire could be a better diagnostic tool than a one-dimensional provocative test. © 2016 European Academy of Dermatology and Venereology.
Position-sensitive transition edge sensor modeling and results
Energy Technology Data Exchange (ETDEWEB)
Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline
2004-03-11
We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.
Feedbacks, climate sensitivity, and the limits of linear models
Rugenstein, M.; Knutti, R.
2015-12-01
The term "feedback" is used ubiquitously in climate research, but it carries varied meanings in different contexts. From a specific process that locally affects a quantity to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120-member) ensemble GCM and EMIC step-forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method that overcomes internal variability and initial-condition problems, we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing-feedback framework, and better quantifying feedbacks on various timescales and spatial scales, remain high priorities for understanding past and predicting future changes in the climate system.
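The linear forcing-feedback framework discussed above is usually written N = F + λT (TOA imbalance = forcing plus feedback times warming), with λ estimated by Gregory-style regression. A minimal sketch on synthetic data, with forcing and feedback values chosen only for illustration (not results from the ensembles described): a non-constant feedback would show up as curvature in this N-T relation.

```python
# Sketch of the linear forcing-feedback model N = F + lambda*T fitted by
# regression (Gregory method). All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
F = 3.7                           # assumed forcing (W m^-2), ~2xCO2-like
lam = -1.2                        # assumed feedback parameter (W m^-2 K^-1)
T = np.linspace(0.0, 3.0, 50)     # global-mean warming (K)
N = F + lam * T + rng.normal(0, 0.05, T.size)   # noisy TOA imbalance

slope, intercept = np.polyfit(T, N, 1)   # slope -> feedback parameter
ecs = -intercept / slope                 # equilibrium warming where N = 0
print(round(slope, 1), round(ecs, 2))
```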
DEFF Research Database (Denmark)
Sun, Shu; Rappaport, Theodore S.; Thomas, Timothy
2016-01-01
This paper compares three candidate large-scale propagation path loss models for use over the entire microwave and millimeter-wave (mmWave) radio spectrum: the alpha–beta–gamma (ABG) model, the close-in (CI) free-space reference distance model, and the CI model with a frequency-weighted path loss...... the accuracy and sensitivity of these models using measured data from 30 propagation measurement data sets from 2 to 73 GHz over distances ranging from 4 to 1238 m. A series of sensitivity analyses of the three models shows that the physically based two-parameter CI model and three-parameter CIF model offer...
Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan
Milando, Chad W.; Batterman, Stuart A.
2018-05-01
The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.
Directory of Open Access Journals (Sweden)
W. Castaings
2009-04-01
Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables very efficient calculation of the derivatives of an objective function (the response function to be analysed, or cost function to be optimised) with respect to model inputs.
In this contribution, it is shown that variational methods hold potential for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.
It is shown that forward and adjoint sensitivity analyses provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.
For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization's initial condition when the very common dimension-reduction strategy (i.e. scalar multipliers) is adopted.
Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found to be very promising but should be combined with another regularization strategy in order to prevent overfitting.
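The SVD-based dimension-reduction idea above can be sketched in a few lines: when the model's Jacobian is effectively low-rank, a handful of leading singular vectors capture almost all of the parameter-to-response variability. The Jacobian below is synthetic (built to have rank 3), not one from the flood model.

```python
# Sketch: SVD of a model Jacobian reveals the few orthogonal parameter
# directions that dominate the response. The matrix here is a synthetic
# stand-in, constructed from 3 underlying modes.
import numpy as np

rng = np.random.default_rng(1)
# 200 observation times x 50 distributed parameters, rank-3 by construction
J = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))

u, s, vt = np.linalg.svd(J, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))   # numerically significant directions
print(rank)                           # 3: three directions capture everything
```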
International Nuclear Information System (INIS)
Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis were used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The following results were obtained in tests to check the robustness of the analysis techniques: two independent Latin hypercube samples produced similar uncertainty and sensitivity analysis results; setting important variables to best-estimate values produced substantial reductions in uncertainty, while setting the less important variables to best-estimate values had little effect on uncertainty; similar sensitivity analysis results were obtained when the original uniform and loguniform distributions assigned to the 34 imprecisely known input variables were changed to left-triangular distributions and then to right-triangular distributions; and analyses with rank-transformed and logarithmically transformed data produced similar results and substantially outperformed analyses with raw (i.e., untransformed) data.
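A minimal sketch of this sampling-based workflow: draw a Latin hypercube sample, run the model, then use rank correlations as importance measures (echoing the rank-transform result above). The three-variable toy function stands in for, and is far simpler than, the MACCS analysis.

```python
# Hedged sketch: Latin hypercube sampling + rank-correlation sensitivity
# measures on a toy model (not MACCS).
import numpy as np

def latin_hypercube(n, d, rng):
    """One LHS sample of n points in d dimensions on [0, 1)^d."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # one point/stratum
    for j in range(d):
        rng.shuffle(u[:, j])                               # decouple columns
    return u

def rank_corr(x, y):
    """Spearman-style correlation via ranks (no ties expected here)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
X = latin_hypercube(500, 3, rng)
y = 5 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * X[:, 2]   # toy model
sens = [rank_corr(X[:, j], y) for j in range(3)]
print([round(s, 2) for s in sens])                 # variable 0 dominates
```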
Assessment of decision making models in sensitive technology: the nuclear energy case
International Nuclear Information System (INIS)
Silva, Eduardo Ramos Ferreira da
2007-01-01
This work presents a bibliographic review of decision-making processes for sensitive technologies (in both their military and civilian uses), and for nuclear technology in particular. A correlation is drawn between the development of nuclear technology and these decision-making processes, showing that from the 1970s onward such processes were tied to national-security doctrines influenced by the Brazilian War College; every time the national-security doctrine changed, the guiding line of the decision-making process changed with it. For the Brazilian case, these changes are traced from World War II up to the new proposals issued by the Ministry of Defense, as they relate to nuclear technology. The existing models are analysed, with the conclusion that they reflect the current situation with respect to nuclear technology.
Multivariate models for skin sensitization hazard and potency
One of the top priorities being addressed by ICCVAM is the identification and validation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biological events have been well characterized in an adverse outcome pathw...
Liou, Shwu-Ru
2009-01-01
To systematically analyse the Organizational Commitment model and the Theory of Reasoned Action and to determine which concepts better explain nurses' intention to leave their job. The Organizational Commitment model and the Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to the nursing shortage. However, the appropriateness of applying these two models in nursing had not been analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly, whereas they are operationally defined in the Organizational Commitment model. The predictability of the Theory of Reasoned Action is questionable, whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can serve as concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.
A model finite-element to analyse the mechanical behavior of a PWR fuel rod
International Nuclear Information System (INIS)
Galeao, A.C.N.R.; Tanajura, C.A.S.
1988-01-01
A model to analyse the mechanical behavior of a PWR fuel rod is presented. Attention is drawn to the phenomena of pellet-pellet and pellet-cladding contact, treated through an elastic model that includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The contact problem gives rise to a variational formulation that employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author) [pt
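The contact-with-Lagrange-multipliers idea can be shown in one dimension: a spring loaded against a rigid gap, where an active constraint turns the stiffness equation into a small saddle-point (KKT) system. All numbers below are hypothetical and the sketch is vastly simpler than the finite element model above.

```python
# Toy 1-D contact problem with a Lagrange multiplier: minimize
# 0.5*k*u^2 - f*u subject to u <= g. Values are invented placeholders.
import numpy as np

k, f, g = 100.0, 50.0, 0.2     # stiffness, load, gap (hypothetical)
u_free = f / k                 # unconstrained displacement = 0.5 > g
if u_free <= g:
    u, lam = u_free, 0.0       # no contact, multiplier inactive
else:
    # active contact: solve the saddle-point system
    # [ k  1 ] [ u  ]   [ f ]
    # [ 1  0 ] [ lam] = [ g ]
    A = np.array([[k, 1.0], [1.0, 0.0]])
    b = np.array([f, g])
    u, lam = np.linalg.solve(A, b)
print(u, lam)                  # u pinned at the gap, lam = contact force
```

In the fuel-rod model the same structure appears at every candidate contact interface (pellet-pellet, pellet-cladding), with the iterative scheme deciding which constraints are active.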
Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model
Directory of Open Access Journals (Sweden)
Gareau, Alexandre
2016-09-01
Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e. measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters
Caraballo, R.
2016-11-01
According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at these latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may increase the size of present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid under severe magnetic storm conditions. GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration; calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities at almost all substations, and the grid's response to changes in ground conductivity and resistances shows similar, though smaller, effects. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
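At the level of a single grid node, the dependence on the geoelectric field and on network parameters is commonly summarized by the standard engineering relation GIC = a·Ex + b·Ey, where the coefficients (a, b) encode line geometry and resistances. The coefficients and field values below are purely illustrative assumptions, not Uruguayan grid data.

```python
# Hedged sketch of the per-node GIC relation GIC = a*Ex + b*Ey.
# (a, b) summarize network geometry/resistances; values are invented.
a, b = 120.0, -45.0        # A per (V/km), hypothetical network constants
Ex, Ey = 0.8, 0.3          # geoelectric field components (V/km) in a storm
gic = a * Ex + b * Ey      # quasi-DC current entering the transformer
print(round(gic, 1))
```

The sensitivity statements in the abstract correspond to how (a, b) change when grid resistances or topology change, and how (Ex, Ey) change with ground conductivity and dB/dt.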
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
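The basis-set-expansion step can be sketched with an SVD/PCA of stacked output time series: each simulation reduces to a couple of component scores, on which Sobol' indices could then be computed via a meta-model. The "simulations" below are synthetic mixtures of two temporal modes, not La Frasse results.

```python
# Sketch: PCA (via SVD) compresses time-series model outputs to a few
# dominant temporal modes. Synthetic data stand in for landslide runs.
import numpy as np

rng = np.random.default_rng(3)
n_runs, n_t = 40, 300
t = np.linspace(0.0, 1.0, n_t)

# each synthetic run mixes two modes: a trend and an oscillation;
# the mixing amplitudes play the role of uncertain landslide properties
amps = rng.random((n_runs, 2))
Y = amps[:, :1] * t + amps[:, 1:] * np.sin(2 * np.pi * t)

Yc = Y - Y.mean(axis=0)                      # center across runs
u, s, vt = np.linalg.svd(Yc, full_matrices=False)
var_frac = s**2 / np.sum(s**2)
scores = u[:, :2] * s[:2]                    # per-run coordinates, 2 modes
print(round(float(var_frac[:2].sum()), 3))   # ~1.0: two components suffice
```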
Empirical analyses of a choice model that captures ordering among attribute values
DEFF Research Database (Denmark)
Mabit, Stefan Lindhard
2017-01-01
an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel...... vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...
Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R
2016-09-01
A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
Xu, Zexuan; Hu, Bill X.; Ye, Ming
2018-01-01
Long-distance seawater intrusion has been widely observed through the subsurface conduit system in coastal karst aquifers as a source of groundwater contaminant. In this study, seawater intrusion in a dual-permeability karst aquifer with conduit networks is studied by the two-dimensional density-dependent flow and transport SEAWAT model. Local and global sensitivity analyses are used to evaluate the impacts of boundary conditions and hydrological characteristics on modeling seawater intrusion in a karst aquifer, including hydraulic conductivity, effective porosity, specific storage, and dispersivity of the conduit network and of the porous medium. The local sensitivity analysis evaluates the parameters' sensitivities for modeling seawater intrusion, specifically in the Woodville Karst Plain (WKP). A more comprehensive interpretation of parameter sensitivities, including the nonlinear relationship between simulations and parameters, and/or parameter interactions, is addressed in the global sensitivity analysis. The conduit parameters and boundary conditions are important to the simulations in the porous medium because of the dynamical exchanges between the two systems. The sensitivity study indicates that salinity and head simulations in the karst features, such as the conduit system and submarine springs, are critical for understanding seawater intrusion in a coastal karst aquifer. The evaluation of hydraulic conductivity sensitivity in the continuum SEAWAT model may be biased since the conduit flow velocity is not accurately calculated by Darcy's equation as a function of head difference and hydraulic conductivity. In addition, dispersivity is no longer an important parameter in an advection-dominated karst aquifer with a conduit system, compared to the sensitivity results in a porous medium aquifer. In the end, the extents of seawater intrusion are quantitatively evaluated and measured under different scenarios with the variabilities of important parameters
Directory of Open Access Journals (Sweden)
Z. Xu
2018-01-01
Long-distance seawater intrusion has been widely observed through the subsurface conduit system in coastal karst aquifers as a source of groundwater contaminant. In this study, seawater intrusion in a dual-permeability karst aquifer with conduit networks is studied by the two-dimensional density-dependent flow and transport SEAWAT model. Local and global sensitivity analyses are used to evaluate the impacts of boundary conditions and hydrological characteristics on modeling seawater intrusion in a karst aquifer, including hydraulic conductivity, effective porosity, specific storage, and dispersivity of the conduit network and of the porous medium. The local sensitivity analysis evaluates the parameters' sensitivities for modeling seawater intrusion, specifically in the Woodville Karst Plain (WKP). A more comprehensive interpretation of parameter sensitivities, including the nonlinear relationship between simulations and parameters, and/or parameter interactions, is addressed in the global sensitivity analysis. The conduit parameters and boundary conditions are important to the simulations in the porous medium because of the dynamical exchanges between the two systems. The sensitivity study indicates that salinity and head simulations in the karst features, such as the conduit system and submarine springs, are critical for understanding seawater intrusion in a coastal karst aquifer. The evaluation of hydraulic conductivity sensitivity in the continuum SEAWAT model may be biased since the conduit flow velocity is not accurately calculated by Darcy's equation as a function of head difference and hydraulic conductivity. In addition, dispersivity is no longer an important parameter in an advection-dominated karst aquifer with a conduit system, compared to the sensitivity results in a porous medium aquifer. In the end, the extents of seawater intrusion are quantitatively evaluated and measured under different scenarios with the variabilities of
Groundwater flow analyses in preliminary site investigations. Modelling strategy and computer codes
International Nuclear Information System (INIS)
Taivassalo, V.; Koskinen, L.; Meling, K.
1994-02-01
The analyses of groundwater flow comprised a part of the preliminary site investigations which were carried out by Teollisuuden Voima Oy (TVO) for five areas in Finland during 1987-1992. The main objective of the flow analyses was to characterize groundwater flow at the sites. The flow simulations were also used to identify and study uncertainties and inadequacies inherent in the results of earlier modelling phases. The flow analyses were performed for flow conditions similar to the present ones. The modelling approach was based on the concept of an equivalent continuum; each fracture zone and the rock matrix between the zones was, however, treated separately as a hydrogeologic unit. The numerical calculations were carried out with the computer code package FEFLOW, which is based on the finite element method. With the code, two- and one-dimensional elements can also be used by embedding them in a three-dimensional element mesh. A set of new algorithms was developed and employed to create element meshes for FEFLOW. The most useful program in the preliminary site investigations was PAAWI, which adds two-dimensional elements for fracture zones to an existing three-dimensional element mesh. The new algorithms significantly reduced the time required to create spatial discretizations for complex geometries. Three element meshes were created for each site. The boundaries of the regional models coincide with those of the flow models. (55 refs., 40 figs., 1 tab.)
Energy Technology Data Exchange (ETDEWEB)
Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA
2017-05-01
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of each uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty sources for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally, driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional, spatially distributed parameters.
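The variance-decomposition idea underlying the hierarchy can be sketched with a brute-force first-order Sobol' estimator on a toy model. The hierarchical structure, the geostatistical realization-reduction, and the actual Hanford model are all omitted; the function below is a made-up stand-in.

```python
# Hedged sketch of variance-based sensitivity: first-order Sobol' index
# S_j = Var(E[Y|X_j]) / Var(Y) by double-loop Monte Carlo, on a toy model.
import numpy as np

def first_order_index(model, j, d, rng, n_outer=200, n_inner=200):
    """Estimate S_j for input j of a d-input model on [0,1]^d."""
    cond_means = np.empty(n_outer)
    for i in range(n_outer):
        x = rng.random((n_inner, d))
        x[:, j] = rng.random()           # freeze input j at one sampled value
        cond_means[i] = model(x).mean()  # E[Y | X_j = value]
    base = rng.random((n_outer * n_inner, d))
    return cond_means.var() / model(base).var()

rng = np.random.default_rng(4)
toy = lambda x: 4.0 * x[:, 0] + 1.0 * x[:, 1]   # invented stand-in model
S = [first_order_index(toy, j, 3, rng) for j in range(3)]
print([round(s, 2) for s in S])                 # input 0 dominates, 2 inert
```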
Lo, Steson; Andrews, Sally
2015-01-01
Linear mixed-effect models (LMMs) are increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs can address some of the problems identified by Speelman and McGann (2013) about the use of mean data. However, recent guidelines for using LMMs to analyse the skewed reaction time (RT) data collected in many cognitive psychology studies recommend applying non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications and can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effect models (GLMMs) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM, and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMMs to investigate individual differences. PMID:26300841
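The transformation issue cited from Balota et al. can be seen numerically: cell means that are exactly additive on raw RT show a spurious interaction after a log transform. The 2x2 cell means below are invented for illustration, not values from their data.

```python
# Numeric illustration: additive effects on raw RT become interactive
# after log transformation. Cell means (ms) are invented assumptions.
import numpy as np

# rows: stimulus quality (clear, degraded); cols: frequency (high, low)
# frequency effect = 40 ms, quality effect = 60 ms, perfectly additive
rt = np.array([[500.0, 540.0],
               [560.0, 600.0]])

raw_interaction = (rt[1, 1] - rt[1, 0]) - (rt[0, 1] - rt[0, 0])
log_rt = np.log(rt)
log_interaction = (log_rt[1, 1] - log_rt[1, 0]) - (log_rt[0, 1] - log_rt[0, 0])
print(raw_interaction, round(log_interaction, 4))   # 0.0 vs a nonzero value
```

A GLMM with a suitable link and response distribution lets the researcher choose which metric (raw or transformed) carries the theoretical claim, instead of having the transformation impose it.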
IATA-Bayesian Network Model for Skin Sensitization Data
U.S. Environmental Protection Agency — Since the publication of the Adverse Outcome Pathway (AOP) for skin sensitization, there have been many efforts to develop systematic approaches to integrate the...
Wind climate estimation using WRF model output: method and model sensitivities over the sea
DEFF Research Database (Denmark)
Hahmann, Andrea N.; Vincent, Claire Louise; Peña, Alfredo
2015-01-01
setup parameters. The results of the year-long sensitivity simulations show that the long-term mean wind speed simulated by the WRF model offshore in the region studied is quite insensitive to the global reanalysis, the number of vertical levels, and the horizontal resolution of the sea surface...... temperature used as lower boundary conditions. Also, the strength and form (grid vs spectral) of the nudging is quite irrelevant for the mean wind speed at 100 m. Large sensitivity is found to the choice of boundary layer parametrization, and to the length of the period that is discarded as spin-up to produce...
International Nuclear Information System (INIS)
Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi
2003-03-01
This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The study domain covered an area four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, to evaluate how the uncertainties in the hydrogeological structure models and the groundwater simulation results decreased as the investigation progressed, the models were updated and calibrated, for several hydrogeological structure modeling techniques and groundwater flow analysis techniques, based on newly acquired information and knowledge. The findings are as follows. When the models were updated with parameters and structures following the previous year's conditions, there was no major difference between the modeling methods. Model calibration was performed by matching numerical simulations to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters, but each model adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases, further investigation may be required to clarify the hydrogeological structure in more detail. Comparing the research from its beginning to this year, the following conclusions are obtained about the investigation: (1) transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure; (2) effective porosity for calculating pore water velocity of
A model to estimate insulin sensitivity in dairy cows
Directory of Open Access Journals (Sweden)
Holtenius Kjell
2007-10-01
Impairment of the insulin regulation of energy metabolism is considered an etiologic key component of metabolic disturbances, so methods for studying insulin sensitivity are highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method, originally developed for humans, to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it generates good, linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds; all cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occur during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed to obtain different body conditions at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
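As commonly defined in the literature, RQUICKI combines the three plasma concentrations as 1/[log10(glucose) + log10(insulin) + log10(FFA)]. The units (mg/dL, µU/mL, mmol/L) and the sample values below are assumptions for illustration, not data from this study.

```python
# Hedged sketch of the RQUICKI index as commonly defined; units and
# sample values are assumptions, not measurements from the herds above.
import math

def rquicki(glucose_mg_dl, insulin_uU_ml, ffa_mmol_l):
    """RQUICKI = 1 / [log10(glucose) + log10(insulin) + log10(FFA)]."""
    return 1.0 / (math.log10(glucose_mg_dl)
                  + math.log10(insulin_uU_ml)
                  + math.log10(ffa_mmol_l))

print(round(rquicki(60.0, 10.0, 0.5), 3))
```

Lower values of the index correspond to poorer estimated insulin sensitivity, which is the direction of the difference reported for high-body-condition cows.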
International Nuclear Information System (INIS)
Altman, S.J.; Ho, C.K.; Arnold, B.W.; McKenna, S.A.
1995-01-01
Unsaturated flow has been modeled through four cross-sections at Yucca Mountain, Nevada, for the purpose of determining groundwater particle travel times from the potential repository to the water table. This work will be combined with the results of flow modeling in the saturated zone for the purpose of evaluating the suitability of the potential repository under the criteria of 10 CFR 960. One criterion states, in part, that the groundwater travel time (GWTT) from the repository to the accessible environment must exceed 1,000 years along the fastest path of likely and significant radionuclide travel. Sensitivity analyses have been conducted for one geostatistical realization of one cross-section for the purpose of (1) evaluating the importance of hydrological parameters having some uncertainty and (2) examining conceptual models of flow by altering the numerical implementation of the conceptual model (the dual permeability (DK) and equivalent continuum (ECM) models). Results of comparisons of the ECM and DK model are also presented in Ho et al
Energy Technology Data Exchange (ETDEWEB)
Woods, Jason D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Winkler, Jonathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2018-01-31
Moisture buffering of building materials has a significant impact on a building's indoor humidity, and building energy simulations need to model this buffering to accurately predict that humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand its sensitivity to uncertain inputs. In this paper, we model the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, with a set of simple inputs, can give reasonable predictions of the indoor humidity.
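The effective-capacitance baseline the paper criticizes can be written as a single well-mixed zone moisture balance in which all buffering is lumped into one multiplier. A toy forward-Euler sketch, with every number invented for illustration (this is not the paper's EMPD model, only the simpler approach it replaces):

```python
import math

def zone_humidity_swing(cap_multiplier: float, hours: float = 96, dt_h: float = 0.1) -> float:
    """Effective-capacitance moisture balance for one well-mixed zone.

    dw/dt = (ACH * (w_out - w) + gen) / cap_multiplier
    where w is the zone humidity ratio [kg/kg], ACH the air-change rate
    [1/h], gen an indoor source term, and cap_multiplier >= 1 lumps all
    moisture buffering into a single factor.  All parameter values below
    are illustrative assumptions.
    """
    w, ach, w_out_mean = 0.008, 0.5, 0.008
    trace = []
    for k in range(int(hours / dt_h)):
        t = k * dt_h
        w_out = w_out_mean + 0.002 * math.sin(2 * math.pi * t / 24)  # diurnal outdoor cycle
        gen = 0.0002 if (t % 24) < 12 else 0.0                       # daytime generation
        w += dt_h * (ach * (w_out - w) + gen) / cap_multiplier
        trace.append(w)
    day = trace[-int(24 / dt_h):]           # last simulated day
    return max(day) - min(day)              # diurnal humidity swing

swing_unbuffered = zone_humidity_swing(cap_multiplier=1.0)
swing_buffered = zone_humidity_swing(cap_multiplier=15.0)
```

The sketch reproduces the model's one qualitative behavior: a larger capacitance multiplier damps the diurnal humidity swing, but with no physical link to the actual sorption properties of drywall, wood, or carpet, which is the gap the two-layer EMPD model addresses.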
Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco;
2017-01-01
Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (minus 2 to plus 9 degrees Centigrade) and precipitation (minus 50 to plus 50 percent). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: location of the maximum yield (nine classes) and strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidian distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the
Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.
Gul, R; Bernhard, S
2015-11-01
In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in the outputs (pressure and flow) caused by the input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporally dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), vessel diameter (d) and vessel length (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while vessel compliance (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivity for pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity falls in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.
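For a model cheap enough to evaluate many times, the first-order Sobol indices used here can be estimated with the standard pick-freeze (Saltelli-style) scheme. A self-contained numpy sketch on a toy additive model (not the carotid model itself, whose equations the abstract does not give):

```python
import numpy as np

def sobol_first_order(f, n_params, n_samples=100_000, seed=0):
    """Estimate first-order Sobol indices S_i by the pick-freeze method.

    Two independent U(0,1) sample matrices A and B are drawn; AB_i is
    A with column i replaced by B's column i.  Then
        S_i ~= mean(f(B) * (f(AB_i) - f(A))) / Var(f(A)).
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = f(A), f(B)
    var = fA.var()
    s = np.empty(n_params)
    for i in range(n_params):
        AB = A.copy()
        AB[:, i] = B[:, i]
        s[i] = np.mean(fB * (f(AB) - fA)) / var
    return s

# Toy model y = 4*x1 + 2*x2 + x3: variance shares 16:4:1, so the exact
# first-order indices are (16/21, 4/21, 1/21) ~ (0.762, 0.190, 0.048).
toy = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
S = sobol_first_order(toy, 3)
```

The same ranking-then-factor-fixing workflow follows directly: parameters whose indices stay near zero (the analogue of E and h above) can be frozen at nominal values.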
BWR Mark III containment analyses using a GOTHIC 8.0 3D model
International Nuclear Information System (INIS)
Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat
2015-01-01
Highlights: • The development of a 3D GOTHIC code model of a BWR Mark-III containment is described. • Suppression pool modelling is based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients were simulated to verify the behaviour of the 3D GOTHIC model. • A comparison between the 3D GOTHIC model and the MAAP 4.07 model is conducted. • Pre-severe-accident conditions are accurately reproduced with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for model evaluation. In order to find the proper settings for the model of the suppression pool, two experiments conducted at the POOLEX experimental installation were simulated, yielding proper model behaviour for different suppression pool phenomena. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient were performed. The main results of these simulations were qualitatively compared with results obtained from the MAAP 4.07 Cofrentes NPP model, which the plant uses for simulating severe accidents. From this comparison, the model was verified in terms of pressurization, asymmetric discharges and high-pressure release. The complete model has been shown to adequately simulate the thermal-hydraulic phenomena that occur in the containment during accident sequences
Computational model for supporting SHM systems design: Damage identification via numerical analyses
Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei
2017-02-01
This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems that use vibration-based methods. A new shell finite element model was proposed and implemented via a User ELement (UEL) subroutine in the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. Damaged cantilever beams with two piezoelectric sensors in different positions were then investigated using experimental analyses and the proposed computational model. A maximum difference of 7.45% in the magnitude of the FRFs between numerical and experimental analyses was found near the resonance regions. For damage identification, different levels of damage severity were evaluated with seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, a discussion is presented on the potentials and limitations of the proposed computational model for supporting SHM system design.
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
Using Weather Data and Climate Model Output in Economic Analyses of Climate Change
Energy Technology Data Exchange (ETDEWEB)
Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.
2013-06-28
Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.
El Habachi, Aimad; Moissenet, Florent; Duprey, Sonia; Cheze, Laurence; Dumas, Raphaël
2015-07-01
Sensitivity analysis is a typical part of the evaluation of biomechanical models. For lower limb multi-body models, sensitivity analyses have mainly been performed on musculoskeletal parameters, more rarely on the parameters of the joint models. This study deals with a global sensitivity analysis performed on a lower limb multi-body model that introduces anatomical constraints at the ankle, tibiofemoral, and patellofemoral joints. The aim of the study was to take into account the uncertainty of parameters (e.g. 2.5 cm on the positions of the skin markers embedded in the segments, 5° on the orientation of hinge axes, 2.5 mm on the origins and insertions of ligaments) using statistical distributions and to propagate it through a multi-body optimisation method used for the computation of joint kinematics from skin markers during gait. This allows us to identify the parameters most influential on the minimum of the objective function of the multi-body optimisation (i.e. the sum of the squared distances between measured and model-determined skin marker positions) and on the joint angles and displacements. To quantify this influence, a Fourier-based algorithm for global sensitivity analysis coupled with Latin hypercube sampling is used. This sensitivity analysis shows that some parameters of the motor constraints (that is, the distances between measured and model-determined skin marker positions) and of the kinematic constraints strongly influence the joint kinematics obtained from the lower limb multi-body model, for example the positions of the skin markers embedded in the shank and pelvis, the parameters of the patellofemoral hinge axis, and the parameters of the ankle and tibiofemoral ligaments. The resulting standard deviations on the joint angles and displacements reach 36° and 12 mm. Therefore, personalisation, customisation or identification of these most sensitive parameters of lower limb multi-body models may be considered essential.
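The Latin hypercube sampling used to draw the uncertain parameters guarantees exactly one sample per equal-probability stratum in every dimension. A minimal numpy version (illustrative, not the authors' code):

```python
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, seed=0) -> np.ndarray:
    """Latin hypercube sample on [0, 1)^n_dims.

    [0, 1) is cut into n_samples equal strata; each dimension receives
    one point per stratum, and the strata are shuffled independently per
    dimension so the points do not line up on the diagonal.
    """
    rng = np.random.default_rng(seed)
    # One uniform draw inside each stratum, per dimension.
    u = rng.random((n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Independent shuffle of the strata in every dimension.
    for d in range(n_dims):
        strata[:, d] = rng.permutation(strata[:, d])
    return strata

X = latin_hypercube(100, 3)
# Scaling one column to, say, a +/-2.5 cm marker-position uncertainty
# (an example taken from the parameter ranges quoted above):
marker_offsets_m = -0.025 + 0.05 * X[:, 0]
```

Compared with plain Monte Carlo, this stratification covers each parameter's range evenly with far fewer model evaluations, which matters when every sample requires a full multi-body optimisation of a gait trial.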
Directory of Open Access Journals (Sweden)
Nataša Štambuk-Cvitanović
1999-12-01
Full Text Available Assuming the necessity of analysing, diagnosing and preserving the valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling becomes an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and taken as part of a general combined approach, in particular non-destructive techniques applied to the structure or to a model of it. For structures or details that may require more complex analyses, three numerical models based on the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and is presented on some characteristic examples.
Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.
2012-12-01
example, regional stakeholders have identified a need to understand the cost and effectiveness of potential regional policies to upgrade building energy codes and equipment standards to reduce carbon emissions and save energy. This presentation discusses the application and results of fractional factorial analyses and related methods that we have used to determine the sensitivity of key benefits and costs of regional building codes and equipment efficiency standards at the state level, while also reducing the dimensionality of the downstream uncertainty characterization and propagation problem. The presentation analyzes alternative policies for regional building standards in the context of uncertain population and economic growth, carbon scenarios that represent both future atmospheric carbon loading and national emissions policies, and regional climate changes projected by a range of climate models.
Sensitivity of Population Size Estimation for Violating Parametric Assumptions in Log-linear Models
Directory of Open Access Journals (Sweden)
Gerritse Susanna C.
2015-09-01
Full Text Available An important quality aspect of censuses is the degree of coverage of the population. When administrative registers are available, undercoverage can be estimated via capture-recapture methodology. The standard approach uses the log-linear model, which relies on the assumption that being in the first register is independent of being in the second register. In models using covariates, this assumption is relaxed to independence conditional on the covariates. In this article we describe, in a general setting, how sensitivity analyses can be carried out to assess the robustness of the population size estimate. We make use of log-linear Poisson regression with an offset to simulate departure from the model. This approach can be extended to the case where covariates are observed in both registers, and to a model with covariates observed in only one register. The robustness of the population size estimate is a function of implied coverage: when implied coverage is low, robustness is low. We conclude that it is important for researchers to investigate and report the estimated robustness of their population size estimate for quality reasons. Extensions are made to log-linear modeling in the case of more than two registers and to the multiplier method
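For two registers without covariates, the idea reduces to the classical dual-system (Petersen) estimator, with the offset acting as an assumed odds ratio between the registers. A small sketch (the odds-ratio parameterization is our reading of the offset device, not a formula quoted from the article):

```python
def dual_system_estimate(m11: int, m10: int, m01: int, phi: float = 1.0) -> float:
    """Population size from a 2-register capture-recapture table.

    m11: counted in both registers; m10 / m01: counted in only the first
    or only the second register.  Under independence (phi = 1) the missed
    cell is estimated as m10 * m01 / m11, giving the Petersen estimate.
    phi is an assumed odds ratio used as an offset to simulate departure
    from independence: phi > 1 (positive dependence between registers)
    inflates the estimate of the missed cell, and hence of N.
    """
    m00_hat = phi * m10 * m01 / m11
    return m11 + m10 + m01 + m00_hat

# Illustrative counts, not data from the article:
n_independent = dual_system_estimate(m11=50, m10=30, m01=20)            # 112.0
n_dependent = dual_system_estimate(m11=50, m10=30, m01=20, phi=1.5)     # 118.0
```

Sweeping phi over a plausible range and reporting the spread of the resulting estimates is exactly the kind of robustness statement the article argues researchers should publish alongside a point estimate.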
DEFF Research Database (Denmark)
Abadal, G.; Davis, Zachary James; Helbo, Bjarne
2001-01-01
A simple linear electromechanical model for an electrostatically driven resonating cantilever is derived. The model has been developed in order to determine dynamic quantities such as the capacitive current flowing through the cantilever-driver system at the resonance frequency, and it allows us...... to calculate static magnitudes such as position and voltage of collapse or the voltage versus deflection characteristic. The model is used to demonstrate the theoretical sensitivity on the attogram scale of a mass sensor based on a nanometre-scale cantilever, and to analyse the effect of an extra feedback loop...
Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes
Directory of Open Access Journals (Sweden)
Stoyan Stoyanov
2009-08-01
Full Text Available A review of the problems in the modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for finding a global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, as well as stochastic and mixed strategies for optimization with only partial knowledge of the kinetic, technical and economic parameters, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning the optimal control of biotechnological systems are also discussed.
Sensitivity-based research prioritization through stochastic characterization modeling
DEFF Research Database (Denmark)
Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter
2017-01-01
to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...
International Nuclear Information System (INIS)
Roy, L.G.; Roy, R.; Desrochers, G.E.; Vaillancourt, C.; Chartier, I.
2008-01-01
There are uncertainties associated with the use of hydrological models. This study analyses one source of uncertainty in hydrological modeling, particularly in the context of climate change studies on water resources. An additional aim is to compare the ability of several meteorological data sources, used in conjunction with a hydrological model, to reproduce the hydrologic regime of a watershed. A case study of a watershed in south-western Quebec, Canada, using five different sources of meteorological data as input to an offline hydrological model, is presented in this paper. The data came from weather stations, NCEP reanalysis, ERA40 reanalysis, and two Canadian Regional Climate Model (CRCM) runs driven by the NCEP and ERA40 reanalyses, which provide atmospheric boundary conditions to this limited-area climate model. To investigate the sensitivity of simulated streamflow to the different sources of meteorological data, we first calibrated the hydrological model with each of the meteorological data sets over the 1961-1980 period. The five resulting sets of hydrological model parameters were then used to simulate streamflow over the 1981-2000 validation period with the five meteorological data sets as inputs. The 25 simulated streamflow series were compared to the observed streamflow of the watershed. The five meteorological data sets do not have the same ability, when used with the hydrological model, to reproduce streamflow. Our results also show that the hydrological model parameters used can have an important influence on results such as the water balance, but this is linked to differences in the characteristics of the meteorological data used. For climate change impact assessments on water resources, we found that there is an uncertainty associated with the meteorological data used to calibrate the model. For expected changes in mean annual flows of the Chateauguay River, our results vary from a small
Wöhr, Markus
2014-06-01
Autism spectrum disorders (ASD) are a class of neurodevelopmental disorders characterized by persistent deficits in social behavior and communication across multiple contexts, together with repetitive patterns of behavior, interests, or activities. The high concordance rate between monozygotic twins supports a strong genetic component. Among the most promising candidate genes for ASD is the SHANK gene family, including SHANK1, SHANK2 (ProSAP1), and SHANK3 (ProSAP2). SHANK genes are therefore important candidates for modeling ASD in mice and various genetic models were generated within the last few years. As the diagnostic criteria for ASD are purely behaviorally defined, the validity of mouse models for ASD strongly depends on their behavioral phenotype. Behavioral phenotyping is therefore a key component of the current translational approach and requires sensitive behavioral test paradigms with high relevance to each diagnostic symptom category. While behavioral phenotyping assays for social deficits and repetitive patterns of behavior, interests, or activities are well-established, the development of sensitive behavioral test paradigms to assess communication deficits in mice is a daunting challenge. Measuring ultrasonic vocalizations (USV) appears to be a promising strategy. In the first part of the review, an overview on the different types of mouse USV and their communicative functions will be provided. The second part is devoted to studies on the emission of USV in Shank mouse models for ASD. Evidence for communication deficits was obtained in Shank1, Shank2, and Shank3 genetic mouse models for ASD, often paralleled by behavioral phenotypes relevant to social deficits seen in ASD. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mathematical modeling of a fluidized bed rice husk gasifier: Part 2 - Model sensitivity
Energy Technology Data Exchange (ETDEWEB)
Mansaray, K.G.; Ghaly, A.E.; Al-Taweel, A.M.; Hamdullahpur, F.; Ugursal, V.I.
2000-02-01
The performance of two thermodynamic models (one-compartment and two-compartment models), developed for fluidized bed gasification of rice husk, was analyzed and compared in terms of their predictive capabilities of the product gas composition. The two-compartment model was the most adequate to simulate the fluidized bed gasification of rice husk, since the complex hydrodynamics present in the fluidized bed gasifier were taken into account. Therefore, the two-compartment model was tested under a wide range of parameters, including bed height, fluidization velocity, equivalence ratio, oxygen concentration in the fluidizing gas, and rice husk moisture content. The model sensitivity analysis showed that changes in bed height had a significant effect on the reactor temperatures, but only a small effect on the gas composition, higher heating value, and overall carbon conversion. The fluidization velocity, equivalence ratio, oxygen concentration in the fluidizing gas, and moisture content in rice husk had dramatic effects on the gasifier performance. However, the model was more sensitive to variations in the equivalence ratio and oxygen concentration in the fluidizing gas. (Author)
International Nuclear Information System (INIS)
Hasselmann, K.; Hasselmann, S.; Giering, R.; Ocana, V.; Storch, H. von
1997-01-01
A structurally highly simplified, globally integrated coupled climate-economic costs model, SIAM (Structural Integrated Assessment Model), is used to compute optimal paths of global CO2 emissions that minimize the net sum of climate damage and mitigation costs, and to study the sensitivity of the computed optimal emission paths. The climate module is represented by a linearized impulse-response model calibrated against a coupled ocean-atmosphere general circulation climate model and a three-dimensional global carbon-cycle model. The cost terms are represented by expressions whose form makes the underlying input assumptions explicit. These include the discount rates for mitigation and damage costs, the inertia of the socio-economic system, and the dependence of climate damages on the change in temperature and on the rate of change of temperature. Differing assumptions regarding these parameters are believed to cause the marked divergences among existing cost-benefit analyses. The long memory of the climate system implies that very long time horizons of several hundred years need to be considered to optimize CO2 emissions on time scales relevant for a policy of sustainable development. Cost-benefit analyses over shorter time scales of a century or two can lead to dangerous underestimates of the long-term climate impact of increasing greenhouse-gas emissions. To avert a major long-term global warming, CO2 emissions ultimately need to be reduced to very low levels. This may be done slowly, but should not be interpreted as providing a time cushion for inaction: the transition becomes more costly the longer the necessary mitigation policies are delayed. However, the long time horizon provides adequate flexibility for later adjustments. Short-term energy conservation alone is insufficient and can be viewed only as a useful measure in support of the necessary long-term transition to carbon-free energy technologies. 46 refs., 9 figs., 2 tabs
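The linearized impulse-response idea amounts to convolving the emission path with a calibrated response kernel. A toy discrete sketch with an invented single-exponential kernel (the actual SIAM kernel is multi-exponential and calibrated against a GCM, which we do not reproduce):

```python
import math

def temperature_response(emissions, tau=40.0, gain=0.01):
    """Linear impulse-response climate model: T(t) = sum_s G(t - s) * E(s).

    G(t) = gain * exp(-t / tau) is an assumed single-exponential response
    kernel (temperature increment per unit emission, decaying over tau
    years); emissions is a list of annual emission values.
    """
    n = len(emissions)
    return [sum(gain * math.exp(-(t - s) / tau) * emissions[s]
                for s in range(t + 1))
            for t in range(n)]

path = [10.0] * 100                              # constant emissions, 100 years
T = temperature_response(path)
T_double = temperature_response([2 * e for e in path])
```

Two properties of such a model drive the abstract's conclusions: it is strictly linear in the emission path, and its memory (tau) makes the response keep climbing long after emissions begin, which is why short optimization horizons understate the eventual warming.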
Zhu, Rui; Zander, Thomas; Dreischarf, Marcel; Duda, Georg N; Rohlmann, Antonius; Schmidt, Hendrik
2013-04-26
Mostly simplified loads were used in biomechanical finite element (FE) studies of the spine because of a lack of data on muscular physiological loading. Inverse static (IS) models allow the prediction of muscle forces for predefined postures. A combination of both mechanical approaches - FE and IS - appears to allow a more realistic modeling. However, it is unknown what deviations are to be expected when muscle forces calculated for models with rigid vertebrae and fixed centers of rotation, as generally found in IS models, are applied to a FE model with elastic vertebrae and discs. The aim of this study was to determine the effects of these disagreements. Muscle forces were estimated for 20° flexion and 10° extension in an IS model and transferred to a FE model. The effects of the elasticity of bony structures (rigid vs. elastic) and the definition of the center of rotation (fixed vs. non-fixed) were quantified using the deviation of actual intervertebral rotation (IVR) of the FE model and the targeted IVR from the IS model. For extension, the elasticity of the vertebrae had only a minor effect on IVRs, whereas a non-fixed center of rotation increased the IVR deviation on average by 0.5° per segment. For flexion, a combination of the two parameters increased IVR deviation on average by 1° per segment. When loading FE models with predicted muscle forces from IS analyses, the main limitations in the IS model - rigidity of the segments and the fixed centers of rotation - must be considered. Copyright © 2013 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Y. Tang
2007-01-01
Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.
Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda
2015-01-01
The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.
Energy Technology Data Exchange (ETDEWEB)
Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)
2016-03-11
Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.
A sensitivity driven meta-model optimisation tool for hydrological models
Oppel, Henning; Schumann, Andreas
2017-04-01
The calibration of rainfall-runoff models containing a high number of parameters can be done readily by the use of different calibration methods and algorithms. Monte Carlo methods, gradient-based search algorithms and others are well known and established in hydrological sciences. Thus, the calibration of a model for a desired application is not a challenging task, but retaining regional comparability and process integrity is, due to the equifinality problem, a prevailing topic. This set of issues is mainly a result of the overdetermination given the high number of parameters in rainfall-runoff models, where different parameters affect the same facet of model performance (i.e. runoff volume, variance and timing). In this study a calibration strategy is presented which considers model sensitivity as well as parameter interaction and different criteria of model performance. First, a range of valid values for each model parameter was defined and the individual effect on model performance within the defined parameter range was evaluated. Using the knowledge gained, a meta-model, lumping different parameters affecting the same facet of model performance, was established. Hereafter, the parsimonious meta-model, where each parameter is assigned to a nearly disjoint facet of model performance, is optimized. By retransformation of the lumped parameters to the original model, a parametrisation for the original model is obtained. An application of this routine to a set of watersheds in the eastern part of Germany displays the benefits of the routine. Results of the meta-parametrised model are compared to parametrisations obtained from common calibration routines in a validation study and a process-oriented numerical experiment.
A Probabilistic Model for Sequence Alignment with Context-Sensitive Indels
Hickey, Glenn; Blanchette, Mathieu
Probabilistic approaches for sequence alignment are usually based on pair Hidden Markov Models (HMMs) or Stochastic Context Free Grammars (SCFGs). Recent studies have shown a significant correlation between the content of short indels and their flanking regions, which by definition cannot be modelled by the above two approaches. In this work, we present a context-sensitive indel model based on a pair Tree-Adjoining Grammar (TAG), along with accompanying algorithms for efficient alignment and parameter estimation. The increased precision and statistical power of this model is shown on simulated and real genomic data. As the cost of sequencing plummets, the usefulness of comparative analysis is becoming limited by alignment accuracy rather than data availability. Our results will therefore have an impact on any type of downstream comparative genomics analyses that rely on alignments. Fine-grained studies of small functional regions or disease markers, for example, could be significantly improved by our method. The implementation is available at http://www.mcb.mcgill.ca/~blanchem/software.html
Global sensitivity analysis of thermo-mechanical models in numerical weld modelling
International Nuclear Information System (INIS)
Petelet, M.
2007-10-01
The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which temperature range. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to the only important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
Directory of Open Access Journals (Sweden)
U.N. Band
Full Text Available Abstract A transition element is developed for the local-global analysis of laminated composite beams. It bridges one part of the domain modelled with a higher order theory and another part with a 2D mixed layerwise theory (LWT) used at the critical zone of the domain. The use of the developed transition element makes the analysis of interlaminar stresses possible with significant accuracy. The mixed 2D model incorporates the transverse normal and shear stresses as nodal degrees of freedom (DOF), which inherently ensures continuity of these stresses. Non-critical zones are modelled with a higher order equivalent single layer (ESL) theory, leading to a global mesh with multiple models applied simultaneously. Use of the higher order ESL in non-critical zones reduces the total number of elements required to map the domain. A substantial reduction in DOF as compared to a complete 2D mixed model is obvious. This computationally economical multiple-modelling scheme using the transition element is applied to static and free vibration analyses of laminated composite beams. Results obtained are in good agreement with benchmarks available in the literature.
A chip-level modeling approach for rail span collapse and survivability analyses
International Nuclear Information System (INIS)
Marvis, D.G.; Alexander, D.R.; Dinger, G.L.
1989-01-01
A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout.
Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness
International Nuclear Information System (INIS)
Dawson, P.; Paton, A.A.; Fleischer, C.C.
1990-01-01
This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)
Sensitivity analysis of the evaporation module of the E-DiGOR model
AYDIN, Mehmet; KEÇECİOĞLU, Suzan Filiz
2010-01-01
Sensitivity analysis of the soil-water-evaporation module of the E-DiGOR (Evaporation and Drainage investigations at Ground of Ordinary Rainfed-areas) model is presented. The model outputs were generated using measured climatic data and soil properties. The first-order sensitivity formulas were derived to compute relative sensitivity coefficients. A change in the net solar radiation significantly affected potential evaporation from bare soils estimated by the Penman-Monteith equation. The se...
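First-order relative sensitivity coefficients of the kind derived above are dimensionless ratios of the form S_i = (∂Y/∂X_i)(X_i/Y). A minimal finite-difference sketch follows; the evaporation-like response function and its coefficients are purely illustrative stand-ins, not the E-DiGOR or Penman-Monteith formulas:

```python
import numpy as np

def relative_sensitivity(f, x, i, h=1e-6):
    """Relative (dimensionless) sensitivity S_i = (dY/dX_i) * (X_i / Y),
    approximated by a central finite difference with relative step h."""
    x_up, x_dn = x.copy(), x.copy()
    x_up[i] += h * x[i]
    x_dn[i] -= h * x[i]
    dYdX = (f(x_up) - f(x_dn)) / (2 * h * x[i])
    return dYdX * x[i] / f(x)

# Hypothetical evaporation-like response: Y = a * Rn**1.2 * (1 + b*u)
f = lambda p: 0.4 * p[0] ** 1.2 * (1 + 0.05 * p[1])
x = np.array([10.0, 2.0])              # [net radiation, wind speed] -- invented values
S_Rn = relative_sensitivity(f, x, 0)   # power-law input: coefficient equals exponent 1.2
S_u = relative_sensitivity(f, x, 1)    # much smaller coefficient for wind speed
```

A coefficient near 1.2 for net radiation versus roughly 0.09 for wind speed mirrors the abstract's finding that net solar radiation dominates the estimated potential evaporation.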
Prior sensitivity analysis in default Bayesian structural equation modeling
van Erp, S.J.; Mulder, J.; Oberski, Daniel L.
2018-01-01
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models while solving some of the issues often encountered in classical maximum likelihood (ML) estimation, such as nonconvergence and inadmissible solutions. An important
Energy Technology Data Exchange (ETDEWEB)
Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA
2017-04-01
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
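The model-choice component of such a process sensitivity index can be illustrated with a toy variance decomposition. In the sketch below, the two "processes", their alternative models, and all numbers are invented for demonstration; the full index in the study additionally conditions on each process model's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Process A ("recharge"): two alternative models with equal prior weight,
# each with its own additive random parameter.
mA = rng.integers(0, 2, N)                              # model choice for process A
rech = np.where(mA == 0, 100.0, 140.0) + 10.0 * rng.normal(size=N)

# Process B ("geology"): two parameterizations of a conductivity factor.
mB = rng.integers(0, 2, N)
geo = np.where(mB == 0, 1.0, 1.2) * rng.lognormal(0.0, 0.1, N)

out = rech * geo                                        # toy coupled model output

def process_sensitivity(out, choice):
    """Variance of the conditional mean over a process's model choices,
    normalized by total output variance (model-choice part only)."""
    overall, total = out.mean(), out.var()
    var_cm = sum((choice == k).mean() * (out[choice == k].mean() - overall) ** 2
                 for k in np.unique(choice))
    return var_cm / total

PS_A = process_sensitivity(out, mA)
PS_B = process_sensitivity(out, mB)
```

With these invented numbers the "recharge" process explains the larger share of output variance, which is the kind of relative process ranking the index is built to deliver.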
Energy Technology Data Exchange (ETDEWEB)
Marchand, E
2007-12-15
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
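The deterministic, SVD-based local sensitivity idea described above can be sketched as follows. The two-output "flux" function and parameter values are hypothetical stand-ins, not ANDRA's flow and transport models:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Central finite-difference Jacobian of f: R^n -> R^m."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h * max(abs(x[i]), 1.0)
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
    return J

# Hypothetical model: fluxes through 2 outlet channels as a function of
# 3 subsoil parameters (illustrative closed form only).
f = lambda p: np.array([p[0] * np.exp(-p[2]), 0.1 * p[1] + p[0]])
x0 = np.array([2.0, 5.0, 1.0])

U, s, Vt = np.linalg.svd(jacobian(f, x0))
# Vt[0] is the input-parameter combination the fluxes are most sensitive to;
# the ratio s[0] / s[-1] indicates how ill-conditioned the inversion is.
```

The singular values rank directions in parameter space by their local influence on the outputs, which is the local (but cheap) information the abstract contrasts with Monte Carlo methods.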
Sensitivity of fire behavior simulations to fuel model variations
Lucy A. Salazar
1985-01-01
Stylized fuel models, or numerical descriptions of fuel arrays, are used as inputs to fire behavior simulation models. These fuel models are often chosen on the basis of generalized fuel descriptions, which are related to field observations. Site-specific observations of fuels or fire behavior in the field are not readily available or necessary for most fire management...
Directory of Open Access Journals (Sweden)
Reinhard Schinke
2016-11-01
Full Text Available Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and to integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks by geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction, including a selection and combination of appropriate FReT as a basis for the derivation of synthetic depth-damage functions. Depending on the real exposition and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide some useful information for advising individuals at risk, supporting the selection and implementation of FReT.
DEFF Research Database (Denmark)
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis
Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.
2014-12-01
Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing level of complexity. We tested four sensitivity analysis methods, Regional Sensitivity Analysis, Method of Morris, a density-based (PAWN) and a variance-based (Sobol) method. The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistics.
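The bootstrap assessment of sensitivity-index robustness described above can be sketched generically. In the snippet below, a crude R²-style statistic on synthetic (input, output) pairs stands in for a real sensitivity index; the data, sample sizes, and threshold interpretation are assumptions:

```python
import numpy as np

def bootstrap_ci(samples, statistic, n_boot=1000, alpha=0.05,
                 rng=np.random.default_rng(2)):
    """Percentile bootstrap confidence interval for a sensitivity statistic."""
    n = len(samples)
    stats = np.array([statistic(samples[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Synthetic (parameter, output) pairs from a noisy linear response.
rng = np.random.default_rng(0)
x = rng.random(2000)
y = 3.0 * x + rng.normal(0.0, 0.5, 2000)
pairs = np.column_stack([x, y])

stat = lambda s: np.corrcoef(s[:, 0], s[:, 1])[0, 1] ** 2   # crude R^2 "index"
lo, hi = bootstrap_ci(pairs, stat)
# A wide (lo, hi) interval signals the sample is too small for stable rankings.
```

Tracking how the interval width shrinks as the sample grows gives exactly the convergence diagnostic the abstract advocates.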
A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories
Duvvuri, Sri Devi; Gruca, Thomas S.
2010-01-01
Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…
Morgan, Jeff
2011-01-01
Cultural sensitivity theory is the study of how individuals relate to cultural difference. Using literature to help students prepare for study abroad, instructors could analyze character and trace behavior through a model of cultural sensitivity. Milton J. Bennett has developed such an instrument, The Developmental Model of Intercultural…
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.
2007-05-01
In this paper, we discuss a joint approach to calibration and uncertainty estimation for hydrologic systems that combines a top-down, data-based mechanistic (DBM) modelling methodology and a bottom-up, reductionist modelling methodology. The combined approach is applied to the modelling of the River Hodder catchment in North-West England. The top-down DBM model provides a well identified, statistically sound yet physically meaningful description of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. These characteristics are defined inductively from the data without prior assumptions about the model structure, other than it is within the generic class of nonlinear differential-delay equations. The bottom-up modelling is developed using the TOPMODEL, whose structure is assumed a priori and is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters. The subsequent exercises in calibration and validation, performed with Generalized Likelihood Uncertainty Estimation (GLUE), are carried out in the light of the GSA and DBM analyses. This allows for the pre-calibration of the priors used for GLUE, in order to eliminate dynamical features of the TOPMODEL that have little effect on the model output and would be rejected at the structure identification phase of the DBM modelling analysis. In this way, the elements of meaningful subjectivity in the GLUE approach, which allow the modeler to interact in the modelling process by constraining the model to have a specific form prior to calibration, are combined with other more objective, data-based benchmarks for the final uncertainty estimation. GSA plays a major role in building a bridge between the hypothetico-deductive (bottom-up) and inductive (top-down) approaches and helps to improve the
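The GLUE step mentioned above (behavioural sampling under a subjective likelihood threshold) can be sketched on a toy model. The exponential linear-store model, the NSE threshold of 0.9, and the synthetic data below are assumptions for illustration, not the TOPMODEL setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" recession data from a linear store with k = 0.5.
t = np.linspace(0.0, 10.0, 50)
obs = np.exp(-0.5 * t) + rng.normal(0.0, 0.01, t.size)

# Monte Carlo sampling of the parameter prior.
k = rng.uniform(0.1, 1.0, 5000)
sims = np.exp(-np.outer(k, t))                    # toy linear-store model runs

# Nash-Sutcliffe efficiency as the (informal) GLUE likelihood measure.
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
behavioural = nse > 0.9                           # subjective behavioural threshold
w = nse[behavioural] / nse[behavioural].sum()     # likelihood weights

k_hat = np.average(k[behavioural], weights=w)     # likelihood-weighted estimate
lo, hi = np.percentile(k[behavioural], [5, 95])   # crude parameter uncertainty bounds
```

Pre-calibrating the prior range of `k`, as the paper does via GSA and DBM analysis, would simply narrow the uniform sampling interval before this step.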
Energy Technology Data Exchange (ETDEWEB)
Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory
2015-03-01
The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is from the battery to maintain the logic circuits to control the opening and/or closure of valves in the RCIC systems in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally considered in almost all the existing station black-out accidents (SBO) analyses that loss of the DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike the traditional SBO simulations where mass flow rates are typically given in the input file through time dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
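The propagation of bounded DFT energy errors to a TOF can be illustrated with plain Monte Carlo on a two-step Arrhenius toy model. The sparse-grid quadrature of the paper is replaced by sampling here, and the barriers, prefactor, temperature, and ±0.2 eV bound are invented; a maximum-entropy error model with only bounds reduces to a uniform distribution:

```python
import numpy as np

kB_T = 0.0257                                  # eV, roughly room temperature
rng = np.random.default_rng(3)

# Hypothetical two-step catalytic cycle: TOF limited by the slower activated step.
E0 = np.array([0.7, 0.9])                      # nominal barriers (eV), illustrative
err = rng.uniform(-0.2, 0.2, (100_000, 2))     # bounded (max-entropy) DFT errors
k = 1e13 * np.exp(-(E0 + err) / kB_T)          # Arrhenius rates, prefactor 1e13 1/s
tof = k.min(axis=1)                            # rate-determining-step approximation

spread = np.log10(tof.max() / tof.min())       # orders of magnitude of uncertainty
```

Even this crude sketch yields a TOF spread of several orders of magnitude, consistent with the paper's point that DFT errors dominate the absolute uncertainty while trends can remain interpretable.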
Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen
2014-01-01
Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.
Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling
DEFF Research Database (Denmark)
Thomsen, Bente Dahl
2006-01-01
Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the inspiration to deliver the form finally presents itself. This was the impetus for our plan to devise a tool for analysing works and for the practical development of forms. The tool is a set of cards with suggestions for investigations that may assist the modeller in identifying the weaknesses of the form, or in convincing him-/herself of its strengths. The cards also contain aesthetic reflections that may be of inspiration in the development of the form.
Directory of Open Access Journals (Sweden)
Yuan Liu
2016-10-01
Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems framework. To depict children's growth patterns, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status, would have a higher probability of transitioning into the higher ability group.
Estimating required information size by quantifying diversity in random-effects model meta-analyses
DEFF Research Database (Denmark)
Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper
2009-01-01
BACKGROUND: There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis ... between-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared ...
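Schematically, the diversity (D2) and inconsistency (I2) factors contrasted above differ in the sampling-error estimate entering the denominator. The exact definitions are in the paper, so the sketch below is indicative only, assuming D2 is the share that the between-trial variance tau2 constitutes of tau2 plus an information-size-based sampling error estimate:

```python
def diversity(tau2, sigma2_d):
    """Schematic D^2: share of total variance due to between-trial variability,
    with a sampling-error estimate reflecting the required information size."""
    return tau2 / (tau2 + sigma2_d)

def inconsistency(Q, df):
    """Standard I^2 = (Q - df) / Q from Cochran's Q, floored at zero."""
    return max(0.0, (Q - df) / Q)
```

Because D2 and I2 rest on different sampling-error estimates, I2 can come out lower and hence, as the abstract notes, can underestimate the required information size.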
Benchmarking sensitivity of biophysical processes to leaf area changes in land surface models
Forzieri, Giovanni; Duveiller, Gregory; Georgievski, Goran; Li, Wei; Robertson, Eddy; Kautz, Markus; Lawrence, Peter; Ciais, Philippe; Pongratz, Julia; Sitch, Stephen; Wiltshire, Andy; Arneth, Almut; Cescatti, Alessandro
2017-04-01
Land surface models (LSMs) are widely applied as supporting tools for policy-relevant assessment of climate change and its impact on terrestrial ecosystems, yet knowledge of their performance skills in representing the sensitivity of biophysical processes to changes in vegetation density is still limited. This is particularly relevant in light of the substantial impacts on regional climate associated with the changes in leaf area index (LAI) following the observed global greening. Benchmarking LSMs on the sensitivity of the simulated processes to vegetation density is essential to reduce their uncertainty and improve the representation of these effects. Here we present a novel benchmark system to assess model capacity in reproducing land surface-atmosphere energy exchanges modulated by vegetation density. Through a collaborative effort of different modeling groups, a consistent set of land surface energy fluxes and LAI dynamics has been generated from multiple LSMs, including JSBACH, JULES, ORCHIDEE, CLM4.5 and LPJ-GUESS. Relationships of interannual variations of modeled surface fluxes to LAI changes have been analyzed at global scale across different climatological gradients and compared with satellite-based products. A set of scoring metrics has been used to assess the overall model performances, and a detailed analysis in the climate space has been provided to diagnose possible model errors associated with background conditions. Results have enabled us to identify model-specific strengths and deficiencies. An overall best performing model does not emerge from the analyses. However, the comparison with other models that work better under certain metrics and conditions indicates that improvements are expected to be potentially achievable. A general amplification of the biophysical processes mediated by vegetation is found across the different land surface schemes. Grasslands are characterized by an underestimated year-to-year variability of LAI in cold climates
Biosphere Modeling and Analyses in Support of Total System Performance Assessment
International Nuclear Information System (INIS)
Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.
2002-01-01
The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities
Biosphere Modeling and Analyses in Support of Total System Performance Assessment
International Nuclear Information System (INIS)
Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt
2001-01-01
The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.
An approach to measure parameter sensitivity in watershed hydrologic modeling
U.S. Environmental Protection Agency — Abstract Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...
Combining Two Methods of Global Sensitivity Analysis to Investigate MRSA Nasal Carriage Model.
Jarrett, Angela M; Cogan, N G; Hussaini, M Y
2017-10-01
We apply two different sensitivity techniques to a model of bacterial colonization of the anterior nares to better understand the dynamics of Staphylococcus aureus nasal carriage. Specifically, we use partial rank correlation coefficients to investigate sensitivity as a function of time and identify a reduced model with fewer than half of the parameters of the full model. The reduced model is used for the calculation of Sobol' indices to identify interacting parameters by their additional effects indices. Additionally, we found that the model captures an interesting characteristic of the biological phenomenon related to the initial population size of the infection; only two parameters had any significant additional effects, and these parameters have biological evidence suggesting they are connected but not yet completely understood. Sensitivity is often applied to elucidate model robustness, but we show that combining sensitivity measures can lead to synergistic insight into both model and biological structures.
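The partial rank correlation coefficients used above can be computed directly: rank-transform every variable, regress each parameter and the output on the remaining parameters, and correlate the residuals. A minimal sketch on a toy three-parameter model (illustrative only; the model, parameter count and data are hypothetical, not the authors' colonization model):

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation of each column of X with y: rank-transform,
    then correlate the residuals of X[:, i] and y after linear regression
    on the remaining (rank-transformed) columns."""
    def ranks(a):
        return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

    Xr, yr = ranks(X), ranks(y)
    n, k = Xr.shape
    out = np.empty(k)
    for i in range(k):
        # design matrix: intercept + all parameters except the i-th
        Z = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out[i] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
# toy response: parameter 0 strong positive, 1 negative, 2 inert
y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=500)
p = prcc(X, y)
```

A parameter with |PRCC| near zero across all time points is a candidate for removal, which is essentially how the reduced model above is obtained.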
Manzo, Ciro; Bresciani, Mariano; Giardino, Claudia; Braga, Federica; Bassani, Cristiana
2015-04-01
In this work, a variance-based procedure was applied to study the sensitivity of a Case-2 bio-optical model which simulates the water reflectance of three Italian lakes - Garda, Mantua and Trasimeno - with different trophic conditions by analysing the main effect of single WQPs and their interactions. The water reflectance was simulated according to a four-component model [Brando and Dekker 2003] considering the SIOPs typical of each lake and the spectral characteristics of three optical sensors, on board Landsat-8, Sentinel-2 and Sentinel-3, which can potentially be applied to lakes. Lakes Garda, Mantua and Trasimeno were selected as representative of different trophic levels; for these lakes long-term data of in situ measurements on water quality characteristics were also available. The bio-optical analytical model simulated the subsurface irradiance reflectance R(0-, λ) as a function of the absorption and backscattering coefficients (a(λ), bb(λ)), given as a sum of the contributions of water and the water quality parameters. The sensitivity indices of water reflectance for the three water types/trophic conditions were calculated by decomposing the output variance (V) into partial variances which represent the share of V that is explained by the bio-optical model inputs [Saltelli et al., 2010]. The results provide important information on the sensitivity of the new-generation sensors to different trophic statuses, and in particular confirmed that Sentinel-3 water reflectance is sensitive to WQPs in all the trophic conditions investigated.
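The variance decomposition described above can be illustrated with a crude estimator of the first-order index S_i = V(E[Y|X_i])/V(Y): bin each input and take the variance of the per-bin output means. The linear "reflectance" surrogate below is a stand-in, not the actual four-component bio-optical model, and the input names are hypothetical:

```python
import numpy as np

def first_order_index(xi, y, bins=20):
    """Estimate S_i = V(E[Y|X_i]) / V(Y) by binning X_i at its quantiles
    and taking the (count-weighted) variance of the per-bin means of Y."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    cond_var = np.average((means - y.mean()) ** 2, weights=counts)
    return cond_var / y.var()

rng = np.random.default_rng(1)
n = 20000
chl, tsm, cdom = (rng.uniform(0, 1, n) for _ in range(3))
# toy "reflectance" with unequal weights: analytic S = 16/21, 4/21, 1/21
y = 4 * chl + 2 * tsm + 1 * cdom
S = [first_order_index(v, y) for v in (chl, tsm, cdom)]
```

For additive models such as this surrogate the first-order indices sum to one; interactions, when present, show up as the gap between the sum of the S_i and one.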
Directory of Open Access Journals (Sweden)
Olotu Yahaya
2014-07-01
Full Text Available Analyses of runoff-sediment measurement and evaluation using automated and conventional runoff-meters were carried out at the Meteorological and Hydrological Station of Auchi Polytechnic, Auchi, using two runoff plots (ABCDa and EFGHm) of area 2 m² each and depth 0.26 m, driven into the soil to a depth of 0.13 m. Runoff depths and intensities were measured from each of the positioned runoff plots. The automated runoff-meter has a measuring accuracy of ±0.001 l/±0.025 mm, and rainfall depth-intensity was measured using a tipping-bucket rain gauge during the 14-month period of experimentation. Minimum and maximum rainfall depths of 1.2 and 190.3 mm correspond to measured runoff depths (MRo) of 0.0 mm for both measurement approaches, and of 60.4 mm and 48.9 mm respectively. The automated runoff-meter provides precise, accurate and instantaneous results compared with conventional measurement of surface runoff. Runoff measuring accuracy for the automated runoff-meter in plot ABCDa produced R² = 0.99, while R² = 0.96 was obtained for manual evaluation in plot EFGHm. WEPP and SWAT models were used to simulate the obtained hydrological variables from the applied measurement mechanisms. The outputs of the sensitivity simulation analysis indicate that data from automated measuring systems give a better modelling index, and such data could be used for running robust runoff-sediment predictive modelling techniques under different reservoir sedimentation and water management scenarios.
Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.
2015-12-01
The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several km² or larger resolution) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer-scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting to model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high-resolution footprint analyses in order to quantify the representation error for modelled CO2 fluxes at typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexa-copter carrying a highly calibrated closed-path infrared gas analyzer-based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.6.1.
Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia
Directory of Open Access Journals (Sweden)
Peter L. Oliver
2011-01-01
Full Text Available The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.
Directory of Open Access Journals (Sweden)
Karen Block
2017-06-01
Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Energy Technology Data Exchange (ETDEWEB)
Greg J. Shott, Vefa Yucel, Lloyd Desotell
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
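The Latin hypercube sampling underlying such Monte Carlo uncertainty runs can be sketched in a few lines: stratify each parameter's range into n equal-probability intervals, draw one value per interval, and shuffle the strata independently per parameter. The parameter range shown is hypothetical, not a site value:

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n samples in [0, 1)^k with exactly one point per 1/n stratum in
    each dimension; strata are shuffled independently per column."""
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, k))) / n
    for j in range(k):
        rng.shuffle(u[:, j])  # decouple the strata across dimensions
    return u

rng = np.random.default_rng(2)
u = latin_hypercube(1000, 2, rng)
# map the unit samples onto physical ranges, e.g. a hypothetical
# emanation coefficient in [0.1, 0.4]
eman = 0.1 + 0.3 * u[:, 0]
```

Compared with plain random sampling, the stratification guarantees that every decile of every input distribution is represented, which is why far fewer model runs are needed for stable output statistics.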
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
International Nuclear Information System (INIS)
Greg J. Shott, Vefa Yucel, Lloyd Desotell Non-Nstec Authors: G. Pyles and Jon Carilli
2007-01-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
Hindcasting to measure ice sheet model sensitivity to initial states
Directory of Open Access Journals (Sweden)
A. Aschwanden
2013-07-01
Full Text Available Validation is a critical component of model development, yet notoriously challenging in ice sheet modeling. Here we evaluate how an ice sheet system model responds to a given forcing. We show that hindcasting, i.e. forcing a model with known or closely estimated inputs for past events to see how well the output matches observations, is a viable method of assessing model performance. By simulating the recent past of Greenland, and comparing to observations of ice thickness, ice discharge, surface speeds, mass loss and surface elevation changes for validation, we find that the short-term model response is strongly influenced by the initial state. We show that the thermal and dynamical states (i.e. the distribution of internal energy and momentum) can be misrepresented despite a good agreement with some observations, stressing the importance of using multiple observations. In particular we identify rates of change of spatially dense observations as preferred validation metrics. Hindcasting enables a qualitative assessment of model performance relative to observed rates of change. It thereby reduces the number of admissible initial states more rigorously than validation efforts that do not take advantage of observed rates of change.
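The preference for rates of change of spatially dense observations as validation metrics can be illustrated with a toy skill measure: differentiate both the observed and the modeled field in time and compare the resulting rate fields. The numbers below are synthetic, not Greenland data:

```python
import numpy as np

def rate_of_change_skill(obs, model, dt):
    """Compare observed and modeled rates of change (e.g. dH/dt of surface
    elevation on a grid): returns RMSE and correlation of the rate fields.
    obs, model: arrays of shape (n_times, n_points)."""
    d_obs = np.diff(obs, axis=0) / dt
    d_mod = np.diff(model, axis=0) / dt
    rmse = float(np.sqrt(np.mean((d_mod - d_obs) ** 2)))
    corr = float(np.corrcoef(d_obs.ravel(), d_mod.ravel())[0, 1])
    return rmse, corr

t = np.arange(11.0)                              # 11 annual snapshots
obs = np.outer(-0.5 * t, np.array([1.0, 2.0, 3.0, 4.0]))  # thinning, 4 points
model = obs * 1.2                                # model overestimates the rate
rmse, corr = rate_of_change_skill(obs, model, dt=1.0)
```

A model can match a static thickness map well yet fail such a rate metric, which is the sense in which rates of change constrain the initial state more rigorously.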
D Recording for 2d Delivering - the Employment of 3d Models for Studies and Analyses -
Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.
2011-09-01
In the last years, thanks to the advances of surveying sensors and techniques, many heritage sites could be accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often, the produced models are not viewable on a normal computer, and the only solution to easily visualize them is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different lengths and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and material is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to
Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3
International Nuclear Information System (INIS)
Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.
1975-06-01
The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d₀/D₀ ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)
Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4
International Nuclear Information System (INIS)
Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.
1975-06-01
The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d₀/D₀ ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)
Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E
2017-12-01
1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from second until sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The most optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability and such a breeding program would have no negative genetic impact on egg production.
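The Legendre polynomial covariates used in such random regression models are evaluated by rescaling age to [-1, 1] and applying the standard three-term recurrence. A sketch (the additional normalization constants applied by some animal-breeding software are omitted here):

```python
import numpy as np

def legendre_basis(t, tmin, tmax, order):
    """Evaluate Legendre polynomials P_0..P_order at ages t rescaled to
    [-1, 1], the usual covariates in random regression (Legendre) models."""
    x = 2 * (np.asarray(t, dtype=float) - tmin) / (tmax - tmin) - 1
    P = [np.ones_like(x), x]
    for k in range(2, order + 1):
        # recurrence: k P_k = (2k-1) x P_{k-1} - (k-1) P_{k-2}
        P.append(((2 * k - 1) * x * P[k - 1] - (k - 1) * P[k - 2]) / k)
    return np.column_stack(P[: order + 1])

# weeks 2-6 of egg production, second-order basis (as in the MTRR23 model)
Z = legendre_basis([2, 3, 4, 5, 6], tmin=2, tmax=6, order=2)
```

Each animal's additive genetic (or permanent environmental) effect is then a random vector of regression coefficients on these columns, so the covariance of effects across ages is a smooth function of the basis.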
The application of sensitivity analysis to models of large scale physiological systems
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported, as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
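The kind of parameter sensitivity described here is often summarized by the normalized sensitivity coefficient S = (p/y)(∂y/∂p), approximated by central differences. A sketch using a simple logistic population model (illustrative; not one of the models from the survey):

```python
import math

def logistic(t, N0, r, K):
    """Closed-form logistic growth: N(t) for initial size N0, rate r,
    carrying capacity K."""
    return K / (1 + (K / N0 - 1) * math.exp(-r * t))

def normalized_sensitivity(f, params, name, t, h=1e-4):
    """S = (p / y) * dy/dp via central differences: the percent change in
    output per percent change in the parameter."""
    p = params[name]
    up = dict(params, **{name: p * (1 + h)})
    dn = dict(params, **{name: p * (1 - h)})
    y = f(t, **params)
    dydp = (f(t, **up) - f(t, **dn)) / (2 * p * h)
    return p * dydp / y

params = dict(N0=10.0, r=0.3, K=1000.0)
# late in the trajectory the population sits near K, so K dominates
S_r = normalized_sensitivity(logistic, params, "r", t=40.0)
S_K = normalized_sensitivity(logistic, params, "K", t=40.0)
```

Ranking parameters by |S| at the times of interest is exactly the "relative parameter influence" information the abstract describes, and it tells the experimentalist which parameters repay more precise measurement.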
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
Directory of Open Access Journals (Sweden)
L. A. Bastidas
2016-09-01
Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
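Screening which of many parameters matter, as done here, is commonly approached with Morris one-at-a-time elementary effects. A minimal sketch on the unit hypercube with a toy response function (not the Delft3D model; a full Morris design would also use the spread of the effects to flag interactions):

```python
import numpy as np

def morris_mu_star(f, k, r, delta=0.5, rng=None):
    """Morris one-at-a-time screening: for r random base points in
    [0, 1-delta]^k, perturb one factor at a time by delta and record the
    elementary effect (change in f divided by delta). Returns mu*
    (the mean absolute elementary effect) per factor."""
    if rng is None:
        rng = np.random.default_rng()
    ee = np.empty((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        fx = f(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            ee[i, j] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

# toy response: factor 0 strong and linear, 1 nonlinear, 2 inactive
f = lambda x: 3 * x[0] + x[1] ** 2 + 0.0 * x[2]
mu_star = morris_mu_star(f, k=3, r=50, rng=np.random.default_rng(3))
```

Factors with mu* near zero (like the threshold depth or eddy viscosity in the study) can be frozen at nominal values, shrinking the parameter space before any expensive variance-based analysis.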
DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION
Energy Technology Data Exchange (ETDEWEB)
VONGARGEN BH
2009-11-03
The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remediation action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon
Directory of Open Access Journals (Sweden)
L. Meng
2012-07-01
Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different than evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH₄ yr⁻¹ (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH₄ yr⁻¹. Tropical wetlands contributed 201 Tg CH₄ yr⁻¹, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH₄ yr⁻¹. However, sensitivity studies show a large range (150–346 Tg CH₄ yr⁻¹) in predicted global methane emissions (excluding emissions from rice paddies). The large range is
Dzhambov, Angel M; Dimitrova, Donka D; Dimitrakova, Elena D
2014-01-01
Many women are exposed daily to high levels of occupational and residential noise, so the effect of noise exposure on pregnancy should be considered because noise affects both the fetus and the mother herself. However, there is a controversy in the literature regarding the adverse effects of occupational and residential noise on pregnant women and their fetuses. The aim of this study was to conduct a systematic review of previously analyzed studies, to add additional information omitted in previous reviews and to perform meta-analyses on the effects of noise exposure on pregnancy, birth outcomes and fetal development. Previous reviews and meta-analyses on the topic were consulted. Additionally, a systematic search in MEDLINE, EMBASE and the Internet was carried out. Twenty-nine studies were included in the meta-analyses. A quality-effects meta-analytical model was applied. Women exposed to high noise levels (in most of the studies ≥ 80 dB) during pregnancy are at a significantly higher risk for having a small-for-gestational-age newborn (RR = 1.19, 95% CI: 1.03, 1.38), gestational hypertension (RR = 1.27, 95% CI: 1.03, 1.58) and an infant with congenital malformations (RR = 1.47, 95% CI: 1.21, 1.79). The effect was not significant for preeclampsia, perinatal death, spontaneous abortion and preterm birth. The results are consistent with previous findings regarding a higher risk for small-for-gestational-age. They also highlight the significance of residential and occupational noise exposure for developing gestational hypertension and especially congenital malformations.
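The pooling step of such a meta-analysis can be sketched with a plain fixed-effect inverse-variance model on the log risk-ratio scale (the study itself used a quality-effects model, which additionally weights by study quality; the two studies below are hypothetical):

```python
import math

def pool_rr(rrs, cis):
    """Fixed-effect inverse-variance pooling of risk ratios on the log
    scale. Standard errors are recovered from the reported 95% CIs as
    (ln(hi) - ln(lo)) / (2 * 1.96). Returns (pooled RR, lo95, hi95)."""
    w_sum = wx_sum = 0.0
    for rr, (lo, hi) in zip(rrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        w_sum += w
        wx_sum += w * math.log(rr)
    mean = wx_sum / w_sum
    se_pooled = math.sqrt(1.0 / w_sum)
    return (math.exp(mean),
            math.exp(mean - 1.96 * se_pooled),
            math.exp(mean + 1.96 * se_pooled))

# two hypothetical studies of the same exposure-outcome pair
pooled, lo, hi = pool_rr([1.3, 1.1], [(1.0, 1.69), (0.9, 1.34)])
```

The pooled estimate lands between the study estimates, pulled toward the more precise (narrower-CI) study, and its interval is tighter than either input interval.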
Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus
Directory of Open Access Journals (Sweden)
Rousvoal Sylvie
2008-08-01
Full Text Available Abstract Background Brown algae are multi-cellular plant organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
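The geNorm principle applied above ranks candidate reference genes by a stability measure M: the average, over all other candidates, of the standard deviation of log-transformed pairwise expression ratios across conditions. A compact sketch with toy data (gene names and values are illustrative, not the paper's measurements):

```python
import math
from statistics import stdev

def genorm_m(expr):
    """geNorm stability measure M per gene; lower M = more stable.

    expr: dict mapping gene name -> list of expression values
    (linear scale) across the same ordered set of conditions.
    """
    m = {}
    for j in expr:
        pair_sds = []
        for k in expr:
            if k == j:
                continue
            log_ratios = [math.log2(a / b) for a, b in zip(expr[j], expr[k])]
            pair_sds.append(stdev(log_ratios))
        m[j] = sum(pair_sds) / len(pair_sds)
    return m

# Two co-regulated stable candidates and one erratic one (toy values)
m = genorm_m({"ef1a": [1, 2, 3, 4], "tua": [2, 4, 6, 8], "noisy": [1, 1, 8, 1]})
```

Genes whose ratios to other candidates stay constant across conditions (here `ef1a` and `tua`) receive the lowest M and would be retained for the normalisation factor.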
Schmidt, Hanns-Maximilian; Wiens, Marcus, Dr.; Schultmann, Frank, Prof. Dr.
2015-04-01
The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these indirect impacts, one has to gain insights into the directly hit economic structure before being able to calculate these side effects. Especially regarding the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it would provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available at the national statistical offices. However, when it comes to developing countries (e.g. South-East Asia), the data quality and availability are usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation provides a literature review and a summary of potential models that seem useful for this specific task.
GCR Environmental Models I: Sensitivity Analysis for GCR Environments
Slaba, Tony C.; Blattnig, Steve R.
2014-01-01
Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate model-free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most of the GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to very accurately reproduce the ACE/CRIS data while inducing very different effective dose values behind shielding.
Combined calibration and sensitivity analysis for a water quality model of the Biebrza River, Poland
Perk, van der M.; Bierkens, M.F.P.
1995-01-01
A study was performed to quantify the error in results of a water quality model of the Biebrza River, Poland, due to uncertainties in calibrated model parameters. The procedure used in this study combines calibration and sensitivity analysis. Finally, the model was validated to test the model
A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.
Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P
2018-04-01
Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity analysis was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework that automates the GastroPlus graphical user interface with AutoIt and executes the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced the computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would otherwise have been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures, as well as extending the framework to consider a whole-body PBPK model.
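The stage-1 Morris screening can be sketched compactly with a radial one-at-a-time design (an analytic toy function stands in for GastroPlus, which the paper drives through its GUI via AutoIt; the bounds and design here are illustrative, and a production analysis would use a library such as SALib):

```python
import numpy as np

def morris_mu_star(model, bounds, r=20, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) per parameter from a
    radial one-at-a-time design: a cheap screen before Sobol analysis."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    k = len(bounds)
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, k)       # base point in the unit cube
        y0 = model(lo + x * (hi - lo))
        for j in range(k):
            xp = x.copy()
            xp[j] += delta                          # perturb one factor at a time
            ee[i, j] = abs(model(lo + xp * (hi - lo)) - y0) / delta
    return ee.mean(axis=0)

# Toy response: dominated by p0, weakly nonlinear in p1, insensitive to p2
toy = lambda p: 10.0 * p[0] + 0.1 * p[1] ** 2
mu_star = morris_mu_star(toy, [(0, 1), (0, 1), (0, 1)])
```

Parameters with negligible mu* (here the third) are dropped before the costlier variance-based Sobol stage, which is the source of the 2-stage cost saving.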
Directory of Open Access Journals (Sweden)
Young-Chan Noh
2016-07-01
Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement amongst datasets for temperature, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of UM into the KMA in May 2012 resulted in an improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.
Directory of Open Access Journals (Sweden)
Jiang Xiangwen
2015-06-01
Full Text Available Based on the computational fluid dynamics (CFD) method, an electromagnetic high-frequency method and surrogate model optimization techniques, an integrated aerodynamic/stealth design method has been established for helicopter rotors. The developed method is composed of three modules: integrated grid generation (the moving-embedded grids for the CFD solver and the blade grids for the radar cross section (RCS) solver are generated by solving Poisson equations and by a folding approach), an aerodynamic/stealth solver (the aerodynamic characteristics are simulated by the CFD method based upon the Navier–Stokes equations and the Spalart–Allmaras (S–A) turbulence model, and the stealth characteristics are calculated using a panel edge method combining the methods of physical optics (PO), equivalent currents (MEC) and quasi-stationary (MQS) approximations), and integrated optimization analysis (based upon the surrogate model optimization technique with full factorial design (FFD) and radial basis functions (RBF), integrated optimization analyses of the aerodynamic/stealth characteristics of the rotor are conducted). Firstly, the scattering characteristics of the rotor with different blade-tip swept and twist angles have been analysed, and time–frequency domain grayscale maps showing the strong scattering regions of the rotor have been given. Meanwhile, the effects of swept-tip and twist angles on the aerodynamic characteristics of the rotor have been examined. Furthermore, by choosing a suitable objective function and constraint conditions, a compromise design of swept and twist combinations with high aerodynamic performance and low scattering characteristics has been obtained.
Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling
Directory of Open Access Journals (Sweden)
Sung-Chien Lin
2014-07-01
Full Text Available In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input for the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the whole analytic period, and the field was increasing in stability. Both core journals broadly paid attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
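Perplexity, used above to select the number of topics, is the exponentiated negative average log-likelihood per token of held-out text, so lower values indicate a better fit. A minimal sketch (it assumes per-token log-probabilities have already been produced by a fitted topic model):

```python
import math

def perplexity(token_log_probs):
    """Perplexity of held-out tokens: exp(-mean log-probability)."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```

A model that assigned every token a uniform probability over a 50-word vocabulary would score a perplexity of exactly 50; comparing this value across candidate topic counts (10 being the minimum found in the study) selects the model.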
Directory of Open Access Journals (Sweden)
Samy Ismail Elmahdy
2016-01-01
Full Text Available In the current study, Penang Island, one of several mountainous areas in Malaysia that are often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation with a spatially weighted probability approach and model builder was applied to map and analyse landslides on Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted probability spatial model based on their contribution to the landslide hazard. Results obtained showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability for landslide occurrence; the total areas were 21.393 km² (11.84%) and 58.690 km² (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the occurred landslide locations and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict the unpredictable landslide hazard. The method is time and cost effective and can be used as a reference by geological and geotechnical engineers.
Energy Technology Data Exchange (ETDEWEB)
Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)
2014-07-01
The consequences of the two largest nuclear accidents of the last decades - at Chernobyl Nuclear Power Plant (ChNPP) (1986) and at Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it (river-reservoir water after the Chernobyl accident, rivers and coastal marine waters after the Fukushima accident) has in both cases been one of the main sources of public concern about the accident consequences. The greater weight of water contamination in the public perception of the accidents, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specific feature of the public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accidental radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from the new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), and the Kuchibuto, Uta, Niita, Natsui and Same Rivers, as well as studies on the specifics of the 'water-sediment' ¹³⁷Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of the post-accidental measures to diminish the contamination of the water bodies.
Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy
International Nuclear Information System (INIS)
Moeller, Bernd
2011-01-01
Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and consider risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner and for larger areas at once. Consequences may be investments short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios have been described, which include restrictions of using offshore areas, as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes uncertainties of technology development, environmental protection as well as competing area uses and illustrates how such models might assist in ameliorating public planning, while procuring decision bases for the political process. The method can be adapted to different research questions, and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for
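The cost-supply curves mentioned in the results can be sketched as a simple sort-and-accumulate over candidate grid cells (the toy arrays below are illustrative; the actual model derives cost and capacity per raster cell from many geographical, technological and planning factors):

```python
import numpy as np

def cost_supply_curve(cost, capacity):
    """Order candidate cells by unit cost and accumulate their capacity.

    cost: production cost per cell (e.g. EUR/MWh); capacity: MW per cell.
    Returns (sorted costs, cumulative capacity) for plotting cost vs. supply.
    """
    order = np.argsort(cost)
    return cost[order], np.cumsum(capacity[order])

costs, supply = cost_supply_curve(np.array([50.0, 30.0, 40.0]),
                                  np.array([10.0, 5.0, 20.0]))
```

Reading the curve at a given cost ceiling yields the economically exploitable capacity under that scenario's constraints, which is how such curves feed resource economic assessment and policy analysis.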
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
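Of the methods compared above, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC) is straightforward to sketch on a toy SIR model (Euler integration; the parameter ranges and this compact re-implementation are illustrative, not the authors' code):

```python
import numpy as np

def sir_peak(beta, gamma, days=160, dt=0.1):
    """Peak infected fraction of a basic SIR model (Euler integration)."""
    s, i = 0.99, 0.01
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i
        s, i = s - new_inf * dt, i + (new_inf - gamma * i) * dt
        peak = max(peak, i)
    return peak

def prcc(x, y):
    """Partial rank correlation of each column of x with output y."""
    rank = lambda a: np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    rx, ry = rank(x), rank(y.reshape(-1, 1)).ravel()
    n, k = x.shape
    out = np.empty(k)
    for j in range(k):
        z = np.column_stack([np.ones(n), np.delete(rx, j, axis=1)])
        res_x = rx[:, j] - z @ np.linalg.lstsq(z, rx[:, j], rcond=None)[0]
        res_y = ry - z @ np.linalg.lstsq(z, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Latin hypercube sample: one stratified draw per interval, shuffled per column
rng = np.random.default_rng(1)
n = 200
u = (np.argsort(rng.random((n, 2)), axis=0) + rng.random((n, 2))) / n
beta = 0.2 + 0.6 * u[:, 0]     # transmission rate (illustrative range)
gamma = 0.05 + 0.15 * u[:, 1]  # recovery rate (illustrative range)
peaks = np.array([sir_peak(b, g) for b, g in zip(beta, gamma)])
r = prcc(np.column_stack([beta, gamma]), peaks)
```

The PRCC correctly signs the monotone influences: the transmission rate raises the epidemic peak, the recovery rate lowers it, which is the kind of parameter ranking such global methods deliver.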
Directory of Open Access Journals (Sweden)
N. Sczygiol
2007-12-01
Full Text Available The presented paper evaluates the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built with the use of an enthalpy formulation for solidification and an intermediate solid fraction growth model. The model sensitivity is studied with the use of the Morris method, which is one of the global sensitivity methods. A characteristic feature of global methods is the necessity to conduct a series of simulations of the investigated model with appropriately chosen model parameters. The advantage of the Morris method is that it reduces the number of necessary simulations. The results of the presented work answer the question of how generic sensitivity analysis results are; in particular, whether they depend only on model characteristics and not on factors such as the density of the finite element mesh or the shape of the region. The results of this research allow us to conclude that sensitivity analysis using the Morris method depends only on the characteristics of the investigated model.
Development and application of model RAIA uranium on-line analyser
International Nuclear Information System (INIS)
Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru
1999-01-01
The working principle, structure, adjustment and application of model RAIA on-line analyser are reported. The performance of this instrument is reliable. For identical sample, the signal fluctuation in continuous monitoring for four months is less than +-1%. According to required measurement range, appropriate length of sample cell is chosen. The precision of measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in process stream can be displayed automatically and printed at any time. It presents 4∼20 mA current signal being proportional to the uranium concentration. This makes a long step towards process continuous control and computer management
A model for analysing factors which may influence quality management procedures in higher education
Directory of Open Access Journals (Sweden)
Cătălin MAICAN
2015-12-01
Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity as regards the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined model for evaluation based on statistical data mining methods: starting from the findings revealed by the evaluations teachers performed of students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades, subjects which were subsequently evaluated by the students. The results of these analyses allowed the formulation of certain measures for enhancing the quality of the evaluation process.
Developing a system dynamics model to analyse environmental problem in construction site
Haron, Fatin Fasehah; Hawari, Nurul Nazihah
2017-11-01
This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere in the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance bio-diversity, conserve water, respect people and their local environment, measure performance, and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems can cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during project implementation, or because of modifications that may be required subsequently in order to make the action environmentally acceptable. Thus, findings from this study have helped in significantly reducing the damaging impact on the environment and in improving the performance of the environmental management system at the construction site.
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
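The chain from Hazus loss estimates to LUPM-style decision metrics can be illustrated with a deliberately simplified calculation (undiscounted and deterministic; the LUPM's actual return measure incorporates hazard probabilities and uncertainty, so treat the formula below as a hypothetical stand-in):

```python
def mitigation_metrics(loss_without, loss_with, mitigation_cost, horizon_years):
    """Losses avoided by mitigation and a crude annualized return on it."""
    losses_avoided = loss_without - loss_with
    net_benefit = losses_avoided - mitigation_cost
    annual_ror = net_benefit / mitigation_cost / horizon_years
    return losses_avoided, annual_ror

# Hypothetical scenario: $100M estimated loss unmitigated, $40M mitigated,
# achieved with a $30M retrofit program evaluated over 10 years.
avoided, ror = mitigation_metrics(100e6, 40e6, 30e6, 10)
```

Feeding scenario-specific Hazus loss estimates into such a calculation, per policy alternative, is what allows mitigation plans to be ranked by losses avoided and return on investment.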
International Nuclear Information System (INIS)
Reyes F, M. C.; Del Valle G, E.; Gomez T, A. M.; Sanchez E, V.
2015-09-01
A methodology was implemented to carry out a sensitivity and uncertainty analysis of the cross sections used in a coupled Trace/Parcs model of a control rod drop transient in a BWR-5. A model of the reactor core was used for the neutronic code Parcs, in which the assemblies located in the core are described. The thermo-hydraulic model in Trace was simple: only one component of type Chan was designed to represent all the core assemblies, placed within a single vessel, and boundary conditions were established. The thermo-hydraulic part was coupled with the neutronic part, first for the steady state, and then the control rod drop transient was carried out for the sensitivity and uncertainty analysis. To analyse the cross sections used in the coupled Trace/Parcs model during the transient, probability density functions were generated for 22 parameters selected from the neutronic parameters used by Parcs, obtaining 100 different cases for the coupled model, each one with a different cross-section database. All these cases were executed with the coupled model, yielding 100 different output files for the control rod drop transient, with emphasis on the nominal power, for which an uncertainty analysis was performed and the uncertainty band generated. With this analysis it is possible to observe the ranges of results of the chosen responses as the selected uncertainty parameters vary. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the results, so that attention can focus on these parameters in order to better understand their effects. Beyond the results obtained, because this is not a model with real operation data, the importance of this work lies in demonstrating the application of the methodology for carrying out sensitivity and uncertainty analyses. (Author)
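The propagation step described above (sample parameter PDFs, run the coupled model once per sampled set, read off an uncertainty band on the response) can be sketched generically; the toy transient and the normal parameter distribution below are hypothetical stand-ins for the 100 Trace/Parcs cases:

```python
import numpy as np

def uncertainty_band(model, param_dists, n_cases=100, seed=0):
    """Monte Carlo propagation: sample each parameter's PDF, run the
    model per case, and return the (2.5th, 50th, 97.5th) percentile band."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_cases):
        params = {name: rng.normal(mu, sd) for name, (mu, sd) in param_dists.items()}
        runs.append(model(params))
    return np.percentile(np.array(runs), [2.5, 50.0, 97.5], axis=0)

# Toy stand-in for the transient: normalized power dips by an amount set
# by a perturbed cross-section-like parameter (hypothetical)
t = np.linspace(0.0, 10.0, 50)
toy_transient = lambda p: 1.0 - p["sigma_a"] * np.exp(-((t - 5.0) ** 2))
low, median, high = uncertainty_band(toy_transient, {"sigma_a": (0.3, 0.05)})
```

Plotting `low` and `high` around `median` reproduces the kind of uncertainty band described for the nominal power; the sensitivity step then asks which of the sampled parameters drives the band's width.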
Thomas, R. Q.; Williams, M.
2014-12-01
Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has
van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A
2016-01-01
The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected of 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsal lateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward-paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. nAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventral medial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the nAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development. Copyright © 2015. Published by Elsevier Inc.
Healthy volunteers can be phenotyped using cutaneous sensitization pain models
DEFF Research Database (Denmark)
Werner, Mads U; Petersen, Karin; Rowbotham, Michael C
2013-01-01
Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repe...
Sensitivity Analysis in Structural Equation Models: Cases and Their Influence
Pek, Jolynn; MacCallum, Robert C.
2011-01-01
The detection of outliers and influential observations is routine practice in linear regression. Despite ongoing extensions and development of case diagnostics in structural equation models (SEM), their application has received limited attention and understanding in practice. The use of case diagnostics informs analysts of the uncertainty of model…
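The routine linear-regression diagnostics the abstract contrasts with SEM practice can be sketched directly. Below is a minimal illustration of Cook's distance; the dataset and the single contaminated case are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple linear regression with one gross outlier.
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, n)
y[10] += 8.0                     # contaminate one case

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
p = X.shape[1]
s2 = resid @ resid / (n - p)     # residual variance estimate
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages

# Cook's distance: influence of each case on the fitted coefficients.
cooks_d = resid**2 / (p * s2) * h / (1 - h) ** 2

print(int(np.argmax(cooks_d)))   # the contaminated case stands out
```

Case diagnostics for SEM generalize this idea to influence on estimated covariance-structure parameters rather than regression coefficients.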
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
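A full mixed-model fit is beyond a short sketch, but the population-level change and individual differences the abstract describes can be approximated by a simple two-stage analysis: fit each person's own line, then average the coefficients. This is a crude stand-in for maximum-likelihood mixed-model estimation, and all data below are simulated (note the individuals are observed at unequal, person-specific time points, as the abstract allows):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate unbalanced longitudinal data: y = (b0 + u0) + (b1 + u1) * t + noise
true_b0, true_b1 = 10.0, 2.0
records = []                      # (person, time, response) triples
for person in range(100):
    u0, u1 = rng.normal(0, 1.0), rng.normal(0, 0.3)   # person-level deviations
    times = np.sort(rng.uniform(0, 5, rng.integers(3, 8)))  # unequal time points
    for t in times:
        records.append((person, t, true_b0 + u0 + (true_b1 + u1) * t + rng.normal(0, 0.5)))

# Stage 1: fit each person's own line; Stage 2: average the coefficients.
by_person = {}
for person, t, y in records:
    by_person.setdefault(person, []).append((t, y))

slopes, intercepts = [], []
for obs in by_person.values():
    t = np.array([o[0] for o in obs]); y = np.array([o[1] for o in obs])
    b1, b0 = np.polyfit(t, y, 1)  # per-person OLS slope and intercept
    slopes.append(b1); intercepts.append(b0)

print(round(float(np.mean(intercepts)), 1), round(float(np.mean(slopes)), 1))
```

Unlike this two-stage shortcut, a proper mixed model pools information across persons and handles MAR missingness through the likelihood, which is the point of the paper's sensitivity analyses.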
Land Building Models: Uncertainty in and Sensitivity to Input Parameters
2013-08-01
Vicksburg, MS: US Army Engineer Research and Development Center. An electronic copy of this CHETN is available from http://chl.erdc.usace.army.mil/chetn...Nourishment Module, Chapter 8. In Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area (LCA) Comprehensive
A duopoly model with heterogeneous congestion-sensitive customers
Mandjes, M.R.H.; Timmer, Judith B.
2003-01-01
This paper analyzes a model with multiple firms (providers), and two classes of customers. These customer classes are characterized by their attitude towards 'congestion' (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the firms,
A duopoly model with heterogeneous congestion-sensitive customers.
Mandjes, M.R.H.; Timmer, J.
2007-01-01
This paper analyzes a model with two firms (providers), and two classes of customers. These customer classes are characterized by their attitude towards ‘congestion’ (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the
A duopoly model with heterogeneous congestion-sensitive customers
Mandjes, M.R.H.; Timmer, Judith B.
This paper analyzes a model with two firms (providers), and two classes of customers. These customer classes are characterized by their attitude towards ‘congestion’ (caused by other customers using the same resources); a firm is selected on the basis of both the prices charged by the firms, and
Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling
2008-01-01
Modelling flow through unsaturated zones: Sensitivity to unsaturated ...
Indian Academy of Sciences (India)
water flow through unsaturated zones and study the effect of unsaturated soil parameters on water movement during different processes such as gravity drainage and infiltration. 2. Modelling Richards equation for vertical unsaturated flow. For one-dimensional vertical flow in unsaturated soil, the pressure-head based ...
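The constitutive relations a pressure-head based Richards solver needs (water content and hydraulic conductivity as functions of pressure head) can be sketched with the van Genuchten–Mualem model; the parameter values below are illustrative textbook-style values, not those of the study:

```python
import numpy as np

def van_genuchten(h, theta_r=0.065, theta_s=0.41, alpha=0.075, n=1.89):
    """Water content and relative conductivity for pressure head h < 0 (cm).
    Defaults are illustrative loamy-sand-like values (an assumption)."""
    m = 1 - 1 / n
    Se = (1 + (alpha * np.abs(h)) ** n) ** (-m)             # effective saturation
    theta = theta_r + (theta_s - theta_r) * Se              # retention curve
    Kr = np.sqrt(Se) * (1 - (1 - Se ** (1 / m)) ** m) ** 2  # Mualem relative K
    return theta, Kr

# Sensitivity of conductivity at h = -100 cm to the shape parameter n
for n in (1.3, 1.6, 1.9):
    _, Kr = van_genuchten(np.array([-100.0]), n=n)
    print(n, float(Kr[0]))
```

The strong dependence of the unsaturated conductivity on the shape parameter is exactly the kind of soil-parameter sensitivity the article examines for gravity drainage and infiltration.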
International Nuclear Information System (INIS)
Williams, D. G.
2003-01-01
This paper explores the trends over 1997-2001 in my baseline simulation analysis of the sufficiency of electric utilities' funds to eventually decommission the nation's nuclear power plants. Further, for 2001, I describe the utilities' funding adequacy results obtained using scenario and sensitivity analyses, respectively. In this paper, I focus more on the wide variability observed in these adequacy measures among utilities than on the results for the "average" utility in the nuclear industry. Only individual utilities, not average utilities -- often used by the nuclear industry to represent its funding adequacy -- will decommission their nuclear plants. Industry-wide results tend to mask the varied results for individual utilities. This paper shows that over 1997-2001, the variability of my baseline decommissioning funding adequacy measures (in percentages) for both utility fund balances and current contributions has remained very large, reflected in the sizable ranges and frequency distributions of these percentages. The relevance of this variability for nuclear decommissioning funding adequacy is, of course, focused more on those utilities that show below ideal balances and contribution levels. Looking backward, 42 of 67 utility fund (available) balances, in 2001, were above (and 25 below) their ideal baseline levels; in 1997, 42 of 76 were above (and 34 below) ideal levels. Of these, many utility balances were far above, and many far below, such ideal levels. The problem of certain utilities continuing to show balances much below ideal persists even with increases in the adequacy of "average" utility balances
Directory of Open Access Journals (Sweden)
Vieira Verónica M
2012-08-01
Full Text Available. Background: Although daily emergency department (ED) data is a source of information that often includes residence, its potential for space-time analyses at the individual level has not been fully explored. We propose that ED data collected for surveillance purposes can also be used to inform spatial and temporal patterns of disease using generalized additive models (GAMs). This paper describes the methods for adapting GAMs so they can be applied to ED data. Methods: GAMs are an effective approach for modeling spatial and temporal distributions of point-wise data, producing smoothed surfaces of continuous risk while adjusting for confounders. In addition to disease mapping, the method allows for global and pointwise hypothesis testing and selection of a statistically optimum degree of smoothing using standard statistical software. We applied a two-dimensional GAM for location to ED data of overlapping calendar time using a locally-weighted regression smoother. To illustrate our methods, we investigated the association between participants’ address and the risk of gastrointestinal illness in Cape Cod, Massachusetts over time. Results: The GAM space-time analyses simultaneously smooth in units of distance and time by using the optimum degree of smoothing to create data frames of overlapping time periods and then spatially analyzing each data frame. When resulting maps are viewed in series, each data frame contributes a movie frame, allowing us to visualize changes in magnitude, geographic size, and location of elevated risk smoothed over space and time. In our example data, we observed an underlying geographic pattern of gastrointestinal illness with risks consistently higher in the eastern part of our study area over time and intermittent variations of increased risk during brief periods. Conclusions: Spatial-temporal analysis of emergency department data with GAMs can be used to map underlying disease risk at the individual level and view
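The locally-weighted spatial smoothing inside such a GAM can be illustrated with a crude locally-weighted mean of a binary case indicator; this is a simplification of the paper's method (no confounder adjustment, no hypothesis testing), run on synthetic case-control points with, as in the example data, higher risk in the east:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic case-control points: risk elevated in the eastern half (x > 0.5).
n = 2000
xy = rng.uniform(0, 1, (n, 2))
p = np.where(xy[:, 0] > 0.5, 0.3, 0.1)        # true underlying risk
case = rng.uniform(size=n) < p

def smoothed_risk(grid_pt, xy, case, span=0.15):
    """Locally-weighted mean of the case indicator (tri-cube weights),
    a crude stand-in for a 2-D loess smooth inside a GAM."""
    d = np.linalg.norm(xy - grid_pt, axis=1)
    w = np.clip(1 - (d / span) ** 3, 0, None) ** 3
    return float(np.average(case, weights=w))

west = smoothed_risk(np.array([0.25, 0.5]), xy, case)
east = smoothed_risk(np.array([0.75, 0.5]), xy, case)
print(round(west, 2), round(east, 2))   # east noticeably higher
```

Evaluating the smoother on a grid of points yields the continuous risk surface; repeating this on overlapping time windows gives the "movie frame" series the abstract describes.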
Energy Technology Data Exchange (ETDEWEB)
Rankinen, K.; Granlund, K. [Finnish Environmental Inst., Helsinki (Finland); Futter, M. N. [Swedish Univ. of Agricultural Sciences, Uppsala (Sweden)
2013-11-01
The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low-flow periods, so effects on total loads were small. (orig.)
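Generalized sensitivity analysis in the Hornberger–Spear sense can be sketched as follows: sample parameters, split the runs into behavioral and non-behavioral according to fit with observations, and compare each parameter's distribution between the two groups; a large separation marks an influential parameter. The toy model and acceptance band below are invented, not INCA-N:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy catchment-like model: output depends strongly on k1, weakly on k3.
def model(k1, k2, k3):
    return 10 * k1 + 2 * k2 + 0.1 * k3

n = 5000
params = rng.uniform(0, 1, (n, 3))
out = model(*params.T)

# Split runs into "behavioral" (close to a target observation) vs the rest.
behavioral = np.abs(out - 6.0) < 1.0

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max CDF gap)."""
    allv = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), allv, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), allv, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

# A large KS distance marks an influential ("sensitive") parameter.
ds = []
for i, name in enumerate(["k1", "k2", "k3"]):
    d = ks_stat(params[behavioral, i], params[~behavioral, i])
    ds.append(d)
    print(name, round(d, 2))
```

Uncertainty bounds like those in the abstract then come from the spread of outputs over the behavioral parameter sets.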
Model sensitivity studies of the decrease in atmospheric carbon tetrachloride
Directory of Open Access Journals (Sweden)
M. P. Chipperfield
2016-12-01
Full Text Available Carbon tetrachloride (CCl4) is an ozone-depleting substance, which is controlled by the Montreal Protocol and for which the atmospheric abundance is decreasing. However, the current observed rate of this decrease is known to be slower than expected based on reported CCl4 emissions and its estimated overall atmospheric lifetime. Here we use a three-dimensional (3-D) chemical transport model to investigate the impact on its predicted decay of uncertainties in the rates at which CCl4 is removed from the atmosphere by photolysis, by ocean uptake and by degradation in soils. The largest sink is atmospheric photolysis (74 % of total), but a reported 10 % uncertainty in its combined photolysis cross section and quantum yield has only a modest impact on the modelled rate of CCl4 decay. This is partly due to the limiting effect of the rate of transport of CCl4 from the main tropospheric reservoir to the stratosphere, where photolytic loss occurs. The model suggests large interannual variability in the magnitude of this stratospheric photolysis sink caused by variations in transport. The impact of uncertainty in the minor soil sink (9 % of total) is also relatively small. In contrast, the model shows that uncertainty in ocean loss (17 % of total) has the largest impact on modelled CCl4 decay due to its sizeable contribution to CCl4 loss and large lifetime uncertainty range (147 to 241 years). With an assumed CCl4 emission rate of 39 Gg year⁻¹, the reference simulation with the best estimate of loss processes still underestimates the observed CCl4 (overestimates the decay) over the past 2 decades, but to a smaller extent than previous studies. Changes to the rate of CCl4 loss processes, in line with known uncertainties, could bring the model into agreement with in situ surface and remote-sensing measurements, as could an increase in emissions to around 47 Gg year⁻¹. Further progress in constraining the CCl4 budget is partly limited by
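The sink-budget reasoning can be illustrated with a one-box model, far simpler than the 3-D transport model of the paper. The sink fractions, the 147-241 year ocean-lifetime range, and the 39 Gg per year emission rate come from the abstract; the ~80 ppt abundance and the photolysis and soil partial lifetimes are outside assumptions chosen to be consistent with those fractions:

```python
# One-box budget for atmospheric CCl4: dB/dt = E - B / tau_total.
# Numbers other than the sink fractions, the ocean-lifetime range and the
# 39 Gg/yr emissions are illustrative assumptions, not values from the paper.
MW_RATIO = 153.8 / 28.97           # CCl4 vs mean-air molar mass
AIR_MASS = 5.15e18                 # kg, total atmosphere
burden0 = 80e-12 * MW_RATIO * AIR_MASS * 1e-6   # Gg, from an assumed ~80 ppt

emissions = 39.0                   # Gg / yr (value explored in the paper)

def decline_rate(tau_ocean, tau_photo=44.0, tau_soil=375.0):
    """Percent-per-year decline implied by combining partial lifetimes."""
    tau_total = 1.0 / (1 / tau_photo + 1 / tau_ocean + 1 / tau_soil)
    dB = emissions - burden0 / tau_total        # Gg / yr (negative: decline)
    return -100.0 * dB / burden0

# Sensitivity to the ocean-uptake lifetime range quoted in the abstract
for tau_ocean in (147.0, 194.0, 241.0):
    print(tau_ocean, round(decline_rate(tau_ocean), 2))
```

A shorter ocean lifetime (stronger ocean sink) gives a faster modelled decline, which is why the ocean-loss uncertainty dominates the modelled decay in the paper.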
International Nuclear Information System (INIS)
Clifton, P.M.
1984-12-01
The deep basalt formations beneath the Hanford Site are being investigated for the Department of Energy (DOE) to assess their suitability as a host medium for a high level nuclear waste repository. Predicted performance of the proposed repository is an important part of the investigation. One of the performance measures being used to gauge the suitability of the host medium is pre-waste-emplacement groundwater travel times to the accessible environment. Many deterministic analyses of groundwater travel times have been completed by Rockwell and other independent organizations. Recently, Rockwell has completed a preliminary stochastic analysis of groundwater travel times. This document presents analyses that show the sensitivity of the results from the previous stochastic travel time study to: (1) scale of representation of model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross-correlation between transmissivity and effective thickness. 40 refs., 29 figs., 6 tabs
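A stochastic travel-time calculation of the kind described can be sketched as a Monte Carlo over a lognormal transmissivity. All parameter values below are illustrative, not the Hanford basalt values, and a single-path Darcy formula stands in for the full flow model (the sketch also ignores spatial correlation, one of the factors the study varies):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative values only (not the Hanford site parameters):
L = 5000.0      # m, path length to the accessible environment
dH = 50.0       # m, head drop over the path
b = 30.0        # m, effective thickness
n_e = 0.01      # effective porosity

def travel_time_percentiles(sigma_lnT, mu_lnT=np.log(1e-4), n=100_000):
    """Monte Carlo travel times for lognormal transmissivity T (m^2/s).
    Darcy: t = n_e * L^2 * b / (T * dH), converted to years."""
    T = rng.lognormal(mu_lnT, sigma_lnT, n)
    t = n_e * L**2 * b / (T * dH) / (3600 * 24 * 365.25)
    return np.percentile(t, [5, 50, 95])

# Larger log-T variability widens the travel-time distribution.
for sigma in (0.5, 1.0, 2.0):
    p5, p50, p95 = travel_time_percentiles(sigma)
    print(sigma, round(p5), round(p50), round(p95))
```

The widening 5th-95th percentile band with increasing log-transmissivity variability is the kind of sensitivity the stochastic study quantifies; adding a finite correlation range would reduce the effective variance along a flow path.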
DEFF Research Database (Denmark)
Christensen, Claus Dencker; Byskov, Esben
2008-01-01
The postbuckling behavior and imperfection sensitivity of the Shanley-Hutchinson plastic model column introduced by Hutchinson in 1973 are examined. The study covers the initial, buckled state and the advanced postbuckling regime of the geometrically perfect realization as well as its sensitivity...
Protein model discrimination using mutational sensitivity derived from deep sequencing.
Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan
2012-02-08
A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.
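The model-discrimination idea (mutational sensitivity tracks residue burial, so decoys whose depth profile matches the experimental scores rank highest) can be sketched with a rank correlation on synthetic profiles; the profile sizes and noise levels below are invented, not the CcdB data:

```python
import numpy as np

rng = np.random.default_rng(6)

def rank(a):
    """Simple ranks (continuous synthetic data, so no tie handling)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks."""
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

n_res = 300
true_depth = rng.uniform(0, 10, n_res)                # residue depths, native fold
sensitivity = true_depth + rng.normal(0, 1.5, n_res)  # buried residues less mutable

# Decoys: depth profiles increasingly decorrelated from the native structure.
def decoy_depth(noise):
    return true_depth + rng.normal(0, noise, n_res)

scores = {noise: spearman(sensitivity, decoy_depth(noise))
          for noise in (0.5, 3.0, 10.0)}
print(scores)   # near-native decoys score highest
```

Ranking a decoy pool by such a score is, in spirit, how the RankScore-based discrimination separates near-native models from the rest.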
Directory of Open Access Journals (Sweden)
Robert Moss
Full Text Available Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a "virtual population" from which "virtual individuals" can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the "virtual individuals" that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models.
Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S Randall
2012-01-01
Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a "virtual population" from which "virtual individuals" can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the "virtual individuals" that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models.
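The kind of systematic multi-parameter analysis described can be sketched with crude first-order sensitivity indices estimated by binning, where the unexplained remainder of the output variance signals parameter interactions; the three-parameter toy model below is invented and is not the Guyton model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for a multi-parameter physiology model: p1 and p2 interact.
def model(p):
    p1, p2, p3 = p.T
    return p1 + p1 * p2 + 0.1 * p3

n = 200_000
P = rng.uniform(0, 1, (n, 3))
y = model(P)

def first_order_index(x, y, bins=50):
    """Crude first-order sensitivity: Var(E[y|x]) / Var(y) via binning."""
    idx = np.digitize(x, np.linspace(0, 1, bins + 1)[1:-1])
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return float(np.average((means - y.mean()) ** 2, weights=counts) / y.var())

S = [first_order_index(P[:, i], y) for i in range(3)]
print([round(s, 2) for s in S], round(1 - sum(S), 2))  # leftover ≈ interactions
```

Selecting sampled parameter sets whose outputs resemble a given patient's measurements is essentially how a "virtual individual" would be drawn from such a Monte Carlo ensemble.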
Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality
Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda
2016-01-01
The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102
Robles Martínez, Ángel; Ruano García, María Victoria; Ribes Bertomeu, José; SECO TORRECILLAS, AURORA; FERRER, J.
2014-01-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic...
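The screening idea can be sketched with the basic Morris elementary-effects method (the paper uses a revised version of the method, which is not reproduced here); the three-factor toy model is invented, not the AnMBR filtration model:

```python
import numpy as np

rng = np.random.default_rng(8)

def model(x):
    """Toy stand-in for the filtration model (3 factors in [0, 1])."""
    return 5 * x[0] + x[1] ** 2 + 0.01 * x[2]

def morris(model, k=3, r=30, delta=0.25):
    """Basic Morris screening: r random one-at-a-time trajectories,
    returning mu* (mean absolute elementary effect) per factor."""
    effects = np.zeros((r, k))
    for t in range(r):
        # Random start on a grid that leaves room for a +delta step.
        x = rng.choice(np.arange(0, 1.0 - delta + 1e-9, delta), size=k)
        y = model(x)
        for i in rng.permutation(k):      # perturb factors one at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[t, i] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(effects).mean(axis=0)   # mu*: overall influence

mu_star = morris(model)
print(np.round(mu_star, 2))   # factor 0 dominates, factor 2 is negligible
```

Factors with small mu* are the "less- (or non-) influential" ones that can be fixed during calibration, which is exactly the screening purpose described in the abstract.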
DEFF Research Database (Denmark)
Mørkholt, Jakob
1997-01-01
Optimal feedback control of broadband sound radiation from a rectangular baffled panel has been investigated through computer simulations. Special emphasis has been put on the sensitivity of the optimal feedback control to